
• #### Numerical Smoothing with Multilevel Monte Carlo for Efficient Option Pricing and Density Estimation

(2020-08-12) [Presentation]
When approximating the expectation of a functional of a certain stochastic process, the robustness and performance of the multilevel Monte Carlo (MLMC) method may be highly deteriorated by the low regularity of the integrand with respect to the input parameters. To overcome this issue, a smoothing procedure is needed to uncover the available regularity and improve the performance of the MLMC estimator. In this work, we consider cases where we cannot perform an analytic smoothing. Thus, we introduce a novel numerical smoothing technique based on root-finding combined with one-dimensional integration with respect to a single well-chosen variable. Our study is motivated by option pricing problems, and our main focus is on dynamics where a discretization of the asset price is needed. Through our analysis and numerical experiments, we demonstrate how numerical smoothing significantly reduces the kurtosis at the deep levels of MLMC and also improves the strong convergence rate when using the Euler scheme. By the complexity theorem of MLMC, and given a pre-selected tolerance $\text{TOL}$, this results in an improvement of the complexity from $\mathcal{O}\left(\text{TOL}^{-2.5}\right)$ in the standard case to $\mathcal{O}\left(\text{TOL}^{-2} \log(\text{TOL})^2\right)$. Moreover, we show how our numerical smoothing combined with MLMC also enables us to estimate density functions, which standard MLMC (without smoothing) fails to achieve.
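The root-finding-plus-1D-integration step can be sketched on a toy one-factor payoff (a minimal illustration with hypothetical names, assuming a Bachelier-type payoff linear in a single Gaussian factor; it is not the full MLMC construction described in the abstract):

```python
# Toy numerical smoothing: locate the payoff kink by root-finding, then
# integrate smoothly on each side of it with Gauss-Legendre quadrature.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def smoothed_expectation(h, y_lo=-8.0, y_hi=8.0, n=64):
    """E[max(h(Y), 0)] for Y ~ N(0, 1): h is smooth and monotone, so the kink
    of max(h, 0) is the root of h; each sub-interval is then a smooth integrand."""
    y_star = brentq(h, y_lo, y_hi)                     # kink location
    nodes, weights = np.polynomial.legendre.leggauss(n)
    total = 0.0
    for a, b in ((y_lo, y_star), (y_star, y_hi)):      # piecewise-smooth pieces
        y = 0.5 * (b - a) * nodes + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.sum(weights * np.maximum(h(y), 0.0) * norm.pdf(y))
    return total

# Example: E[max(S0 + sigma * Y - K, 0)] has its kink at Y = (K - S0) / sigma.
S0, K, sigma = 100.0, 105.0, 10.0
price = smoothed_expectation(lambda y: S0 + sigma * y - K)
```

In the setting of the abstract, the single well-chosen variable plays the role of `Y` above, and the remaining (now smooth) integration over the other random factors is what MLMC handles over the discretized dynamics.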
• #### Importance sampling for a robust and efficient multilevel Monte Carlo estimator

(2020-08-11) [Presentation]
The multilevel Monte Carlo (MLMC) method for continuous time Markov chains, first introduced by Anderson and Higham (SIAM Multiscale Model. Simul. 10(1), 2012), is a highly efficient simulation technique that can be used to estimate various statistical quantities for stochastic reaction networks (SRNs), and in particular for stochastic biological systems. Unfortunately, the robustness and performance of the multilevel method can deteriorate due to the phenomenon of high kurtosis, observed at the deep levels of MLMC, which leads to inaccurate estimates of the sample variance. In this work, we address cases where the high kurtosis phenomenon is due to catastrophic coupling (characteristic of pure jump processes, where the coupled consecutive paths are identical in most of the simulations, while differences appear only in a very small proportion), and introduce a pathwise-dependent importance sampling technique that improves the robustness and efficiency of the multilevel method. Our analysis, along with the conducted numerical experiments, demonstrates that our proposed method significantly reduces the kurtosis at the deep levels of MLMC and also improves the strong convergence rate. Due to the complexity theorem of MLMC, and given a pre-selected tolerance TOL, this results in an improvement of the complexity from O(TOL^{-2} (log(TOL))^2) in the standard case to O(TOL^{-2}), which is the optimal complexity of the MLMC estimator. We achieve all these improvements at a negligible additional cost, since our IS algorithm is applied only a few times across each simulated path.
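The catastrophic-coupling pathology itself is easy to reproduce numerically (a synthetic sketch with assumed numbers, not the SRN coupling or the importance sampling of the abstract): when the coupled fine/coarse paths agree on all but a tiny fraction p of samples, the kurtosis of the level differences scales like 1/p.

```python
# Synthetic catastrophic coupling: the level difference Y_l = g(fine) - g(coarse)
# is exactly zero on most paths, so its sample kurtosis blows up like ~1/p.
import numpy as np

def sample_kurtosis(y):
    y = np.asarray(y, dtype=float)
    c = y - y.mean()
    return np.mean(c ** 4) / np.mean(c ** 2) ** 2

rng = np.random.default_rng(0)
n, p = 200_000, 1e-3                  # p: fraction of paths where the pair differs
diffs = np.where(rng.random(n) < p, rng.standard_normal(n), 0.0)
kurt = sample_kurtosis(diffs)         # for this mixture, the kurtosis is ~3/p = 3000
```

A kurtosis this large makes the usual sample-variance estimate at deep MLMC levels unreliable, which is exactly the failure mode the pathwise importance sampling is designed to remove.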
• #### Hierarchical Approximation Methods for Option Pricing and Stochastic Reaction Networks

(2020-07-02) [Presentation]
In biochemically reactive systems with small copy numbers of one or more reactant molecules, stochastic effects dominate the dynamics. In the first part of this thesis, we design novel efficient simulation techniques for reliable and robust estimation of various statistical quantities for stochastic biological and chemical systems under the framework of Stochastic Reaction Networks (SRNs). In systems characterized by having simultaneously fast and slow timescales, existing discrete state-space stochastic path simulation methods can be very slow. In the first work in this part, we propose a novel hybrid multilevel Monte Carlo (MLMC) method, which uses a novel split-step implicit tau-leap scheme at the coarse levels, where the explicit scheme is not applicable due to numerical instability issues. Our analysis illustrates the advantage of our proposed method over MLMC combined with the explicit scheme. In the second work in this part, we address another challenge present in this context, the high kurtosis phenomenon. We address cases where the high kurtosis observed for the MLMC estimator is due to catastrophic coupling. We propose a novel method that provides a more reliable and robust multilevel estimator. Our approach combines the MLMC method with a pathwise-dependent importance sampling technique for simulating the coupled paths. Through our theoretical estimates and numerical analysis, we show that our approach not only improves the robustness of the multilevel estimator by significantly reducing the kurtosis, but also improves the strong convergence rate and, consequently, the complexity rate of the MLMC method. We achieve all these improvements at a negligible additional cost. In the second part of this thesis, we design novel numerical methods for pricing financial derivatives. 
Option pricing can be formulated as an integration problem, which is usually challenging due to a combination of two complications: 1) the high dimensionality of the input space, and 2) the low regularity of the integrand with respect to the input parameters. We address these challenges by using different techniques for smoothing the integrand to uncover the available regularity and improve the convergence behavior of quadrature methods. We develop different ways of smoothing that depend on the characteristics of the problem at hand. Then, we approximate the resulting integrals using hierarchical quadrature methods. In the first work in this part, we design a fast method for pricing European options under the rough Bergomi model. This model exhibits several numerical and theoretical challenges; as a consequence, classical numerical methods for pricing become either inapplicable or computationally expensive. In our approach, we first smooth the integrand analytically and then use quadrature methods. These quadrature methods are coupled with a Brownian bridge construction and Richardson extrapolation to approximate the resulting integral. Numerical examples with different parameter constellations demonstrate the performance of our novel methodology. Indeed, our hierarchical methods achieve substantial computational gains compared to the MC method, which is the prevalent method in this context. In the second work in this part, we consider cases where we cannot perform an analytic smoothing. Consequently, we perform a numerical smoothing based on root-finding techniques, with a particular focus on cases where a discretization of the asset price dynamics is needed. We illustrate the advantage of our approach, which combines numerical smoothing with adaptive sparse grid quadrature, over the MC approach. Furthermore, we demonstrate that our numerical smoothing procedure improves the robustness and the complexity rate of the MLMC estimator. 
Finally, our numerical smoothing, coupled with MLMC, also enables us to estimate density functions efficiently.
• #### Discrete changes in fault free-face roughness: constraining past earthquake characteristics

(Copernicus GmbH, 2020-03-09) [Presentation]
A driving motivator in many active tectonics studies is to learn more about the recurrence of large and potentially destructive earthquakes, providing the means to assess the respective fault's future seismic behavior. Doing so requires long records of earthquake recurrence. The lack of sufficiently long instrumental seismic records (which would be best suited for this task) has led to the development of other approaches that may constrain the recurrence of surface-rupturing earthquakes along individual faults. These approaches take different forms, depending on the specific tectonic and geographic conditions of an investigated region. For example, around the Mediterranean Sea, we frequently find bedrock scarps along normal faults. Assuming that bedrock (i.e., fault free-face) exposure is caused by the occurrence of subsequent large earthquakes, we may measure certain rock properties to constrain the time and size of past earthquakes as well as the fault's geologic slip rate. A now-classic example in this regard is the measurement of $^{36}$Cl concentrations along exposed fault scarps in limestones. For the presented study, we looked at another property of the exposed fault free-face, namely its morphologic roughness. We aim to identify whether fault free-face roughness contains information to constrain earthquake occurrence and fault slip rates, following the assumption that subsequent exposure to the elements and subaerial erosional conditions may leave a signal in how rough (or smooth) the fault free-face is (assuming a somewhat uniform pre-exposure roughness). Here, we present observations of fault free-face surface roughness for the Mt. Vettore fault (last ruptured in 2016) and the Rocca Preturo fault; the underlying models of fault free-face morphology were generated using the Structure-from-Motion approach and a large suite of unregistered optical images.
Employing different metrics to quantify morphologic roughness, we were indeed able to observe a) an increase in surface roughness with fault scarp height (i.e., longer exposure to subaerial erosion causes higher roughness), and b) distinct (rather than gradual) changes in surface roughness, suggesting a correlation with individual exposure events such as earthquakes. Hence, the fault free-face morphology of bedrock faults may serve as an additional metric to reconstruct earthquake recurrence patterns.
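The band-wise roughness comparison behind observation a) can be sketched as follows (fully synthetic surface residuals and a hypothetical roughness-with-height trend, only to illustrate the metric, not the study's data):

```python
# RMS roughness of detrended free-face residuals, binned by height above the
# scarp base: longer-exposed (higher) bands should come out rougher.
import numpy as np

def rms_roughness_by_band(height, residual, edges):
    return [np.sqrt(np.mean(residual[(height >= lo) & (height < hi)] ** 2))
            for lo, hi in zip(edges[:-1], edges[1:])]

rng = np.random.default_rng(1)
height = rng.uniform(0.0, 6.0, 50_000)          # m above the scarp base
sigma = 0.002 + 0.004 * height                  # assumed: exposure time -> roughness
residual = rng.normal(0.0, sigma)               # detrended surface residuals, m
bands = np.array([0.0, 2.0, 4.0, 6.0])
roughness = rms_roughness_by_band(height, residual, bands)
```

Discrete jumps of this statistic between adjacent bands, rather than a smooth gradient, are what would point to individual exposure events such as earthquakes.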
• #### Particulate Monitoring and Evaluation of Low-Cost Sensor Performance in the Middle East

(Copernicus GmbH, 2020-03-09) [Presentation]
Keywords: air quality; air pollution monitoring; low-cost sensors; reference methods; microsensors; experimental campaign. Abstract: Air quality in the Middle East (ME) is strongly affected by desert dust in addition to anthropogenic pollutants. The health hazards associated with particulate matter (PM) are the most severe in this desert region. Enhanced air quality monitoring is needed to implement abatement strategies and stimulate environmental awareness among citizens. Several techniques are used to monitor PM concentrations. Air quality monitoring stations (AQMS) equipped with certified instrumentation are the most reliable option. However, AQMSs are quite expensive and require regular maintenance. Another option is low-cost sensors (LCS), which are seen as innovative tools for future smart cities. They are cost-effective and increase the spatial coverage of air-quality measurements, as the number of conventional AQMS is generally quite small, so the current density of monitoring stations in the Middle East is low. In this work, we evaluated the PM air-quality climatology in the major cities of Saudi Arabia (Jeddah, Riyadh, and Dammam) over four years, from 2016 to 2020. We used measurement data collected by the Saudi Authority for Industrial Cities and Technology Zones (MODON) using certified reference AQMS installed in the suburban areas of the three major cities. We also tested the performance of five LCS systems for eight months, from May 2019 to January 2020. For this purpose, we set up an AQMS with reference PM instrumentation (based on beta-ray absorption) side-by-side with five different LCS systems (based on light scattering) in the industrial part of Jeddah city. 
We collected, filtered, and validated the PM data, and applied standard measurement and calibration procedures. The AQMS measurements show that in summer, the daily mean PM concentrations exceed the World Health Organization (WHO) limits for PM2.5 and PM10 almost every day in Jeddah, Riyadh, and Dammam. The WHO limits are also frequently violated in the winter months. The AQMS measurements reliably show dust-storm spikes when PM pollution is extremely high, while all the LCSs fail to capture these severe events. We found that the LCS and AQMS PM measurements are poorly correlated in summer but agree somewhat better on fair-weather winter days when humidity and temperature are low; even then, the LCSs cannot capture severe dust events.
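The season-dependent agreement between LCS and reference measurements can be quantified with a plain Pearson correlation; the sketch below uses synthetic readings with assumed noise levels (names and numbers are hypothetical, not MODON data):

```python
# Pearson correlation of collocated low-cost-sensor vs. reference PM readings.
import numpy as np

def pearson_r(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2))

rng = np.random.default_rng(2)
pm_ref = rng.gamma(4.0, 20.0, 500)                       # reference PM10, ug/m3
lcs_summer = 0.2 * pm_ref + rng.normal(0.0, 40.0, 500)   # humid: weak, noisy response
lcs_winter = 0.9 * pm_ref + rng.normal(0.0, 10.0, 500)   # fair weather: closer tracking
r_summer = pearson_r(pm_ref, lcs_summer)
r_winter = pearson_r(pm_ref, lcs_winter)                 # r_winter > r_summer
```

A seasonal split of this kind is one simple way to expose the humidity-driven bias of light-scattering sensors against a beta-attenuation reference.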
• #### Volcanic ash chemical aging from multiple observational constraints for the Pinatubo eruption

(Copernicus GmbH, 2020-03-09) [Presentation]
Condensation of sulfuric acid, formed from the co-injected sulfur dioxide, on volcanic ash particles, so-called chemical aging, increases the particle size and changes their microphysical and optical properties. The larger aged particles have a higher removal rate, which reduces their lifetime. On the other hand, aging increases the scattering cross-section, and therefore the ash optical depth increases with aging. The uptake of sulfuric acid by volcanic ash delays the formation of new sulfate particles depending on the level of aging, which is characterized by the number of sulfuric acid layers coating a single ash particle (i.e., monolayers). Both the formation of sulfate aerosols and sulfuric acid uptake by ash particles affect the development of a volcanic plume and its radiative impact. We employ the ECHAM5/MESSy atmospheric chemistry general circulation model (EMAC) to simulate the chemical aging of volcanic ash in the 1991 Pinatubo eruption volcanic plume. We emit 17 Mt of SO$_{2}$ and 75 Mt of fine ash. Two aerosol modes represent the ash size distribution: accumulation and coarse, with median radii of 0.23 and 3.4 µm, respectively. We allow the sulfuric acid to condense on the ash particles and assume different levels of aging (from not aged to highly aged). We use independent observations of sulfur dioxide, volcanic ash mass, volcanic ash optical depth, and plume coverage area from the Advanced Very-High-Resolution Radiometer (AVHRR), and total optical depth from the Stratospheric Aerosol and Gas Experiment II (SAGE II). We constrain the number of monolayers on ash particles by testing the simulated ash surface area and optical depth, calculated within a fully coupled online stratospheric-tropospheric chemistry model, against observations. The level of volcanic ash aging strongly affects the surface area of the volcanic ash plume, which ranges from 3x10$^{6}$ km$^{2}$ to 6x10$^{6}$ km$^{2}$, compared to 3.8x10$^{6}$ km$^{2}$ from AVHRR retrievals. 
The volcanic ash optical depth, averaged over the volcanic plume area, ranges between 2 and 3.6. Using a five-monolayer coating assumption allows us to better reproduce the observed SO$_{2}$ mass, its decay rate, the total plume surface area, and the ash optical depth. Most of the coarse ash particles are removed within a week after the eruption, reducing the amount of sulfuric acid within the volcanic plume. The smaller particles have a much longer residence time and continue to take up sulfuric acid for more than three months.
• #### Interaction of the Vertical Profile of Dust Aerosols with Land/sea Breezes over the Eastern Coast of the Red Sea from LIDAR and High-resolution WRF-Chem Simulations

(Copernicus GmbH, 2020-03-09) [Presentation]
With the advances in modeling approaches and the application of satellite and ground-based data in dust-related research, our understanding of the dust cycle has improved significantly in recent decades. However, two aspects of the dust cycle, the vertical profiles and diurnal cycles of dust aerosols, are not yet adequately understood, mainly due to the sparsity of observations. A micro-pulse LIDAR has been operating at the King Abdullah University of Science and Technology (KAUST) campus on the east coast of the Red Sea (22.3N, 39.1E), measuring backscattering from atmospheric aerosols at high temporal resolution for several years since 2015. It is the only operating LIDAR system over the Arabian Peninsula. We use this LIDAR data together with other collocated observations and high-resolution WRF-Chem model simulations to study the 3D structure of aerosols, with a focus on dust over the Red Sea Arabian coastal plains.
Firstly, we investigate the vertical profiles of aerosol extinction and concentration in terms of their seasonal and diurnal variability. Secondly, using the hourly model output and observations, we study the diurnal cycle of aerosols over the site. Thirdly, we explore the interactions between dust aerosols and land/sea breezes, which are critical components of the local diurnal circulation in the region.
We found substantial variation in the vertical profile of aerosols across seasons. There is also a marked difference between the daytime and nighttime vertical distributions of aerosols at the study site, as shown by the LIDAR data. A prominent dust layer is observed at ~5-7 km at night in the LIDAR data, corresponding to long-range-transported dust of non-local origin. The vertical profiles of aerosol extinction are consistently reproduced in the LIDAR, MERRA-2 reanalysis, and CALIOP data, as well as in the WRF-Chem simulations, in all seasons. 
Our results show that the sea breezes are much deeper (~1 km) than the land breezes (~200 m), and both prominently affect the distribution of dust aerosols over the study site. Sea breezes mainly trap near the coast the dust brought by the northeasterly trade winds from inland deserts, causing elevated dust maxima at a height of ~1.5 km. Sea and land breezes also intensify dust emissions from the coastal region in the daytime and nighttime, respectively. Such breeze-driven dust emissions are most active in spring and winter. Finally, WRF-Chem qualitatively captures the onset, demise, and height of some large-scale dust events, as compared to the LIDAR data.
• #### Assessment of Air Pollution in the Middle East Using Reanalyses Products and High-resolution WRF-Chem Simulations

(Copernicus GmbH, 2020-03-09) [Presentation]
The Middle East is notorious for high air pollution that affects both air quality and regional climate. The Middle East generates about 30% of the world's dust annually and emits about 10% of anthropogenic SO$_{2}$. In this study, we use the Modern-Era Retrospective analysis for Research and Applications v.2 (MERRA-2) and Copernicus Atmosphere Monitoring Service Operational Analysis (CAMS-OA) data assimilation products, and a regional Weather Research and Forecasting model (10 km resolution) coupled with Chemistry (WRF-Chem), to evaluate natural and anthropogenic air pollution in the ME. The SO$_{2}$ anthropogenic emissions used in WRF-Chem are updated using the independent satellite SO$_{2}$ emission dataset obtained from Ozone Monitoring Instrument (OMI) observations onboard the NASA EOS Aura satellite. Satellite and ground-based aerosol optical depth (AOD) observations, as well as particulate matter (PM) and SO$_{2}$ in situ measurements for 2015-2016, were used for validation and model evaluation. Although the aerosol fields in regional WRF-Chem and the global assimilation products are quite consistent, WRF-Chem, due to its higher spatial resolution and novel OMI SO$_{2}$ emissions, is preferable for the analysis of regional air quality over the ME. We found that conventional emission inventories (EDGAR-4.2, MACCity, and HTAP-2.2) have uncertainties in the location and magnitude of SO$_{2}$ sources in the ME and significantly underestimate SO$_{2}$ emissions in the Arabian Gulf. The CAMS reanalysis tends to overestimate PM$_{2.5}$ and underestimate PM$_{10}$ concentrations. In the coastal areas, MERRA-2 underestimates sulfate and tends to overestimate sea salt concentrations. WRF-Chem's PM background concentrations exceed the World Health Organization (WHO) guidelines over the entire ME. The major contributor to PM (~75-95%) is mineral dust. In the ME urban centers and near oil recovery fields, non-dust aerosols (primarily sulfate) contribute up to 26% of PM$_{2.5}$. 
The contribution of sea salt to PM can reach up to 5%. The contribution of organic matter to PM prevails over that of black carbon. SO$_{2}$ surface concentrations in major ME cities frequently exceed European air-quality limits.
• #### ENSO sensitivity to radiative forcing

(Copernicus GmbH, 2020-03-09) [Presentation]
To improve El Niño / Southern Oscillation (ENSO) predictions and projections in a changing climate, it is essential to better understand ENSO's sensitivities to external radiative forcings. Strong volcanic eruptions can help to clarify ENSO's sensitivities, mechanisms, and feedbacks. Strong explosive volcanic eruptions inject millions of tons of sulfur dioxide into the stratosphere, where it is converted into sulfate aerosols. For equatorial volcanoes, these aerosols can spread globally, scattering and absorbing incoming sunlight and inducing a global-mean surface cooling. Despite this global-mean cooling effect, paleo data confirm remarkable warming of the eastern equatorial Pacific in the two years after a tropical eruption, with a shift towards an El Niño-like state. To illuminate this response and explain why it tends to occur during particular seasons and ENSO phases, we present a unified framework that includes the roles of the seasonal cycle, stochastic wind forcing, eruption magnitude, and various tropical Pacific climate feedbacks. Analyzing over 20,000 years of large-ensemble simulations from the GFDL-CM2.1 climate model forced by volcanic eruptions, we find that the ENSO response comprises both stochastic and deterministic components, which vary depending on the perturbation season and the ocean preconditioning. For boreal winter eruptions, stochastic dispersion largely obscures the deterministic response and is largest for strong El Niño preconditioning. Deterministic El Niño-like responses to summer eruptions are clearly seen for neutral-ENSO and weak-to-moderate El Niño preconditioning, and they grow with the eruption magnitude. The relative balance of these components determines the predictability and strength of the ENSO response. These results clarify why previous studies obtained seemingly conflicting results.
• #### Rayleigh wave ellipticity measurements in the North Tanzanian Divergence (Eastern African Rift)

(Copernicus GmbH, 2020-03-09) [Presentation]
Rayleigh wave ellipticity depends, in theory, only on the Earth structure below a seismic station, offering the advantage of a "single-station" method to infer crustal properties. Therefore, ellipticity measurements can be used to construct pseudo-3D shear velocity models of the Earth structure, even using seismic stations that did not record simultaneously. Based on that, we carried out ellipticity measurements using teleseismic waveforms recorded by the OPS seismic network we deployed at the western flank of the North Tanzanian Divergence between June 2016 and May 2018, covering 17 sites. We then expanded our measurements to the waveforms recorded by the adjacent CRAFTI seismic network between January 2013 and December 2014, available on IRIS, which comprised more than 30 sites. While the OPS network covers the transition between the Tanzania Craton and the North Tanzanian Divergence, the CRAFTI network is entirely contained in the North Tanzanian Divergence. Therefore, the imaging obtained by integrating the two asynchronous passive seismology experiments will help to better understand the dynamics of this segment of the eastern branch of the Eastern African Rift. Preliminary results show heterogeneous structures that are in agreement with previous tomographic studies based on ambient noise cross-correlation and body-wave arrival times. In regions where previous seismological studies are not available, the results match the known geological structure of the transition between the Tanzanian Craton and the North Tanzanian Divergence. This demonstrates that ellipticity measurements can be a useful and integrative tool for Earth structure imaging, especially at the edges of active rifts where seismicity is scarce.
• #### Test of Dust Emission Over the Middle East

(Copernicus GmbH, 2020-03-09) [Presentation]
Abstract: The dust emission simulated within up-to-date global and regional models differs by almost an order of magnitude. The models are tuned to reproduce the observed aerosol optical depth (AOD), which, with some caveats, reflects the dust mass retained in the atmosphere. However, the amount of dust suspended in the atmosphere is controlled independently by dust emission and deposition; therefore, AOD observations alone are insufficient to constrain both processes. To calculate the dust emission over the Middle East (ME), in this study we employ dust deposition observations, AERONET AOD, micro-pulse lidar, and satellite observations to constrain WRF-Chem simulations. The dust deposition was measured on a monthly basis for 2015-2019 using passive samplers at six sites over land and sea. We compare the WRF-Chem simulations, conducted with 10-km grid spacing, with the recent MERRA-2 and CAMS reanalyses. WRF-Chem is configured with the GOCART dust scheme. We calculate the meteorological and aerosol initial and boundary conditions using the MERRA-2 reanalysis. We evaluated the dust regional mass balance controlled by emission, deposition, and cross-boundary transport. The smallest dust particles are transported over vast distances, while the heaviest ones deposit within the domain. Since the model accounts for dust particles with radii
• #### Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model

(2020-02-27) [Presentation]
The rough Bergomi (rBergomi) model, introduced recently in (Bayer, Friz, Gatheral, 2016), is a promising rough volatility model in quantitative finance. It is a parsimonious model depending on only three parameters, yet it fits empirical implied volatility surfaces remarkably well. In the absence of analytical European option pricing methods for the model, and due to the non-Markovian nature of the fractional driver, the prevalent option is to use Monte Carlo (MC) simulation for pricing. Despite recent advances in the MC method in this context, pricing under the rBergomi model remains a time-consuming task. To overcome this issue, we have designed a novel hierarchical approach based on i) adaptive sparse grid quadrature (ASGQ) and ii) quasi-Monte Carlo (QMC). Both techniques are coupled with a Brownian bridge construction and a Richardson extrapolation on the weak error. By uncovering the available regularity, our hierarchical methods demonstrate substantial computational gains with respect to the standard MC method when reaching a sufficiently small relative error tolerance in the price estimates across different parameter constellations, even for very small values of the Hurst parameter. Our work opens a new research direction in this field, namely, to investigate the performance of methods other than Monte Carlo for pricing and calibrating under the rBergomi model.
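The QMC leg of the approach can be illustrated on a toy Black-Scholes call (standard machinery only; the rBergomi covariance construction and the Brownian bridge/Richardson layers are omitted): scrambled Sobol points replace pseudo-random normals.

```python
# Randomized QMC pricing of a European call under toy Black-Scholes dynamics.
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T = 100.0, 100.0, 0.0, 0.2, 1.0

def qmc_call_price(m=14, seed=0):
    u = qmc.Sobol(d=1, scramble=True, seed=seed).random_base2(m)  # 2^m points
    z = norm.ppf(u[:, 0])                                         # uniforms -> normals
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

# Closed-form Black-Scholes reference for the same call.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
qmc_price = qmc_call_price()
```

For a discretized path of N steps the Sobol dimension becomes N, which is where a Brownian bridge construction pays off by concentrating the variance in the first few QMC coordinates.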
• #### Numerical smoothing and hierarchical approximations for efficient option pricing and density estimation

(2020-02-25) [Presentation]
When approximating expectations of a certain quantity of interest (QoI), the efficiency and performance of deterministic quadrature methods, such as sparse grids, and hierarchical variance reduction methods, such as multilevel Monte Carlo (MLMC), may be highly deteriorated, in different ways, by the low regularity of the QoI with respect to the input parameters. To overcome this issue, a smoothing procedure is needed to uncover the available regularity and improve the performance of the aforementioned numerical methods. In this work, we consider cases where we cannot perform an analytic smoothing and introduce a novel numerical smoothing technique based on root-finding combined with one-dimensional integration with respect to a single well-chosen variable. We prove that under appropriate conditions the resulting function of the remaining variables is highly smooth, thus potentially allowing a higher efficiency of adaptive sparse grid quadrature (ASGQ), in particular when combined with hierarchical transformations (Brownian bridge and Richardson extrapolation on the weak error) to treat the high dimensionality effectively. Our study is motivated by option pricing problems, and our main focus is on dynamics where a discretization of the asset price is needed. Through our analysis and numerical experiments, we illustrate the advantage of combining numerical smoothing with ASGQ over the Monte Carlo (MC) approach. Furthermore, we demonstrate how the numerical smoothing significantly reduces the kurtosis at the deep levels of MLMC and also improves the strong convergence rate when using the Euler scheme, from 1/2 in the standard case (without smoothing) to 1. Due to the complexity theorem of MLMC, and given a pre-selected tolerance TOL, this results in an improvement of the complexity from O(TOL^{-2.5}) in the standard case to O(TOL^{-2}(log(TOL))^2). 
Finally, we show how our numerical smoothing combined with MLMC also enables us to estimate density functions, a situation where standard MLMC fails.
• #### Multilevel hybrid split-step implicit tau-leap for stochastic reaction networks

(2020-02-21) [Presentation]
In biochemically reactive systems with small copy numbers of one or more reactant molecules, the dynamics are dominated by stochastic effects. To approximate those systems, discrete state-space and stochastic simulation approaches have been shown to be more relevant than continuous state-space and deterministic ones. These stochastic models constitute the theory of Stochastic Reaction Networks (SRNs). In systems characterized by having simultaneously fast and slow timescales, existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap (explicit-TL) method, can be very slow. In this talk, we propose a novel implicit scheme, split-step implicit tau-leap (SSI-TL), to improve numerical stability and provide efficient simulation algorithms for those systems. Furthermore, to estimate statistical quantities related to SRNs, we propose a novel hybrid multilevel Monte Carlo (MLMC) estimator in the spirit of the work by Anderson and Higham (SIAM Multiscale Model. Simul. 10(1), 2012). This estimator uses the SSI-TL scheme at levels where the explicit-TL method is not applicable due to numerical stability issues, and then, starting from a certain interface level, it switches to the explicit scheme. We present numerical examples that illustrate the gains achieved by our proposed approach in this context.
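The explicit-TL baseline that the SSI-TL scheme extends can be sketched on a toy pure-decay network X -> 0 with rate c (an illustrative example, not the stiff multi-scale systems considered in the talk):

```python
# Explicit tau-leap for the decay network X -> 0: per step, fire a Poisson
# number of reactions with mean a(x) * tau, where the propensity is a(x) = c * x.
import numpy as np

def explicit_tau_leap(x0, c, T, n_steps, rng):
    x, tau = x0, T / n_steps
    for _ in range(n_steps):
        x = max(x - rng.poisson(c * x * tau), 0)   # clip at zero copy number
    return x

rng = np.random.default_rng(3)
x0, c, T = 1000, 1.0, 1.0
samples = [explicit_tau_leap(x0, c, T, 64, rng) for _ in range(2000)]
mean_est = np.mean(samples)    # exact process mean: x0 * exp(-c * T) = 367.9...
```

For stiff systems, stability forces tau (and hence the cost) of the explicit scheme to be tiny; an implicit variant such as SSI-TL removes that restriction at the coarse MLMC levels.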
• #### Importance sampling for a robust and efficient multilevel Monte Carlo estimator for stochastic reaction networks

(2020-02-18) [Presentation]
The multilevel Monte Carlo (MLMC) method for continuous time Markov chains, first introduced by Anderson and Higham (SIAM Multiscale Model. Simul. 10(1), 2012), is a highly efficient simulation technique that can be used to estimate various statistical quantities for stochastic reaction networks (SRNs), and in particular for stochastic biological systems. Unfortunately, the robustness and performance of the multilevel method can deteriorate due to the phenomenon of high kurtosis, observed at the deep levels of MLMC, which leads to inaccurate estimates of the sample variance. In this work, we address cases where the high kurtosis phenomenon is due to catastrophic coupling (characteristic of pure jump processes, where the coupled consecutive paths are identical in most of the simulations, while differences appear only in a very small proportion), and introduce a pathwise-dependent importance sampling technique that improves the robustness and efficiency of the multilevel method. Our analysis, along with the conducted numerical experiments, demonstrates that our proposed method significantly reduces the kurtosis at the deep levels of MLMC and also improves the strong convergence rate. Due to the complexity theorem of MLMC, and given a pre-selected tolerance TOL, this results in an improvement of the complexity from O(TOL^{-2} (log(TOL))^2) in the standard case to O(TOL^{-2}).
• #### Computation of Electromagnetic Fields Scattered From Dielectric Objects of Uncertain Shapes Using MLMC

(2020) [Presentation]
• #### An MPI-based Parallel Multiphysics Discontinuous Galerkin Framework for Photoconductive Devices

(2019-12-20) [Presentation]
The proper functioning of photoconductive devices (e.g., photoconductive antennas and photomixers) relies on the nonlinear interactions between the electromagnetic (EM) fields/waves induced on the whole device and the carriers inside the semiconductor region [1-2]. Therefore, simulators developed for the numerical characterization of these devices should be capable of accurately modeling EM field interactions and carrier dynamics individually, as well as the (nonlinear) coupling between them [3-4]. The characteristic scales of EM fields and carriers differ by several orders of magnitude in space and time, which makes the simulation computationally very challenging. Additionally, the use of geometrically intricate nanostructures, which increase the optical-to-terahertz (THz) conversion efficiency by focusing optical EM fields, has further increased the computational requirements of the simulator being developed. In this talk, we present an MPI-based parallel multiphysics discontinuous Galerkin (DG) framework for the analysis of photoconductive devices. This framework couples the Maxwell and drift-diffusion equations and discretizes them using DG schemes. The resulting (coupled) semi-discrete system is integrated in time using an explicit time-marching scheme to yield the sampled values of the EM field and the carrier density. The two solvers for the two sets of equations are MPI-parallelized within the same object-oriented framework. The device is discretized with a single mesh, while different (geometric) partitions of the mesh are used in the individual solvers. Several considerations for improving the computational efficiency of the parallel DG framework are discussed next. First, the carriers respond more slowly than the EM fields vary; consequently, different time integration schemes and different time step sizes are used by the two solvers.
Second, because the two solvers have different computation domains, different workload weights are assigned to different partitions of the mesh to obtain higher parallel efficiency and better scalability. Third, the photoconductive antenna simulation requires modeling of both optical- and THz-frequency EM waves. Direct simulation of the THz-frequency EM radiation of the antenna using the mesh size and time step size required by the optical-frequency EM wave is prohibitively expensive. A two-step simulation procedure is used to alleviate this problem. First, the transient photocurrent is obtained from the interaction between the optical pump and the semiconductor device using the multiphysics DG framework. Then this current is fed into the THz antenna, and the THz radiation is modeled using an EM-only DG scheme as a post-processing step.
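The first efficiency point, running the two solvers at different time step sizes, can be sketched as a two-rate loop in which the fast solver takes several substeps per step of the slow one. The function names and the toy right-hand sides below are illustrative only; the actual framework uses DG-discretized Maxwell and drift-diffusion operators:

```python
def multirate_step(u_em, u_c, dt_slow, m, em_rhs, carrier_rhs):
    """One macro step of a two-rate scheme: the fast (EM) solver takes m
    forward-Euler substeps per step of the slow (carrier) solver; the
    carrier update then sees the latest field values."""
    dt_fast = dt_slow / m
    for _ in range(m):
        u_em = u_em + dt_fast * em_rhs(u_em, u_c)
    u_c = u_c + dt_slow * carrier_rhs(u_em, u_c)
    return u_em, u_c

# Toy coupled system: a fast-decaying field driving a slow relaxation.
u_em, u_c = 1.0, 0.0
for _ in range(100):
    u_em, u_c = multirate_step(u_em, u_c, dt_slow=0.01, m=20,
                               em_rhs=lambda e, c: -50.0 * e,
                               carrier_rhs=lambda e, c: e - c)
```

The payoff is that the expensive fast operator is never forced to run at the slow solver's step size, and vice versa, which matters when the scales differ by orders of magnitude.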
• #### An Efficient Implementation of Perfectly Matched Layers within a High-order Discontinuous Galerkin Time Domain Method

(2019-12-19) [Presentation]
The perfectly matched layer (PML) is an easy-to-implement and accurate technique for domain truncation in wave propagation and scattering simulations carried out with differential-equation-based Maxwell solvers. The most popular solvers include finite difference methods, finite element methods, and the more recently developed pseudo-spectral time-domain and discontinuous Galerkin time-domain (DGTD) methods. To achieve high accuracy in the solution (i.e., to reduce the reflection back into the computation domain as much as possible), the PML ideally requires a high conductivity value and/or a large thickness. In practice, however, one cannot increase the thickness too much, since that raises the computational cost, nor use a constant high conductivity, since that increases the numerical reflection at the interface between the computation domain and the PML. Therefore, a smoothly increasing conductivity profile is often used for increased accuracy. In DGTD, the smooth conductivity profile can be implemented in two ways. The first method accounts for the spatial variation of the conductivity inside each element and thus gives a high-order accurate approximation of the material property. However, this results in a different mass matrix for each element and significantly increases the memory requirement, since every elemental mass matrix (or its inverse) has to be stored individually. For example, for the stretched-coordinate PML [1], the memory cost of the elemental mass matrices is 3 × 5 × N_p × N_p, where N_p is the number of interpolating nodes in each element, 5 comes from the coefficients in front of the field and the auxiliary variables, and 3 comes from the field components in the Cartesian coordinate system. The second method assumes that the conductivity in each element is constant. This significantly reduces the memory cost (there is no need to store individual mass matrices) and simplifies the implementation of the DGTD.
However, it introduces material discontinuities between neighboring elements, which eventually lead to large reflections that destroy the high-order convergence of the error in the solution. Thus, to reach a level of accuracy similar to that of the first method, the conductivity jump between neighboring elements has to be reduced, and therefore more PML layers (and elements) are required. In this work, we present an efficient implementation of the stretched-coordinate PML within the nodal DGTD method. This implementation accounts for the local variation of the conductivity in each element (as in the first method above) and approximates the local mass matrix with a weight-adjusted approximation (WAA) [2]. With WAA, only 15 × N_q additional values have to be stored per element, where N_q is the number of quadrature points, which is close to N_p. The PML performance is the same as with the first method: the reflection decreases exponentially with increasing interpolation polynomial order, i.e., it shows high-order convergence.
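The two ingredients of the abstract are easy to make concrete: a smoothly graded conductivity profile, and the per-element storage counts it quotes. The sketch below is ours; the cubic grading exponent and the example node counts are illustrative assumptions, while the 3 × 5 × N_p × N_p and 15 × N_q counts come from the abstract:

```python
import numpy as np

def graded_sigma(x, d, sigma_max, m=3):
    """Polynomial PML conductivity grading sigma(x) = sigma_max * (x / d)**m,
    rising smoothly from 0 at the PML interface (x = 0) to sigma_max at depth d."""
    return sigma_max * (np.asarray(x, dtype=float) / d) ** m

def pml_storage(Np, Nq):
    """Per-element float counts quoted in the abstract: full elemental mass
    matrices for the stretched-coordinate PML vs. the WAA variant."""
    full = 3 * 5 * Np * Np  # 3 field components x 5 coefficients x Np x Np matrix
    waa = 15 * Nq           # 15 weights sampled at Nq quadrature points
    return full, waa

# e.g. order-3 tetrahedral elements: Np = 20 nodes, Nq of comparable size
full, waa = pml_storage(Np=20, Nq=23)
```

Even for this modest polynomial order the WAA storage is over an order of magnitude smaller, and the gap widens quadratically in N_p.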
• #### Directionality Fields Generated by a Local Hilbert Transform in Optics

(IEEE, 2019-10-17) [Presentation]
The conventional Hilbert transform in time (the Kramers-Kronig relation) stems from causality, in other words from time asymmetry, i.e., from the arrow of time. The same Hilbert transform applied in space analogously breaks the spatial symmetry and can introduce unidirectional invisibility, or unidirectional coupling or scattering of fields [1]. We propose a local Hilbert transform to design non-Hermitian optical potentials that generate arbitrary directionality of scattering, with desired shapes and topologies [2]. We derive a local Hilbert transform to systematically build such non-Hermitian potentials in two-dimensional space by modifying the background potentials, i.e., the refractive index profiles, which may be regular or random, extended or localized. In particular, we explore specific directionality fields, for instance in the form of a focus that creates sinks for probe fields, which could help increase absorption at the sink (see Fig. 1, left part), or, as another example, fields that generate vortices or currents in the probe fields. Physically, the proposed directionality-field generation provides a flexible new mechanism for dynamically shaping and precisely controlling probe fields, leading to novel effects in wave dynamics [3].
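The global spatial Hilbert transform that the local construction generalizes can be sketched via the FFT: adding i H[n] to a real index modulation n(x) one-sides its spatial spectrum, which is the mechanism behind unidirectional coupling. This is our minimal 1D illustration, not the paper's 2D local transform:

```python
import numpy as np

def analytic_signal(f):
    """f + i H[f] via the FFT: zero the negative spatial frequencies
    (assumes an even number of samples on a periodic grid)."""
    N = len(f)
    F = np.fft.fft(f)
    h = np.zeros(N)
    h[0] = 1.0          # keep the DC component
    h[1:N // 2] = 2.0   # double the positive frequencies
    h[N // 2] = 1.0     # keep the Nyquist component
    return np.fft.ifft(F * h)

# A real index modulation cos(5x) becomes exp(i 5x): the complex potential
# has a one-sided spatial spectrum, hence couples waves in one direction only.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
potential = analytic_signal(np.cos(5 * x))
```

The local variant in the talk applies this construction windowed around each point, so the direction of the resulting one-sided coupling can vary across the sample.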
• #### Hierarchical adaptive sparse grids and quasi Monte Carlo for option pricing under the rough Bergomi model

(2019-07-12) [Presentation]
The rough Bergomi (rBergomi) model, introduced recently in [4], is a promising rough volatility model in quantitative finance. It is a parsimonious model depending on only three parameters, and yet it exhibits a remarkably good fit to empirical implied volatility surfaces. In the absence of analytical European option pricing methods for the model, and due to the non-Markovian nature of the fractional driver, the prevalent option is to use Monte Carlo (MC) simulation for pricing. Despite recent advances in the MC method in this context, pricing under the rBergomi model is still a time-consuming task. To overcome this issue, we design a novel, alternative, hierarchical approach based on i) adaptive sparse-grid quadrature (ASGQ) and ii) quasi Monte Carlo (QMC). Both techniques are coupled with Brownian bridge construction and Richardson extrapolation. By uncovering the available regularity, our hierarchical methods demonstrate substantial computational gains with respect to the standard MC method when reaching a sufficiently small relative error tolerance in the price estimates, across different parameter constellations and even for very small values of the Hurst parameter. Our work opens a new research direction in this field, namely, investigating the performance of methods other than Monte Carlo for pricing and calibrating under the rBergomi model.
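The Brownian bridge construction both techniques are coupled with can be sketched directly: it maps standard normals to a Brownian path so that the first coordinate fixes the terminal value and later coordinates only refine the path, concentrating the variance in the leading dimensions that ASGQ and QMC handle best. This is a generic sketch under the assumption of a power-of-two grid, not the paper's rBergomi-specific pipeline:

```python
import numpy as np

def brownian_bridge_path(Z, T=1.0):
    """Brownian-bridge construction of a Brownian path on [0, T] from N
    standard normals (N a power of two), returning N + 1 path values.
    Z[0] fixes W(T); subsequent components fill in midpoints recursively."""
    N = len(Z)
    W = np.zeros(N + 1)
    W[N] = np.sqrt(T) * Z[0]
    h, idx = N, 1
    while h > 1:
        half = h // 2
        for left in range(0, N, h):
            mid, right = left + half, left + h
            t_l, t_m, t_r = left * T / N, mid * T / N, right * T / N
            # Conditional law of W(t_m) given W(t_l) and W(t_r)
            mean = W[left] + (t_m - t_l) / (t_r - t_l) * (W[right] - W[left])
            var = (t_m - t_l) * (t_r - t_m) / (t_r - t_l)
            W[mid] = mean + np.sqrt(var) * Z[idx]
            idx += 1
        h = half
    return W
```

Setting only Z[0] and zeroing the rest yields the straight line from 0 to W(T), which makes visible why low effective dimension emerges: the remaining coordinates contribute progressively smaller corrections.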