Presentations
Recent Submissions
-
Inverse Modeling of the Initial Stage of the 1991 Pinatubo Volcanic Cloud Accounting for Radiative Feedback of Volcanic Ash(Copernicus GmbH, 2023-02-26) [Presentation]The evolution of volcanic clouds is sensitive to the initial three-dimensional (3D) distributions of volcanic material, which are often unknown. Here, we conduct inverse modeling of the fresh Mt. Pinatubo cloud to estimate the time-dependent emission profiles and initial 3D spatial distributions of volcanic ash and SO2. We account for aerosol radiative feedback and dynamic lofting of volcanic ash, which yields an injection height 1 km lower for ash than that obtained without ash radiative feedback. The solution captures the elevated ash layer between 14 and 24 km and the meridional height gradient during the first two days after the eruption. A significant fraction of the emissions (6 of 16.6 Mt of SO2 and 34 of 64.22 Mt of fine ash) did not reach the stratosphere. The results demonstrate that the Pinatubo eruption ejected ~78% of fine ash at 12 to 23 km and ~64% of SO2 at 17 to 23 km, and that most of the ash and SO2 mass resided in the 15- to 22-km layer during the first two days after the eruption. The 6 Mt of tropospheric SO2 oxidized into sulfate aerosol within a week. This outcome might help to explain the discrepancies between observations and model simulations recently discussed in the literature. The long-term evolution of the Pinatubo aerosol optical depth simulated using the obtained ash and SO2 initial distributions converges with the available Stratospheric Aerosol and Gas Experiment (SAGE) observations one month after the eruption, when the tropospheric aerosol cloud had dissipated.
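As a hedged illustration of the inverse-modeling step (a generic sketch, not the authors' code): emission estimation is often posed as a regularized least-squares problem, where a source-receptor matrix built from forward-model runs with unit emissions per (height, time) bin maps an emission vector to observations. All names, dimensions, and values below are hypothetical.

```python
# Sketch: source-term inversion as regularized nonnegative least squares.
# Columns of G hold modeled observations for unit emissions at each
# (height, time) bin; y holds the actual observations.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_obs, n_src = 200, 40            # observation points; (height, time) emission bins
G = rng.random((n_obs, n_src))    # source-receptor matrix from forward model runs
x_true = np.maximum(rng.normal(1.0, 1.0, n_src), 0.0)
y = G @ x_true + rng.normal(0.0, 0.05, n_obs)    # synthetic observations

# Tikhonov regularization: append lam * I to damp poorly constrained bins,
# then enforce nonnegativity of emissions with NNLS.
lam = 0.1
G_aug = np.vstack([G, lam * np.eye(n_src)])
y_aug = np.concatenate([y, np.zeros(n_src)])
x_est, _ = nnls(G_aug, y_aug)
print("recovered emissions per bin:", np.round(x_est, 2))
```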
-
A Preliminary Green Function Database for Global 3-D Centroid Moment Tensor Inversions(Copernicus GmbH, 2023-02-26) [Presentation]Currently, the accuracy of synthetic seismograms used for Global CMT inversion, which are based on modern 3D Earth models, is limited by the validity of the path-average approximation for mode summation and surface-wave ray theory. Inaccurate computation of the ground motion’s amplitude and polarization, as well as other effects that are not modeled, may bias inverted earthquake parameters. Synthetic seismograms of higher accuracy will improve the determination of seismic sources in the CMT analysis and remove concerns about this source of uncertainty. Strain tensors, and databases thereof, have recently been implemented for the spectral-element solver SPECFEM3D (Ding et al., 2020), based on the theory of previous work (Zhao et al., 2006), for regional inversion of seismograms for earthquake parameters. The main barriers to a global database of Green functions have been storage, I/O, and computation. However, compression techniques and careful selection of spectral elements, fast I/O data formats for high-performance computing such as ADIOS, and wave-equation solution on GPUs have dramatically decreased the costs of storage, I/O, and computation, respectively. Additionally, the global spectral-element grid matches the accuracy of a full forward calculation by virtue of Lagrange interpolation. Here, we present our first preliminary database of stored Green functions for 17 stations of the global seismic networks, to be used in future 3-D centroid moment tensor inversions. We demonstrate the fast retrieval and computation of seismograms from the database.
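For orientation, a minimal sketch of how a seismogram is retrieved from a strain Green tensor (SGT) database, following the reciprocity idea of Zhao et al. (2006): the stored strains are contracted with the moment tensor, s(t) = M_ij ε_ij(t). Array names, shapes, and the storage layout here are hypothetical, not the actual database format.

```python
# Minimal SGT-contraction sketch; in practice the strains would be read
# from the database (e.g., via ADIOS) and Lagrange-interpolated within the
# spectral element to the exact source location before contraction.
import numpy as np

rng = np.random.default_rng(0)
n_t = 1000                        # number of time samples
# Six independent strain components (xx, yy, zz, xy, xz, yz) at the source
# element, for one station/component:
sgt = rng.normal(size=(6, n_t))   # placeholder for database values

# Moment tensor in the same component order; off-diagonal terms count
# twice in the symmetric contraction s(t) = M_ij * eps_ij(t).
m = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])    # Mxx, Myy, Mzz, Mxy, Mxz, Myz
weights = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])

seismogram = (weights * m) @ sgt  # synthetic trace of length n_t for this station
```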
-
Quantifying lifetime water scarcity(Copernicus GmbH, 2023-02-25) [Presentation]Water scarcity is a growing concern in many regions worldwide, as demand for clean water increases and supply becomes increasingly uncertain under climate change. Already today, more than 4 billion people experience water scarcity at least one month per year (Mekonnen and Hoekstra, 2016). Developing socio-economic conditions and a growing population increase water demands, while climate change leads to changes in freshwater availability. Most studies quantify water scarcity in discrete time windows, with fixed population and climate change signals (e.g., 30 years or long-term averages). Recently, however, Thiery et al. (2021) proposed a novel approach in which climate change impacts are integrated over a person's lifetime. In this cohort perspective, lifetime impact values are comparable across generations and regions. Evaluating this perspective for natural hazards, they showed, for example, that a newborn will experience a sixfold increase in drought exposure compared to a 60-year-old (Thiery et al., 2021). In this study, we use this cohort perspective to study how much water scarcity a person experiences during their lifetime. Based on monthly fluctuations in water demand and availability, we estimate the total amount of water demand not met and refer to it as the 'lifetime water deficit'. To this end, we use an ensemble of four global hydrological models (MATSIRO, CWatM, LPJmL and H08), each forced by four GCMs and two RCP scenarios from the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2b). The simulations account for varying population and socio-economic conditions in the historical and future periods, following the SSP2 scenario. Combined with country-based population, cohort distribution and life expectancies, lifetime water deficits are quantified for different generations at the country level. Our findings reveal high lifetime water deficit values for regions that are already water scarce today, such as the Mediterranean, North Africa and the Middle East. In these regions, more than 70% of the lifetime water demand is not met when needed. Further comparison reveals spatial, intergenerational and scenario-dependent differences in lifetime water deficits. Overall, this study provides a new perspective on quantifying water scarcity and the respective impacts of climate change and population dynamics.
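The core quantity is simple to state: accumulate unmet monthly demand over the months of a cohort's life. A hedged sketch with hypothetical inputs (in the study these come from the ISIMIP2b model ensemble):

```python
# Sketch of the 'lifetime water deficit' computation.
import numpy as np

def lifetime_water_deficit(demand, availability):
    """Total unmet demand (same volume units as the inputs) over a lifetime.

    demand, availability: 1-D arrays of monthly values covering the months
    from a cohort's birth year to its life expectancy.
    """
    deficit = np.maximum(demand - availability, 0.0)  # unmet demand per month
    return deficit.sum()

# Example: a 70-year lifetime of synthetic monthly values.
rng = np.random.default_rng(1)
months = 70 * 12
demand = rng.uniform(80, 120, months)        # e.g., m3/month per capita
availability = rng.uniform(50, 150, months)
print(lifetime_water_deficit(demand, availability))
```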
-
Simulation of ground motions with high frequency components obtained from Fourier neural operators(Copernicus GmbH, 2023-02-25) [Presentation]Seismic hazard analysis relies on accurate knowledge of ground motions arising from potential earthquakes to assess the risk of damage to buildings and infrastructure. Ground motion simulations are necessary because recorded strong-motion data from specific combinations of earthquake magnitudes, epicentral distances, and site conditions are still limited. Physics-based simulations provide reliable ground motion estimates, but their application in practice is limited to frequencies f < 1 Hz, largely due to limited computational resources and a lack of information regarding earthquake sources and the medium. While hybrid ground-motion computations combining deterministic low-frequency components with stochastic high-frequency components are often used, their stochastic high-frequency components fail to correctly account for source and path effects and lead to inconsistent building responses. The large database of ground motion records from Japan lends itself to developing machine learning approaches to estimate high-frequency ground motions. Applying state-of-the-art machine learning techniques, such as Fourier neural operators (FNOs) and generative adversarial networks (GANs), we estimate seismograms at higher frequencies from their low-frequency counterparts. In our approach, the time and frequency properties of ground motions are estimated using two different FNO models. In the time domain, a relationship is established between normalised low-pass-filtered and broadband waveforms. The frequency-domain analysis involves learning the high-frequency spectrum from the low-frequency spectrum. Finally, the time and frequency properties are combined to produce broadband ground motions. Source, site, and path effects are naturally incorporated into the training process. We use ground motion data collected between 1996 and 2020 at 18 stations in the Ibaraki prefecture of Japan to train our models and validate them on different events (Mw 4 to 7) around Japan. Using goodness-of-fit (GOF) measures, we show that the resulting ground motions match the recorded observations with good to acceptable GOF values for most of the predictions. To enhance our predictions, we include uncertainty estimation by utilizing a conditioned GAN approach. Lastly, to demonstrate the practicality of the approach, we compute high-frequency components for a physics-based simulation of a hypothetical Mw 5.0 earthquake in Japan.
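For readers unfamiliar with FNOs, below is a minimal sketch of their core building block, a 1-D spectral convolution layer (FFT, learned mixing of the lowest Fourier modes, inverse FFT). This is the generic layer of Li et al.'s FNO, not the authors' trained architecture; channel counts, mode counts, and trace lengths are hypothetical.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer: FFT -> learned mixing of low modes -> inverse FFT."""
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):                     # x: (batch, in_ch, n_samples)
        x_ft = torch.fft.rfft(x)              # (batch, in_ch, n//2+1), complex
        out_ft = torch.zeros(x.size(0), self.weight.size(1), x_ft.size(-1),
                             dtype=torch.cfloat, device=x.device)
        # Keep only the lowest `modes` frequencies and mix channels there.
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))

# e.g., mapping a normalized low-pass-filtered waveform toward a broadband one
# (a full FNO stacks several such layers with pointwise linear layers):
layer = SpectralConv1d(in_ch=1, out_ch=1, modes=64)
lowband = torch.randn(8, 1, 4096)             # batch of low-frequency traces
broadband_guess = layer(lowband)
```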
-
Global biogeography of the glacier-fed stream microbiome(Copernicus GmbH, 2023-02-22) [Presentation]Glacier-fed streams (GFSs) serve as headwaters to many of the world’s largest river networks. Although characterized by extreme environmental conditions (e.g., low water temperatures, oligotrophy), GFSs host an underappreciated microbial biodiversity, especially within benthic biofilms, which play pivotal roles in downstream biogeochemical cycles. Yet, we still lack a global overview of the GFS biofilm microbiome. In addition, little is known about how environmental conditions shape bacterial diversity and how these relationships drive global distribution patterns. This is particularly important as mountain glaciers are currently vanishing at a rapid pace due to global warming. Here, we used 16S rRNA gene sequencing data from the Vanishing Glaciers project to conduct a first comprehensive analysis of the benthic microbiome from 148 GFSs across 11 mountain ranges. Our analyses revealed marked biogeographic patterns in the GFS microbiome, mainly driven by the replacement of phylogenetically closely related taxa. Strikingly, the GFS microbiome was characterized by a pronounced level of endemism, with >58% of the amplicon sequence variants (ASVs) being specific to one mountain range. Consistent with the marked dissimilarities across mountain ranges, we found a very small taxonomic core comprising only 200 ASVs, which nevertheless account for >25% of the total relative abundance of the ASVs. Finally, we found that spatial effects such as dispersal limitation, isolation and spatially autocorrelated environmental conditions overwhelmed the effect of the environment by itself on benthic biofilm beta diversity. Our findings shed light on the previously unresolved global diversity and biogeography of the GFS microbiome, now at risk across the world’s major mountain ranges because of rapidly shrinking glaciers.
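To make the endemism and core statistics concrete, here is an illustrative calculation on a hypothetical presence/absence table (the study's actual pipeline and data layout are not shown in the abstract):

```python
# Fraction of ASVs endemic to a single mountain range, and the 'core'
# of ASVs present in every range. Toy data for illustration only.
import pandas as pd

# presence: rows = ASVs, columns = mountain ranges, values = present (bool)
presence = pd.DataFrame({
    "Alps":     [True,  True,  False, True],
    "Caucasus": [False, True,  False, True],
    "Himalaya": [False, False, True,  True],
}, index=["asv1", "asv2", "asv3", "asv4"])

ranges_per_asv = presence.sum(axis=1)
endemic_frac = (ranges_per_asv == 1).mean()                  # in one range only
core = presence.index[ranges_per_asv == presence.shape[1]]   # found everywhere
print(f"endemic: {endemic_frac:.0%}, core ASVs: {list(core)}")
```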
-
The potential application of statistical post-processing techniques in landslide early warning systems(Copernicus GmbH, 2023-02-22) [Presentation]With the projected increase in the frequency and intensity of heavy precipitation, rainfall-triggered landslides (RTL) may become one of the major threats to human life and property. Early warning systems for natural hazards are among the most effective measures for reducing disaster losses and risks. However, forecasting RTL in near-real-time (NRT) is extremely difficult because the quality of NRT precipitation data is relatively poor. Quantile regression forest (QRF), a state-of-the-art statistical post-processing method, has been shown to reduce the discrepancy between NRT satellite precipitation estimates and ground-based rainfall data. When the corrected rainfall maps are placed side by side with the raw NRT satellite product, the pattern of the former matches much more closely the locations where landslide events have been mapped in a test site in north-eastern Turkey. This leaves an optimistic perspective on the application of statistical post-processing techniques in weather science and, more generally, in natural hazard assessment. Ideally, by correcting the continuous information in space and time provided by satellite rainfall estimates, one could create a new operational tool for landslide early warning systems that is not bound to the financial and deployment requirements typical of rain gauges and terrestrial radar stations.
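A hedged sketch of the bias-correction idea: below, per-tree predictions of a random forest are pooled to get conditional quantiles of gauge rainfall given satellite predictors. This is a simple stand-in for a full quantile regression forest (Meinshausen, 2006), which dedicated packages implement exactly; predictors and data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 2000
X = rng.uniform(0, 50, (n, 3))               # e.g., NRT satellite rain, elevation, season
y = 0.8 * X[:, 0] + rng.gamma(2.0, 2.0, n)   # gauge rainfall (synthetic)

forest = RandomForestRegressor(n_estimators=300, min_samples_leaf=20,
                               random_state=0).fit(X, y)

X_new = rng.uniform(0, 50, (5, 3))
per_tree = np.stack([t.predict(X_new) for t in forest.estimators_])  # (trees, points)
q10, q50, q90 = np.quantile(per_tree, [0.1, 0.5, 0.9], axis=0)
print(np.round(q50, 1))                      # corrected rainfall, with q10-q90 spread
```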
-
The effects of coarse dust in the models and observations in the dust source regions(Copernicus GmbH, 2023-02-22) [Presentation]In dust source regions, such as the Middle East, dust is a major environmental factor affecting climate, air quality, and human health. Dust also hampers solar energy harvesting by weakening the downward solar flux and depositing on the optically active surfaces of solar energy devices. In this study, we combine fine-resolution WRF-Chem simulations with size-segregated measurements of dust deposition to quantify the contribution of coarse (2.5 um < r < 10 um) and giant (10 um < r < 100 um) dust particles to aerosol radiative forcing and deposition rates. Most up-to-date models do not represent particles with r > 10 um. The absence of large particles in the models does not significantly affect the radiative fluxes, as their contribution to AOD is relatively small, but they comprise most of the deposited dust mass. We found that dust deposition rates calculated in WRF-Chem and reanalysis products are 2-3 times smaller than observed. However, the deposition rate of particulate matter with a diameter smaller than 10 um (PM10) is in good agreement between the models and observations. In the Middle East, fine dust particles are predominantly responsible for the significant reduction (>5%) of the downward solar flux hampering solar energy production. Still, the deposited dust mass, primarily associated with coarse particles, causes about a 2% daily loss of PV panel efficiency due to soiling. As suggested previously, WRF-Chem, like many other models, tends to overestimate the atmospheric concentration of fine (r < 2.5 um) dust particles and underestimate the concentration of coarse particles. As seen from the comparison of the size distributions of deposited dust in simulations and observations, the latter discrepancy is caused not so much by overly fast deposition of large particles as by underestimation of their emission in the models.
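A back-of-the-envelope sketch (not the WRF-Chem treatment) of why giant particles matter for deposition but barely for AOD: per size bin, optical cross-section scales roughly as r^2 while Stokes settling speed also scales as r^2, so the deposition flux (mass times settling speed) is heavily weighted toward the largest bins. The number concentrations below are hypothetical.

```python
import numpy as np

r = np.array([1.0, 5.0, 30.0]) * 1e-6        # bin radii: fine, coarse, giant (m)
n = np.array([1e8, 1e5, 1e2])                # number per m^3 (toy distribution)
rho, g, mu = 2650.0, 9.81, 1.8e-5            # dust density, gravity, air viscosity

mass = n * rho * (4.0 / 3.0) * np.pi * r**3          # kg per m^3 of air
optics = n * 2.0 * np.pi * r**2                      # extinction, assuming Q_ext ~ 2
v_settle = 2.0 * rho * g * r**2 / (9.0 * mu)         # Stokes settling speed, m/s
flux = mass * v_settle                               # deposition flux per bin

for name, x in [("AOD share", optics), ("deposition share", flux)]:
    print(name, np.round(x / x.sum(), 2))            # fine dominates AOD, giant dominates flux
```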
-
Sensitivity of African Easterly Waves to Dust Direct Radiative Forcing(Copernicus GmbH, 2023-02-22) [Presentation]African Easterly Waves (AEWs) are the most important precipitation-producing dynamic systems in tropical Africa and the tropical Atlantic, where dust in the atmosphere is abundant. However, past studies lack consensus on the sign and magnitude of the dust radiative forcing impact on AEWs, primarily because of disagreement in calculating dust solar radiation absorption. The inability of coarse-resolution models to represent various dust-AEW interactions is another source of uncertainty. The present study uses a high-resolution atmospheric general circulation model, HiRAM, to investigate the sensitivity of AEWs to dust direct radiative forcing when dust shortwave absorption varies within observed limits. Global simulations are conducted with 25-km grid spacing to adequately simulate AEWs and the associated circulation features. Four 10-year experiments are conducted: one control experiment without dust and three with dust, assuming dust is an inefficient, standard, or very efficient shortwave absorber. The results show that AEWs are highly sensitive to dust shortwave absorption. As dust shortwave absorption increases, AEW activity intensifies and the wave track broadens and shifts southward. The 6-9-day waves are more sensitive to dust shortwave absorption than the 3-5-day waves, and the response of the former has a stark land-sea contrast. The sensitivity of AEWs to dust solar radiation absorption arises from a combination of energy conversion mechanisms. Although baroclinic energy conversion dominates the energy cycle, the responses to dust shortwave heating in the barotropic and generation terms are comparable to that in the baroclinic conversion.
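As a hedged aside on the 3-5-day versus 6-9-day distinction: such wave bands are commonly isolated by band-pass filtering a meridional-wind time series (a standard diagnostic, not necessarily the authors' exact method). Synthetic data below.

```python
import numpy as np
from scipy.signal import butter, filtfilt

dt_per_day = 4                                   # e.g., 6-hourly model output
t = np.arange(0, 120 * dt_per_day)               # 120 days of samples
v = (np.sin(2 * np.pi * t / (4.0 * dt_per_day))          # a 4-day wave
     + 0.5 * np.sin(2 * np.pi * t / (7.5 * dt_per_day))  # a 7.5-day wave
     + 0.3 * np.random.default_rng(6).normal(size=t.size))

def bandpass(x, low_days, high_days, fs=dt_per_day):
    nyq = 0.5 * fs                               # Nyquist, in cycles per day
    b, a = butter(3, [1.0 / high_days / nyq, 1.0 / low_days / nyq], btype="band")
    return filtfilt(b, a, x)

v35 = bandpass(v, 3.0, 5.0)                      # 3-5-day AEW band
v69 = bandpass(v, 6.0, 9.0)                      # 6-9-day AEW band
print(np.var(v35), np.var(v69))                  # band-limited wave activity
```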
-
Insights into the drivers and spatio-temporal trends of extreme wildfires with statistical deep-learning(Copernicus GmbH, 2023-02-22) [Presentation]Extreme wildfires continue to be a significant cause of human death and biodiversity destruction across the globe, with recent worrying trends in their activity (i.e., occurrence and spread) suggesting that wildfires are likely to be highly impacted by climate change. In order to facilitate appropriate risk mitigation for extreme wildfires, it is imperative to identify their main drivers and assess their spatio-temporal trends, with a view to understanding the impacts of global warming on fire activity. To this end, we analyse monthly burnt area due to wildfires using a hybrid statistical deep-learning framework that exploits extreme value theory and quantile regression. Three study regions are considered: the contiguous U.S., Mediterranean Europe and Australia.
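A hedged sketch of the extreme-value ingredient named above: fitting a generalized Pareto distribution (GPD) to burnt-area exceedances over a high threshold and deriving a return level. The study embeds such tail models in a deep-learning framework; this standalone fit only illustrates the statistical core, on synthetic data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
burnt_area = rng.gamma(0.5, 100.0, 5000)        # synthetic monthly burnt areas

u = np.quantile(burnt_area, 0.95)               # high threshold
excess = burnt_area[burnt_area > u] - u
shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)

# 100-month return level: the value exceeded on average once per 100 months.
p_exceed_u = (burnt_area > u).mean()
return_level = u + stats.genpareto.ppf(1 - 1 / (100 * p_exceed_u),
                                       shape, loc=0.0, scale=scale)
print(f"xi={shape:.2f}, sigma={scale:.1f}, 100-month level={return_level:.0f}")
```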
-
The importance of station corrections for local earthquake magnitudes: the example from the seismicity in NEOM (Gulf of Aqaba and northern Red Sea)(Copernicus GmbH, 2023-02-22) [Presentation]The NEOM multi-billion-dollar project on the eastern coast of the Gulf of Aqaba will bring underground infrastructure, new cities, and tourist destinations. This project will dramatically increase the seismic risk associated with active faults in the Gulf of Aqaba and the northern Red Sea. The Gulf of Aqaba, located between northern Saudi Arabia and the Sinai peninsula and formed by transtension at the southern termination of the Dead Sea Transform, is a 180-km-long fault system that can generate earthquakes of magnitude at least 7.3 (as occurred in 1995). South of the gulf, the fault system connects to the Red Sea rift, where an earthquake of magnitude larger than 5 occurred in 2020. To investigate the regional tectonics and better understand the associated seismic hazard, we have operated a temporary network of 12 broadband seismic stations in the area since 2019. In this contribution, we present a new local magnitude scale calibrated using more than 10,000 half-peak-to-peak amplitudes, automatically measured and Wood-Anderson-corrected, from earthquakes recorded by our network from May 2019 to February 2021. We used the amplitudes from the two horizontal components of each station to constrain the constants of the distance-dependent correction term of the local magnitude formula (n, related to the geometrical spreading, and k, related to the attenuation), the magnitudes, and the station corrections. We used a least-squares regression scheme in two steps to ensure the convergence of the solution and the independence of the results from the initial values. In the first step, we inverted only for n, k, and magnitudes. In the second step, we also inverted for station corrections, using the magnitudes obtained in the first step as initial values. In contrast to most previous studies, we did not introduce any constraints on the station corrections. We ran several regressions in a grid-search approach to tackle the trade-off between n and k and find the best solution. We found that the estimated station corrections, because of the lack of constraints on them, are strongly correlated with rock properties and topographic attributes. We also compared the frequency-magnitude distributions obtained with our best solution including the station corrections (case A), the Hutton and Boore (1987) formula (case B), and the Hutton and Boore formula with our station corrections (case C). We found that magnitudes for A are lower than for B and C. However, differences in statistical parameters, such as b-values, between A and C are negligible. Our work provides NEOM with a reliable and locally calibrated earthquake magnitude scale. This new magnitude scale can also be applied in surrounding regions with similar geological features (e.g., Egypt, Jordan, and Israel). Moreover, this work highlights that estimates of station corrections are critical and at least as important as a locally calibrated magnitude scale.
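To make the inversion concrete, here is a toy single-step calibration of the Hutton-and-Boore-style scale ML = log10(A) + n log10(r/100) + k(r - 100) + 3.0 + S, solving jointly for event magnitudes, n, k, and station corrections S by linear least squares on synthetic data. This is a didactic sketch; the study uses a two-step grid-search scheme instead of the single solve shown here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ev, n_st = 50, 12
r = rng.uniform(20, 300, (n_ev, n_st))                  # hypocentral distances, km
M_true = rng.uniform(1.0, 5.0, n_ev)
S_true = rng.normal(0.0, 0.2, n_st)
n_true, k_true = 1.1, 0.0019
logA = (M_true[:, None] - n_true * np.log10(r / 100.0)
        - k_true * (r - 100.0) - 3.0 - S_true[None, :]
        + rng.normal(0.0, 0.05, (n_ev, n_st)))          # observed log amplitudes

# Unknown vector: [M_1..M_nev, n, k, S_1..S_nst]
G = np.zeros((n_ev * n_st, n_ev + 2 + n_st))
d = np.zeros(n_ev * n_st)
for i in range(n_ev):
    for j in range(n_st):
        row = i * n_st + j
        G[row, i] = 1.0                                 # event magnitude M_i
        G[row, n_ev] = -np.log10(r[i, j] / 100.0)       # geometrical spreading n
        G[row, n_ev + 1] = -(r[i, j] - 100.0)           # anelastic attenuation k
        G[row, n_ev + 2 + j] = -1.0                     # station correction S_j
        d[row] = logA[i, j] + 3.0

# A constant can trade between all M_i and all S_j (hence the value of extra
# constraints or two-step schemes); lstsq returns the minimum-norm solution,
# and the recovered n, k are unaffected by that null space.
m, *_ = np.linalg.lstsq(G, d, rcond=None)
print("n ~", round(m[n_ev], 2), " k ~", round(m[n_ev + 1], 5))
```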
-
Space-time modelling of co-seismic and post-seismic landslide hazard via Ensemble Neural Networks.(Copernicus GmbH, 2023-02-22) [Presentation]Until now, a full numerical description of the spatio-temporal dynamics of a landslide could be achieved only via physics-based models. The part of the geoscientific community developing data-driven models has instead focused on predicting where landslides may occur via susceptibility models, and on estimating when landslides may occur via models that belong to the early-warning-system or rainfall-threshold themes. In this context, few published studies have explored a joint spatio-temporal model structure. Furthermore, the third element completing the hazard definition, i.e., the landslide size (area or volume), has hardly ever been modeled over space and time. However, technological advancements in data-driven models have reached a level of maturity that allows all three components (where, when, and size) to be modeled. This work takes this direction and proposes, for the first time, a solution to the assessment of landslide hazard in a given area by jointly modeling landslide occurrences and their associated areal density per mapping unit, in space and time. To achieve this, we used a spatio-temporal landslide database generated for the Nepalese region affected by the Gorkha earthquake. The model relies on a deep-learning architecture trained using an Ensemble Neural Network, where the landslide occurrences and densities are aggregated over a square mapping unit of 1x1 km and classified/regressed against a nested 30-m lattice. At the nested level, we have expressed predisposing and triggering factors. As for the temporal units, we have used an approximately 6-month resolution. The results are promising, as our model performs satisfactorily in both the susceptibility (AUC = 0.93) and density prediction (Pearson r = 0.93) tasks. This model departs significantly from the common susceptibility literature, proposing an integrated framework for hazard modeling in a data-driven context. To promote reproducibility and repeatability of the analyses in this work, we share data and codes in a GitHub repository accessible from this link: https://github.com/ashokdahal/LandslideHazard.
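A hedged sketch of the joint "where + how much" structure: one shared trunk with a classification head (occurrence) and a regression head (areal density), trained with a combined loss. The actual study uses an ensemble of such networks on a nested 1-km / 30-m lattice (see their repository above); feature dimensions and data here are hypothetical.

```python
import torch
import torch.nn as nn

class HazardNet(nn.Module):
    def __init__(self, n_features=16):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                   nn.Linear(64, 64), nn.ReLU())
        self.occ_head = nn.Linear(64, 1)      # logit of landslide occurrence
        self.den_head = nn.Linear(64, 1)      # areal density per mapping unit

    def forward(self, x):
        h = self.trunk(x)
        return self.occ_head(h).squeeze(-1), self.den_head(h).squeeze(-1)

model = HazardNet()
x = torch.randn(32, 16)                        # predisposing + triggering factors
occ = torch.randint(0, 2, (32,)).float()       # observed occurrence labels
den = occ * torch.rand(32)                     # density is zero where no landslide

logit, density = model(x)
loss = (nn.functional.binary_cross_entropy_with_logits(logit, occ)
        + nn.functional.mse_loss(density, den))   # joint objective
loss.backward()
```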
-
Advancing Stein Variational Gradient Descent for geophysical uncertainty estimation(Copernicus GmbH, 2023-02-22) [Presentation]Stein Variational Gradient Descent (SVGD) is a powerful uncertainty quantification algorithm introduced by the statistics community that has recently found applications in geophysical inverse problems. It is a non-parametric, iterative method for making probabilistic predictions by evolving a set of particles, which are updated using gradient information of the Kullback-Leibler divergence between the target posterior distribution and a user-defined proposal distribution. One of the main benefits of SVGD is its ability to handle complex, high-dimensional distributions. This is particularly useful in geophysics, where datasets can be large and the subsurface model of interest is high-dimensional. However, the computational cost of SVGD increases with the number of particles used to characterize the posterior. Its performance also depends on the choice of the prior distribution, as this influences SVGD's ability to provide geophysical and geological realism in the posterior samples. There have been efforts to improve the efficiency of SVGD for large datasets, such as by introducing mini-batch techniques and deep-learning-based priors aimed at producing high-resolution posterior samples. Following this direction, we advance the frontier of the SVGD algorithm by coupling it with the Plug-and-Play (PnP) framework, which allows sampling from a regularized target posterior distribution, where the target posterior is regularized by a convolutional neural network (CNN) based denoiser. We showcase its ability to produce high-resolution, geologically trustworthy samples representative of subsurface structures on a variety of geophysical problems for reservoir characterization and velocity model building (e.g., full waveform inversion).
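For reference, a minimal numpy sketch of the generic SVGD update of Liu and Wang (2016), with an RBF kernel and the median-heuristic bandwidth, sampling a 2-D Gaussian stand-in posterior. This is not the authors' Plug-and-Play variant; there, the score term would additionally include a CNN-denoiser-based regularizer.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def svgd_step(x, grad_logp, eps=0.1):
    """One SVGD update for particles x (n, d), given score values grad_logp."""
    d2 = squareform(pdist(x)) ** 2
    h = np.median(d2) / np.log(x.shape[0] + 1.0)    # median-heuristic bandwidth
    k = np.exp(-d2 / h)                             # RBF kernel matrix (n, n)
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i), keeping particles spread out.
    grad_k = (2.0 / h) * (x * k.sum(axis=1, keepdims=True) - k @ x)
    return x + eps * (k @ grad_logp + grad_k) / x.shape[0]

# Target: Gaussian with mean mu and covariance C (stand-in for a posterior).
mu = np.array([1.0, -1.0])
C_inv = np.linalg.inv(np.array([[1.0, 0.6], [0.6, 1.0]]))
score = lambda x: (mu - x) @ C_inv                  # grad log p, per particle

rng = np.random.default_rng(5)
particles = rng.normal(0.0, 3.0, (200, 2))
for _ in range(500):
    particles = svgd_step(particles, score(particles))
print(particles.mean(axis=0))                       # approaches mu at convergence
```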
-
Nanoscale observation of electro-synaptic response(Proceedings of the Neuromorphic Materials, Devices, Circuits and Systems, FUNDACIO DE LA COMUNITAT VALENCIANA SCITO, 2023-01-09) [Presentation]Resistive switching technologies for information storage and neuromorphic computation require a high integration density. Hence, studying ultra-small devices, a few nanometres in length, is important to extract accurate conclusions. However, patterning such small devices is often challenging, and one good option is to study the electrical properties of the materials at the nanoscale using conductive atomic force microscopy (CAFM). In this seminar, I will explain multiple types of experiments that are useful for characterizing the electronic properties of different materials and devices at the nanoscale using CAFM, describing the different setups that we have developed. I will describe some of the properties I have analysed in metal oxides, graphene, molybdenum disulphide, hexagonal boron nitride, and nanoparticles.
-
Unlocking the desalination processes future roadmap(Desalination and Water Treatment, Desalination Publications, 2023-01) [Presentation]The energy-water-environment nexus is central to attaining the COP21 goal of keeping the global temperature increase below 2°C, but unfortunately two thirds of the allowable CO2 emissions budget has already been used and the remainder will be exhausted by 2050. A number of technological developments in the power and desalination sectors have improved efficiencies, saving energy and carbon emissions, but these sectors still operate far from their thermodynamic limits. The theoretical thermodynamic limit for seawater desalination at normal conditions is about 0.78 kWh/m3, depending on the initial salt content. However, practical plants operate at several times this limit, due mainly to inherent losses incurred in removing dissolved salts. Technological advances in the hybridization of thermally driven processes have set a new benchmark for lowest energy consumption, boosting the water production trend of the desalination industry. In this paper, we present a multi-effect desalination (MED) hybridization with a pressure swing adsorption (PSAD) cycle to overcome lower brine temperature limitations and boost the overall performance of the system. The synergetic effect of the MED-PSAD hybridization permits a wider overall operational range and larger inter-stage temperature differences, leading to a boost in water production of up to 2-3 fold. We show that the proposed hybrid cycle can achieve the highest performance, SUPR = 20% of the thermodynamic limit: one of the highest performances reported in the literature to date. These figures translate to less than US$ 0.47/m3, the lowest specific cost yet reported in the literature. The proposed cycle has not only been tested at pilot scale but has also been successfully commercialized and has received many international awards as one of the most efficient and sustainable desalination technologies.
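As a hedged aside on where a figure near 0.78 kWh/m3 comes from (textbook estimates, not numbers from this paper): for an ideal dilute solution, the least work of separation per unit permeate at recovery ratio r is

```latex
W_{\min}(r) \;=\; \frac{\pi_0}{r}\,\ln\!\left(\frac{1}{1-r}\right),
\qquad
\lim_{r \to 0} W_{\min}(r) \;=\; \pi_0 ,
```

where \pi_0 is the feed osmotic pressure (van 't Hoff). For standard seawater (~35 g/kg at 25°C), \pi_0 is roughly 2.7 MPa, i.e., 2.7e6 J/m3, or about 0.75 kWh/m3, consistent with the quoted limit; at 50% recovery the expression gives about 1.39 \pi_0, or roughly 1.05 kWh/m3.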
-
Deep Learning Based Surrogate Model of CO2 Mineral Trapping in Heterogenous Reactive Formations(2022-12-12) [Presentation]
-
Deep-Learning-based Prediction of Geological CO2 Sequestration in Highly Heterogeneous Naturally Fractured Reservoirs(2022-12-12) [Presentation]Naturally fractured reservoirs (NFRs), such as fractured carbonate reservoirs, are ubiquitous worldwide and are potentially very good candidates for storing carbon dioxide (CO2) over long periods of time. Simulation models are a great tool to assess this potential and to understand the physics behind CO2-brine interaction in subsurface reservoirs. Simulating fluid flow in NFRs during CO2 injection is computationally expensive for multiple reasons, such as the highly fractured and heterogeneous nature of the rock, the fast propagation of the CO2 plume in the fracture network, and the high capillary contrast between matrix and fractures. This paper presents a data-driven deep learning surrogate modeling approach that can accurately and efficiently capture the temporal-spatial dynamics of CO2 saturation plumes during the injection and post-injection monitoring periods of Geological Carbon Sequestration (GCS) operations in NFRs. We built a physics-based numerical simulation model to simulate the process of CO2 injection in a naturally fractured deep saline aquifer. A standalone package was developed to couple the discrete fracture network with a fully compositional numerical simulation model. The reservoir model was then sampled using the Latin hypercube approach to account for a wide range of petrophysical, geological, reservoir, and operational parameters. The simulation model parameters were obtained from extensive geological surveys published in the literature. These samples generated a massive physics-informed database (about 900 simulations) that provides a sufficient training dataset for the deep learning surrogate models. Average absolute percentage error (AAPE) and the coefficient of determination (R2) were used as error metrics to evaluate the performance of the surrogate models. The developed workflow showed superior performance, yielding an AAPE of less than 5% and an R2 of more than 0.95 between ground truth and predictions of the state variables. The proposed deep learning framework provides an innovative approach to track the CO2 plume in a fractured carbonate reservoir and can be used as a quick assessment tool to evaluate the long-term feasibility of CO2 movement in a fractured carbonate medium.
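For completeness, a sketch of the two error metrics named above, with hypothetical arrays. AAPE is taken here as mean(|y - yhat| / |y|) * 100; definitions vary slightly across papers, so the study's exact form (e.g., handling of near-zero saturations) may differ.

```python
import numpy as np

def aape(y_true, y_pred, eps=1e-8):
    """Average absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs(y_true - y_pred) / (np.abs(y_true) + eps))

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([0.2, 0.5, 0.8, 0.3])       # e.g., simulated CO2 saturations
yhat = np.array([0.21, 0.48, 0.82, 0.31])
print(f"AAPE = {aape(y, yhat):.1f}%, R2 = {r2(y, yhat):.3f}")
```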
-
Single-cell Individual Full-length mtDNA Sequencing Enables Quantitative Haplotype-resolved Variant Analysis in Native and Genome-Edited Mitochondria(HUMAN GENE THERAPY, 2022-12-01) [Presentation]The ontogeny and dynamics of mtDNA heteroplasmy remain unclear due to limitations of current mtDNA sequencing methods. We developed individual Mitochondrial Genome sequencing (iMiGseq) of full-length mtDNA for ultra-sensitive variant detection, complete haplotyping, and unbiased evaluation of heteroplasmy levels, all at the individual mtDNA molecule level. iMiGseq detected sequential acquisition of detrimental mutations in defective mtDNA in NARP/Leigh syndrome patient-derived induced pluripotent stem cells (iPSCs). iMiGseq identified unintended heteroplasmy shifts in mitoTALEN-edited NARP/Leigh syndrome iPSCs. iMiGseq of mitochondrial base editor DdCBE-edited cells did not detect any appreciable level of unintended mutations in mtDNA. iMiGseq uncovered unappreciated levels of heteroplasmic variants in single healthy human oocytes well below the conventional NGS detection limit, of which numerous variants are deleterious and associated with late-onset mitochondrial disease and cancer. iMiGseq revealed dramatic shifts in variant frequency and clonal expansion of large structural variants during oogenesis, and stable heteroplasmy levels during human blastoid generation. It showed the first haplotype-resolved mitochondrial genomes from single human oocytes and single human blastoids. Therefore, iMiGseq could not only help elucidate the mitochondrial etiology of diseases, but also enhance the precision of mitochondrial disease diagnosis.
-
CRISPR-induced on-target large deletions can be controlled by modulating PolQ and RPA(HUMAN GENE THERAPY, 2022-12-01) [Presentation]CRISPR-Cas9, an efficient genome editing tool, has been widely used in research and holds great promise in the clinic. However, large unintended rearrangements of the genome after CRISPR-Cas9 editing occur frequently, and their potential risk cannot be ignored. In this study, we detected large deletions (LDs) induced by Cas9 in human embryonic stem cells (hESCs) and found that the microhomology-mediated end joining (MMEJ) repair pathway plays a predominant role in LD. We genetically targeted PARP1, RPA, PolQ, and Lig3, which play critical roles in MMEJ, during CRISPR-Cas9 editing. We found that knocking down PARP1 and Lig3 does not alter LD frequency, while knocking down or inhibiting PolQ dramatically reduces Cas9-induced LD frequency. Knocking down RPA increases LD frequency and, consistently, overexpression of RPAs reduces the frequency of LD. In conclusion, RPAs and PolQ play opposite roles in Cas9-induced LD and may be promising targets for reducing large rearrangement frequency during genome editing.
-
Analyzing industrial figure of merit for single-component organic solar cells(Proceedings of Asia-Pacific International Conference on Perovskite, Organic Photovoltaics and Optoelectronics, FUNDACIO DE LA COMUNITAT VALENCIANA SCITO, 2022-11-21) [Presentation]Thanks to the emerging non-fullerene acceptors (NFAs), power conversion efficiencies (PCEs) of bulk heterojunction (BHJ) organic solar cells (OSCs) continue to increase towards the 20% milestone. Nevertheless, factors important for industrial application, such as photostability and cost potential, are mostly neglected. Single-component organic solar cells (SCOSCs), employing materials with donor and acceptor moieties chemically bonded within one molecule or polymer, successfully overcome the immiscibility between donor and acceptor as well as the resultant self-aggregation under external stress [1]. To inspire broader interest, in this work the industrial figure of merit (i-FOM) of OSCs, which includes PCE, photostability, and a synthetic complexity (SC) index, is calculated and analyzed [2]. Given the notable advantages of SCOSCs over the corresponding BHJ OSCs, especially their enhanced stability and simplified film processing, we systematically compare the i-FOM values of BHJ OSCs and the corresponding SCOSCs. SCOSCs exhibit overall much higher i-FOM values than the BHJ OSCs, and the highest value reaches 0.3, which even exceeds that of the famous PM6:Y6 system, even though the PCE (8%) is only half that of PM6:Y6. With increasing efficiency, SCOSCs possess further potential for higher i-FOM values. Among all factors, the synthetic complexity of SCOSCs is slightly higher than that of the corresponding BHJ OSCs due to the extra synthetic step connecting the donor and acceptor moieties. This feature, however, overcomes the large-scale phase separation and stability issues of the corresponding BHJ systems [3]. SCOSCs based on dyad 1 exhibit surprisingly high photostability under concentrated light (7.5 suns and 30 suns), corresponding to almost unchanged device stability for up to 10,000 hours under 1-sun illumination. To realize industrial application, SCOSCs have to deliver efficiencies comparable to current high-efficiency BHJ OSCs, while BHJ systems should be developed in the direction of less complicated synthesis [4]. With the joint efforts of researchers across disciplines, SCOSCs will see continuing progress in efficiency, reaching i-FOM values sufficient for industrial application.
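A purely hypothetical sketch of how an i-FOM-style ranking can be built from the three ingredients named in the abstract. The exact definition and weighting follow ref. [2] above, which is not reproduced here; this toy multiplicative combination is only meant to show the trade-off structure (efficiency versus stability versus synthesis cost).

```python
# Toy i-FOM-style metric; NOT the definition from ref. [2].
def i_fom(pce, photostability, sc_index):
    """Hypothetical industrial figure of merit in [0, 1].

    pce            power conversion efficiency, fraction (e.g., 0.08 for 8%)
    photostability fraction of initial PCE retained after a light-soaking test
    sc_index       synthetic complexity, normalized to [0, 1] (lower = simpler)
    """
    return pce * photostability * (1.0 - sc_index)

# Example ranking of a hypothetical single-component dyad vs a BHJ blend:
print(i_fom(pce=0.08, photostability=0.95, sc_index=0.40))   # dyad-like
print(i_fom(pce=0.16, photostability=0.60, sc_index=0.55))   # BHJ-like
```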