• Omnidirectional Photonic Band Gap Using Low Refractive Index Contrast Materials and its Application in Optical Waveguides

      Vidal Faez, Angelo (2012-07)
      Researchers have argued for many years that one of the conditions for omnidirectional reflection in a one-dimensional photonic crystal is a strong refractive index contrast between the two constituent dielectric materials. Using numerical simulations and the theory of Anderson localization of light, in this work we demonstrate that an omnidirectional band gap can indeed be created using low refractive index contrast materials when they are arranged in a disordered manner. Moreover, the size of the omnidirectional band gap becomes a controllable parameter that depends on the number of layers, and not only on the refractive index contrast of the system, as is widely accepted. This achievement constitutes a major breakthrough in the field, since it allows for the development of cheaper and more efficient technologies. Of particular interest is the case of high index contrast one-dimensional photonic crystal fibers, where the propagation losses are mainly due to increased optical scattering from sidewall roughness at the interfaces of high index contrast materials. By using low index contrast materials, these losses can be reduced dramatically while maintaining the confinement capability of the waveguide. This is just one of many applications in which this discovery could prove useful.
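      For context (a standard textbook result for ordered stacks, not a claim taken from the thesis): for a quarter-wave Bragg stack with indices n_1 < n_2, the relative width of the first band gap at normal incidence is

          \frac{\Delta\omega}{\omega_0} = \frac{4}{\pi}\,\arcsin\!\left(\frac{n_2 - n_1}{n_2 + n_1}\right),

      which depends only on the index contrast and not on the number of layers; this is precisely the ordered-crystal limitation that the disordered arrangement described above removes.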
    • On Lattice Sequential Decoding for Large MIMO Systems

      Ali, Konpal S. (2014-04)
      Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding these systems with acceptable error performance is computationally very demanding. For large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs: low bias values result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. A particular trend is also observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and falling error, and finally a region of low complexity and low error. For lower bias values, the stages of this trend occur at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in both low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another notable observation for low bias values that satisfy the error bound is that the Sequential Decoder's error falls with increasing system size, while it grows for the LR-aided LDs. For large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed: 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing; 2) MMSE-GDFE preprocessing followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work that employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering, which for large systems results in high complexity and difficulty in choosing the sphere radius. Our schemes, particularly 2), perform better in terms of complexity and are able to achieve almost the same error curves, depending on the bias used.
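      To make the role of the bias concrete, here is a minimal best-first (stack-algorithm) sketch of biased lattice sequential decoding in Python. The thesis uses the memory-free Fano algorithm; this sketch, its names, and the toy setup are illustrative assumptions, not the thesis code. Each tree node at depth k is scored with the biased metric b*k - w, where w is the accumulated squared residual, so a larger bias b makes deep paths look more attractive, cutting visited nodes at some cost in error.

          import heapq
          import numpy as np

          def stack_decode(R, y, alphabet, bias):
              # R: upper-triangular factor from QR of the channel matrix.
              # y: Q^T-rotated receive vector; alphabet: candidate symbols per layer.
              # Node metric for k decided layers: bias*k - accumulated squared residual.
              n = R.shape[0]
              heap = [(0.0, ())]  # (negated metric, symbols decided for layers i..n-1)
              while heap:
                  neg_metric, path = heapq.heappop(heap)  # best node first
                  k = len(path)
                  if k == n:
                      return np.array(path)  # first full-length path popped wins
                  i = n - 1 - k              # next layer to decide (bottom-up)
                  for s in alphabet:
                      x_tail = np.array((s,) + path, dtype=float)  # layers i..n-1
                      residual = y[i] - R[i, i:] @ x_tail
                      metric = -neg_metric + bias - residual**2
                      heapq.heappush(heap, (-metric, (s,) + path))

          # Toy usage: 4x4 real system with BPSK symbols.
          rng = np.random.default_rng(0)
          H = rng.standard_normal((4, 4))
          Q, R = np.linalg.qr(H)
          x_true = rng.choice([-1.0, 1.0], size=4)
          y = Q.T @ (H @ x_true + 0.05 * rng.standard_normal(4))
          print(stack_decode(R, y, (-1.0, 1.0), bias=0.5), x_true)

      With bias near zero the search degenerates toward an exhaustive best-first search (low error, high complexity); a large bias makes it nearly depth-first and greedy (low complexity, higher error), which is the trade-off the abstract describes.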
    • On the Capacity of Underlay Cognitive Radio Systems

      Sboui, Lokman (2013-05-05)
      Due to the scarcity of frequency spectrum in view of the evolution of wireless communication technologies, the cognitive radio (CR) concept has been introduced to efficiently exploit the available spectrum. This concept consists of introducing unlicensed/secondary users (SUs) into existing networks to share the spectrum of licensed/primary users (PUs) without harming the primary communications, hence the name "spectrum sharing". In this dissertation, we study the capacity and the achievable rate of the secondary user in various communication settings. We first investigate the capacity of the SU in the low power regime for Nakagami fading channels and present closed-form expressions for the capacity under various types of interference and/or power constraints. We explicitly characterize two regimes where either the interference constraint or the power constraint dictates the optimal power profile. Our framework also highlights the effects of different fading parameters on the secondary link ergodic capacity. Interestingly, we show that the low power regime analysis provides specific insight into the capacity behavior of CR that has not been reported in previous studies. Next, we determine the spectral efficiency gain of an uplink CR Multi-Input Multi-Output (MIMO) system in which the SU is allowed to share the spectrum with the PU, using a specific precoding scheme to communicate with a common receiver. For Rayleigh fading channels, we show through numerical results that our proposed scheme considerably enhances the cognitive achievable rate. For instance, in the case of perfect detection of the PU signal after applying Successive Interference Cancellation (SIC), the CR rate remains non-zero at high Signal to Noise Ratio (SNR), which is usually impossible when only the space alignment technique is used. In addition, we show that the rate gain is proportional to the allowed interference threshold, the rate saturating to a fixed value even in the high SNR range. Finally, we study the impact of the broadcast approach (BA) and multi-layer coding on the throughput of CR systems for general fading channels. We show that, in the absence of channel state information (CSI), this improvement can be almost entirely achieved with 2-layer coding. Then, we introduce a quantized CSI policy and highlight its throughput improvement before studying the rate when the BA with quantized CSI is adopted. Numerical results show that the improvement from additional layers decreases as the number of quantization regions increases.
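      As background, a standard formulation of the underlay ergodic capacity studied here (notation mine, not necessarily the dissertation's): with secondary link gain h, SU-to-PU gain g, noise power N_0, average power budget P_avg and average interference threshold Q_avg,

          C = \max_{P(h,g) \ge 0} \; \mathbb{E}\!\left[\log_2\!\left(1 + \frac{h\,P(h,g)}{N_0}\right)\right]
          \quad \text{s.t.} \quad \mathbb{E}[P(h,g)] \le P_{\mathrm{avg}}, \qquad \mathbb{E}[g\,P(h,g)] \le Q_{\mathrm{avg}},

      and the two regimes mentioned above correspond to which of the two constraints is active at the optimum.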
    • On the Impact of User Distribution on Cooperative Spectrum Sensing and Data Transmission with Multiuser Diversity

      Rao, Anlei (2011-07)
      In this thesis, we investigate independent but not identically distributed (i.n.i.d.) scenarios for spectrum sensing and data transmission. In particular, we derive the false-alarm probability and the detection probability of cooperative spectrum sensing with energy fusion over i.n.i.d. Nakagami fading channels. Then, the performance of adaptive modulation with single-cell multiuser scheduling over i.n.i.d. Nakagami fading channels is analyzed. Closed-form expressions are derived for the average channel capacity, spectral efficiency, and bit-error rate (BER) for both constant-power variable-rate and variable-power variable-rate uncoded M-ary quadrature amplitude modulation (M-QAM) schemes. In addition, we study the impact of time delay on the average BER of adaptive M-QAM. From the selected numerical results, we can see that cooperative spectrum sensing and multiuser diversity bring considerably better performance even over i.n.i.d. fading environments.
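      For reference, the standard per-sensor energy-detection expressions on which such an analysis builds (conditioned on the instantaneous SNR \gamma; the thesis averages these over i.n.i.d. Nakagami fading and fuses the sensor decisions): with time-bandwidth product u and detection threshold \lambda,

          P_f = \frac{\Gamma(u, \lambda/2)}{\Gamma(u)}, \qquad
          P_d(\gamma) = Q_u\!\left(\sqrt{2\gamma}, \sqrt{\lambda}\right),

      where \Gamma(\cdot,\cdot) is the upper incomplete gamma function and Q_u is the generalized Marcum Q-function.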
    • On the MSE Performance and Optimization of Regularized Problems

      Alrashdi, Ayed (2016-11)
      The amount of data measured, transmitted/received, and stored has dramatically increased in recent years; today, we live in the world of big data. Fortunately, in many applications we can take advantage of structures and patterns in the data to overcome the curse of dimensionality. The best-known structures include sparsity, low-rankness, and block sparsity, and they arise in a wide range of applications such as machine learning, medical imaging, signal processing, social networks and computer vision. This has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function, which gives rise to regularized inverse problems, where the process of reconstructing the structured signal is modeled as a regularized problem. This thesis focuses on finding the optimal regularization parameter for such problems, including ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has recently been used to precisely predict performance errors.
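      Concretely (notation mine), several of the estimators in question take the form

          \hat{x}(\lambda) = \arg\min_{x} \; \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda f(x),

      with, e.g., f(x) = \|x\|_1 for the LASSO and f(x) = \|x\|_2^2 for ridge regression. The tuning problem is then to pick \lambda^\star = \arg\min_\lambda \mathbb{E}\,\|\hat{x}(\lambda) - x_0\|_2^2, where x_0 is the true signal; the CGMT supplies a precise asymptotic expression for this MSE as a function of \lambda, so the minimizer can be found without knowing the noise variance exactly.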
    • On the Synthesis and Optical Characterization of Zero-Dimensional-Networked Perovskites

      Almutlaq, Jawaher (2017-04-26)
      The three-dimensional perovskites are known for their wide range of interesting properties, including spectral tunability, charge carrier mobility, solution-based synthesis and many others. Such properties make them good candidates for photovoltaics and photodetectors. Low-dimensional perovskites, on the other hand, are good light emitters due to the quantum confinement originating from their nanoparticle size. Another class of low-dimensional perovskites, also called low-dimensional-networked perovskites (L-DN), is currently reemerging. These interesting materials combine the advantages of nanocrystals with the stability of the bulk. For example, the zero-dimensional-networked perovskite (0-DN), a special class of perovskites and the focus of this work, consists of building blocks of isolated lead-halide octahedra that can be synthesized into mm-size single crystals without losing their confinement. This thesis focuses on the synthesis and investigation of the optical properties of the 0-DN perovskites through experimental, theoretical and computational tools. The recent discovery of the retrograde solubility of the perovskite family (ABX3), the basis of inverse temperature crystallization (ITC), inspired the reinvestigation of the low-dimensional-networked perovskites. The optical characterization showed that the absorption and the corresponding PL spectra were successfully tuned to cover the visible spectrum, from 410 nm for Cs4PbCl6, to 520 nm and 700 nm for Cs4PbBr6 and Cs4PbI6, respectively. Interestingly, the exciton binding energies (Eb) of the 0-DNs were found to be on the order of a few hundred meV, at least five times larger than those of their three-dimensional counterparts. Such a high Eb is coupled with a lifetime of a few nanoseconds and ultimately yields a high photoluminescence quantum yield (PLQY). In fact, the PLQY of Cs4PbBr6 powder showed a record 45%, setting a new benchmark for solid-state luminescent perovskites. Computational methods were used to calculate the bandgap and study the corresponding excitonic behavior. However, the unexpected mismatch between the calculated and experimental bandgaps questions the origin of the high luminescence, which to date remains an area of scientific debate that needs further study. Until then, the high PLQY, together with the spectral tunability, insensitivity to particle size and stability, all offer a new avenue toward more sustainable light-emitting materials.
    • Ontology Design Patterns for Combining Pathology and Anatomy: Application to Study Aging and Longevity in Inbred Mouse Strains

      Alghamdi, Sarah M. (2018-05-13)
      In biomedical research, ontologies are widely used to represent knowledge as well as to annotate datasets. Many of the existing ontologies cover a single type of phenomena, such as a process, cell type, gene, pathological entity or anatomical structure. Consequently, there is a requirement to use multiple ontologies to fully characterize the observations in the datasets. Although this allows precise annotation of different aspects of a given dataset, it limits our ability to use the ontologies in data analysis, as the ontologies are usually disconnected and their combinations cannot be exploited. Motivated by this, here we present novel ontology design methods for combining pathology and anatomy concepts. To this end, we use a dataset of mouse models which has been characterized through two ontologies: one of them is the mouse pathology ontology (MPATH) covering pathological lesions while the other is the mouse anatomy ontology (MA) covering the anatomical site of the lesions. We propose four novel ontology design patterns for combining these ontologies, and use these patterns to generate four ontologies in a data-driven way. To evaluate the generated ontologies, we utilize these in ontology-based data analysis, including ontology enrichment analysis and computation of semantic similarity. We demonstrate that there are significant differences between the four ontologies in different analysis approaches. In addition, when using semantic similarity to confirm the hypothesis that genetically identical mice should develop more similar diseases, the generated combined ontologies lead to significantly better analysis results compared to using each ontology individually. Our results reveal that using ontology design patterns to combine different facets characterizing a dataset can improve established analysis methods.
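      To illustrate the general idea of such a design pattern, here is a minimal Python sketch that generates combined "pathology in anatomy" classes from MPATH and MA terms. The term IDs, labels, relation name, and output format are illustrative assumptions, not the thesis's four concrete patterns; in the data-driven setting one would iterate over the (lesion, site) pairs actually annotated in the mouse dataset rather than a full cross product.

          from itertools import product

          def combine_pattern(mpath_terms, ma_terms):
              """Generate one combined class per (pathology, anatomy) pair,
              with an OWL-style logical definition and subclass links into
              both source ontologies."""
              combined = {}
              for (p_id, p_label), (a_id, a_label) in product(mpath_terms, ma_terms):
                  new_id = f"COMBINED:{p_id.split(':')[1]}-{a_id.split(':')[1]}"
                  combined[new_id] = {
                      "label": f"{p_label} of {a_label}",
                      # Manchester-syntax-flavored equivalence axiom:
                      "definition": f"{p_id} and ('located in' some {a_id})",
                      "parents": [p_id, a_id],
                  }
              return combined

          # Hypothetical demo terms (IDs are placeholders, not verified entries):
          print(combine_pattern([("MPATH:0000001", "hyperplasia")],
                                [("MA:0000001", "liver")]))

      The choice of relation and of where the combined class is placed in the hierarchy is exactly what distinguishes one design pattern from another, which is why the four generated ontologies behave differently in enrichment and similarity analyses.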
    • Optical and Micro-Structural Characterization of MBE Grown Indium Gallium Nitride Polar Quantum Dots

      El Afandy, Rami (2011-07-07)
      Gallium nitride and related materials have ushered in scientific and technological breakthroughs for lighting, mass data storage and high-power electronic applications. These III-nitride materials have found their niche in blue light emitting diodes and blue laser diodes. Despite the current development, there are still technological problems that impede the performance of such devices. Three-dimensional nanostructures are proposed to improve the electrical and thermal properties of III-nitride optical devices. This thesis consolidates the characterization results and unveils the unique physical properties of polar indium gallium nitride quantum dots grown by the molecular beam epitaxy technique. In this thesis, a theoretical overview of the physical, structural and optical properties of polar III-nitride quantum dots is presented, with particular emphasis on the properties that distinguish truncated-pyramidal III-nitride quantum dots from other III-V semiconductor based quantum dots. The optical properties of indium gallium nitride quantum dots are mainly dominated by large polarization fields, as well as quantum confinement effects. Hence, experimental investigations of such quantum dots require bandgap calculations that take into account the internal strain fields, polarization fields and confinement effects. The experiments conducted in this investigation involved transmission electron microscopy and x-ray diffraction, as well as photoluminescence (PL) spectroscopy. The analysis of the temperature dependence and excitation power dependence of the PL spectra sheds light on the carrier dynamics within the quantum dots and their underlying wetting layer. Further analysis shows that, through three-dimensional confinement, indium gallium nitride quantum dots are able to prevent the electronic carriers from being thermalized into defects, which grants III-nitride quantum dot based light emitting diodes superior thermally induced optical properties compared to other nanostructures. Excitation power dependent PL measurements reveal an increase in the excitonic confinement and hence higher quantum efficiencies compared to lower-dimensional nanostructures. Finally, it is argued that such characteristics potentially make quantum dot based InGaN structures a strong candidate for high quantum efficiency white solid-state light emitting diodes and ultraviolet/blue laser diodes operating at room temperature.
    • Optimal Node Placement in Underwater Acoustic Sensor Network

      Felemban, Muhamad (2011-10)
      Almost 70% of planet Earth is covered by water, and a large percentage of the underwater environment is unexplored. In the past two decades, there has been increasing interest among scientists and in industry in exploring and monitoring underwater life. Underwater operations are extremely difficult due to the lack of cheap and efficient means. Recently, Wireless Sensor Networks have been introduced in underwater environment applications. However, underwater communication via acoustic waves is subject to several performance limitations, which makes the relevant research issues very different from those on land. In this thesis, we investigate node placement for building an initial Underwater Wireless Sensor Network infrastructure. First, we formulate the problem as a nonlinear mathematical program with the objective of minimizing the total transmission loss for a given number of sensor nodes and targeted volume. We conducted experiments to verify the proposed formulation, which is solved using MATLAB's optimization tools. We represent each node by a truncated octahedron to fill the 3D space: the truncated octahedra are tiled in the 3D space with a node at the center of each, and node locations are given in 3D coordinates. The results are supported by ns-3 simulations, which are consistent with those obtained from the mathematical model to within 10% error.
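      For context, the transmission loss minimized in such formulations is commonly modeled in underwater acoustics (the standard Urick/Thorp model, not necessarily the thesis's exact objective) as

          TL(d, f) = k \cdot 10 \log_{10}(d) + \alpha(f) \cdot d \cdot 10^{-3} \quad \text{[dB]},

      where d is the transmitter-receiver distance in meters, k is the spreading factor (k = 1 for cylindrical, k = 2 for spherical spreading), and \alpha(f) is the frequency-dependent absorption coefficient in dB/km (e.g., from Thorp's formula); the placement problem then minimizes the total loss over communicating node pairs subject to covering the target volume.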
    • Optimal Power Allocation of a Wireless Sensor Node under Different Rate Constraints

      Solares, Jose (2011-07)
      Wireless sensor networks consist of sensors placed over a broad area in order to acquire data. Depending on the application, different design criteria must be considered in the construction of the sensors, but among all of them, the battery life-cycle is of crucial interest. Power minimization is a problem that has been addressed from different approaches, including analyses from an architectural perspective and under bit error rate and/or discrete instantaneous transmission rate constraints, among others. In this work, the optimal transmit power of a sensor node satisfying different rate constraints is derived. First, an optimization problem with an instantaneous transmission rate constraint is addressed. Next, the optimal power is analyzed under an average transmission rate constraint. The optimal solution for a class of fading channels is presented in terms of the system parameters, and a suboptimal solution is also proposed for an easier, yet efficient, implementation. Insightful asymptotic analyses of both schemes, for a Rayleigh fading channel, are presented. Furthermore, the optimal power allocation for a sensor node in a cognitive radio environment is analyzed, where an optimal solution for a class of fading channels is again derived. In all cases, numerical results are provided for either Rayleigh or Nakagami-m fading channels. The results are extended to scenarios with either one transmitter and multiple receivers, or multiple transmitters and one receiver.
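      To illustrate the instantaneous-rate case (a standard channel-inversion argument in my notation, not necessarily the thesis's final result): to sustain an instantaneous rate of R bits/s/Hz over a channel with power gain h and noise power \sigma^2, the power must satisfy \log_2(1 + hP/\sigma^2) \ge R, so the minimum-power policy is

          P(h) = \frac{(2^R - 1)\,\sigma^2}{h}.

      For Rayleigh fading the average of 1/h diverges, so pure inversion is infinitely expensive on average; this is why an average-rate constraint, which lets the node back off in deep fades, admits a much cheaper optimal policy.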
    • Optimisation of Lagrangian Flash Flood Microsensors Dropped by Unmanned Aerial Vehicle

      Abdulaal, Mohammed (2014-05)
      Floods are the most common natural disasters, causing thousands of casualties every year in the world. Flash flood events are particularly deadly because of the short timescales on which they occur. Classical sensing solutions such as fixed wireless sensor networks or satellite imagery are either too expensive or too inaccurate. Nevertheless, Unmanned Aerial Vehicles equipped with mobile microsensors could be capable of sensing flash floods in real time for a low overall cost, saving lives and greatly improving the efficiency of the emergency response. Using flood simulation data, we show that this system could be used to detect flash floods. We also present an ongoing implementation of this system using 3D printed sensors and sensor delivery systems on a UAV testbed, as well as some preliminary results.
    • Optimization and Efficiency of DNA Extraction from Drinking Water Samples

      Felemban, Mashael (2018-05)
      Water quality evaluation is a global concern due to its effect on public health. Different procedures can be implemented to evaluate specific standards of water quality. DNA extraction, used to characterize the microbial community in water distribution systems, is an important one. To optimize the DNA extraction process, the effects of residual chlorine and water composition were tested. The results showed that dechlorination of the samples had only a limited effect, while the effect of total cell number varied according to water quality. The study also indicated a possible inhibitory effect of rust on DNA extraction from drinking water samples.
    • Optimization of an Efficient and Sustainable Sonogashira Cross-Coupling Protocol

      Walter, Philipp E. (2012-12)
      Cross-coupling reactions are a well-established tool in modern organic synthesis and play a crucial role in the synthesis of a large number of organic compounds. Their importance is highlighted by the Nobel Prize in Chemistry awarded to Suzuki, Heck and Negishi in 2010. The increasing importance of sustainability requirements in chemical production has furthermore promoted the development of cross-coupling protocols that comply with the principles of "Green Chemistry" [1]. The Sonogashira reaction is today the most versatile and powerful way to generate aryl alkynes, a moiety recurring in many pharmaceutical and natural products. Despite many improvements to the original reaction, reports on generally applicable protocols that work under sustainable conditions are scarce. Our group recently reported an efficient protocol for a copper-free Sonogashira cross-coupling at low temperature, in aqueous medium and with no addition of organic solvents or additives [2]. The goal of this work was to further investigate the effects of different reaction parameters on the catalytic activity in order to optimize the protocol. The limitations of the protocol were tested with respect to reaction temperature, heating method, atmosphere, base type and amount, catalyst loading, reaction time and work-up procedure. The reaction worked successfully under air, and the results were not affected by the presence of oxygen in the water phase. Among the variety of bases tested, triethylamine was confirmed to give the best results, and its required excess could be reduced from nine to four equivalents. Catalyst loading could also be reduced by up to 90%: good to near-quantitative yields for a broad range of substrates were achieved using a catalyst concentration of 0.25 mol% and 5 eq of Et3N at 50 °C, while more reactive substrates could be coupled with a catalyst concentration as low as 0.025 mol%. Filtration experiments showed the possibility of a simplified work-up procedure and a protocol completely free of organic solvents. This optimized protocol can be applied to a broad range of substrates, delivers high yields, avoids the formation of toxic byproducts, works under air and aqueous conditions, and allows for simple product isolation; it thus meets not only the criteria of "Green Chemistry" but also those of "Click Chemistry".
    • Optimization of Broadband Seismic Network in the Kingdom of Saudi Arabia

      Alshuhail, Abdulrahman (2011-05)
      Saudi Arabia covers a large portion of the Arabian plate, a region characterized by seismic activity along complex divergent and convergent plate boundaries. In order to understand these plate boundaries, it is essential to optimize the design of the broadband seismic station network to accurately locate earthquakes. In my study, I apply an optimization method to design the broadband station distribution in Saudi Arabia. This method is based on the so-called D-optimal planning criterion, which optimizes the station distribution for locating the hypocenters of earthquakes. Two additional adjustments were implemented: to preferentially acquire direct and refracted waves, and to account for the geometric spreading of seismic waves (and thus increase the signal-to-noise ratio). The method developed in this study for optimizing the geographical location of broadband stations uses the probability of earthquake occurrence and a 1-D velocity model of the region, and minimizes the volume of the earthquake-location error ellipsoid. The algorithm was applied to the current seismic network operated by the Saudi Geological Survey (SGS). Based on the results, I am able to make recommendations on how to expand the existing network. Furthermore, I quantify the efficiency of our method by computing the standard errors of epicenter and depth before and after adding the proposed stations.
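      For background (the standard linearized-location formulation; notation mine): if G = \partial t / \partial m is the Jacobian of the predicted arrival times t with respect to the hypocenter parameters m = (x, y, z, t_0), then under data noise \sigma^2 the covariance of the location estimate is

          C = \sigma^2 \left(G^{\mathsf{T}} G\right)^{-1},

      and the volume of the error ellipsoid scales with \sqrt{\det C}. The D-optimal criterion therefore chooses station positions to maximize \det(G^{\mathsf{T}} G), which minimizes that volume.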
    • Optimization of O3 as Pre-Treatment and Chemical Enhanced Backwashing in UF and MF Ceramic Membranes for the Treatment of Secondary Wastewater Effluent and Red Sea Water

      Herrera, Catalina (2011-12-12)
      Ceramic membranes have proven to have many advantages over polymeric membranes, among them resistance to extreme pH, higher permeate flux, less frequent chemical cleaning, excellent backwash efficiency and longer lifetime. Another main advantage is the possibility of using strong chemical agents, such as ozone (O3), to perform membrane cleaning. Ozone has proven to be a good disinfection agent, deactivating bacteria and viruses; it has a high oxidation potential and high reactivity with natural organic matter (NOM). Several studies have shown that combining ozone with MF/UF systems can minimize membrane fouling and achieve higher operational fluxes. This work focused on ozone-ceramic membrane filtration for treating wastewater effluent and seawater. The effects of ozone, as a pre-treatment or as chemical cleaning, on ceramic membrane filtration were identified in terms of permeate flux and organic fouling. Ozonation tests were done by adjusting the O3 dose in the source water and monitoring flux decline and membrane fouling. Backwashing capability and membrane recovery rate were also analyzed. Two types of MF/UF ceramic membranes (AAO and TAMI) were used in this study. The higher the ozone dosage in the source water, the better the membrane filtration performed, resulting in a reduced flux decline. For secondary wastewater effluent, the normalized flux with raw source water declined by up to 77%, while with O3 pre-treatment at the highest dose it decreased by only 33%. For seawater, membrane performance improved from a 37% decline in final normalized flux to a 21% decline when O3 pre-treatment was used. The membrane recovery rate also improved, even at low O3 doses; for example, at 8 mg/L, irreversible fouling decreased from 58% with no ozone addition to 29% for secondary wastewater effluent treatment. For seawater treatment, irreversible fouling decreased from 37% with no ozone addition to 21% at 8 mg/L, showing that ozone is a useful chemical for pre-treatment of both source waters. Finally, transparent exopolymer particles (TEP) showed a decrease in concentration on the active layer of the membrane surface after chemically enhanced backwashing (CEB) using ozone (O3).
    • Optimization of Paper Discoloration via Pyrolysis Using Lasers

      Alhashem, Mayadah M. (2017-04)
      Printing ink is a main component of the modern printer, and it has been throughout the history of printing. Ink and toners are expensive replaceable components without which inkjet and laser printers cannot function. The digital printing industry, which consists largely of monochrome printing, is expected to grow by 225% by 2024 from a 2013 baseline (Smithers et al., 2014). Expenses aside, toner and ink cartridges pose an overlooked threat to the environment: manufacturing, packaging, transporting, and disposing of printer ink and toners result in carbon dioxide emissions. The complete elimination of ink in monochrome printing is potentially viable with the patented new discoloration technique, a method that darkens paper by carbonizing its surface (Alhashem et al., 2015). The printing method optimizes surface paper pyrolysis via laser heating; the aim is to obtain the darkest possible shade without compromising paper quality. The challenge lies in creating a printed area from the paper material itself, rather than depositing ink on paper. A 75-watt CO2 laser engraving machine emitting a 10.6 μm wavelength beam is used at low power settings to carbonize a fraction of the paper surface. The carbonization is essentially a combustion reaction: solid fuel burns in three stages, drying, devolatilization (the pyrolysis, or distillation, phase), and lastly char (charcoal) combustion. These stages are driven by heat from the CO2 laser, and moving the laser rapidly above the paper surface arrests the reaction at the second stage, after the formation of blackened char. The control variables in the experimental method are laser power, speed, and the vertical position, which affects the laser intensity; computer software controls these variables. The discoloration of the paper is quantified by measuring light absorptivity using a UV-Vis-IR spectrometer.
    • Optimizing The Performance of Streaming Numerical Kernels On The IBM Blue Gene/P PowerPC 450

      Malas, Tareq Majed Yasin (2011-07)
      Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures, the sustained performance of streaming numerical kernels, ubiquitous in the solution of partial differential equations, represents a formidable challenge despite the regularity of memory access. Sophisticated optimization techniques, beyond the capabilities of modern compilers, are required to fully utilize the Central Processing Unit (CPU). The aim of the work presented here is to improve the performance of streaming numerical kernels on high performance architectures by developing efficient algorithms that utilize the vectorized floating point units. The importance of development time demands the creation of tools that enable simple yet direct development in assembly for the power-efficient cores featuring in-order execution and multiple-issue units. We implement several stencil kernels for a variety of cached memory scenarios using our Python instruction simulation and generation tool. Our technique simplifies the development of efficient assembly code for the PowerPC 450 of the IBM Blue Gene/P supercomputer, enabling us to perform high-level design, construction, verification, and simulation on a subset of the CPU's instruction set. Our framework has the capability to implement streaming numerical kernels on current and future high performance architectures. Finally, we present several automatically generated implementations, including a 27-point stencil achieving a 1.7x speedup over the best previously published results.
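      For readers unfamiliar with the kernel being tuned, below is a naive 27-point stencil sweep in Python/NumPy. This is only a reference definition of the computation (the names and coefficient layout are mine), not the vectorized PowerPC 450 assembly the thesis generates; the whole point of the thesis is to beat compiled loops like these.

          import numpy as np

          def stencil27(u, w):
              # u: 3-D input grid; w: 3x3x3 stencil coefficients.
              # Each interior point becomes a weighted sum of its 27-point neighborhood.
              out = np.zeros_like(u)
              for i in range(1, u.shape[0] - 1):
                  for j in range(1, u.shape[1] - 1):
                      for k in range(1, u.shape[2] - 1):
                          acc = 0.0
                          for di in (-1, 0, 1):
                              for dj in (-1, 0, 1):
                                  for dk in (-1, 0, 1):
                                      acc += w[di + 1, dj + 1, dk + 1] \
                                             * u[i + di, j + dj, k + dk]
                          out[i, j, k] = acc
              return out

          # Toy check on a small grid with averaging weights.
          u = np.random.rand(8, 8, 8)
          w = np.full((3, 3, 3), 1.0 / 27.0)
          print(stencil27(u, w).shape)

      An optimized implementation replaces these scalar loops with SIMD loads and fused multiply-adds, reusing data across the innermost direction in registers and cache, which is the kind of transformation the assembly-generation tool automates.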
    • Optimizing UF Cleaning in UF-SWRO System Using Red Sea Water

      Bahshwan, Mohanad (2012-07)
      Increasing demand for fresh water in arid and semi-arid areas, such as the Middle East, has pushed the use of seawater desalination techniques to augment freshwater supplies. Seawater Reverse Osmosis (SWRO) is one of the techniques commonly used due to its cost effectiveness. Recently, the use of Ultrafiltration (UF) was recommended as an effective pretreatment for SWRO membranes, as opposed to conventional methods (i.e., sand filtration). During UF operation, intermittent cleaning is required to remove particles and contaminants from the membrane's surface and pores. The different cleaning steps consume chemicals and a portion of the product water, resulting in a decrease in the overall effectiveness of the process and hence an increase in the production cost. This research focused on increasing the plant's efficiency by optimizing the cleaning protocol without jeopardizing the effectiveness of the cleaning process. For that purpose, the design of experiments (DOE) focused on testing different combinations of the cleaning steps while all other parameters (such as filtration flux or backwash flux) remained constant. The only chemical used was NaOCl, applied at the end of each experiment to restore the trans-membrane pressure (TMP) to its original state. Two trains of Dow™ Ultrafiltration SFP-2880 were run in parallel for this study: the first train (UF1) was kept at the manufacturer's recommended cleaning steps and frequencies, while the second train (UF2) was varied according to the DOE. The normalized final TMP was compared to the normalized initial TMP to measure the fouling rate of the membrane at the end of each experiment. The research was supported by laboratory analysis of water samples collected at different locations to investigate the cause of the error in the data. Visual inspection of the results from the control unit showed that the data could not be reproduced with the current feed water quality. Statistical analysis using SAS JMP®, performed on the data obtained from UF2, determined that the error in the data was too significant, accounting for 42%. Laboratory inspection of the water samples concluded that the quality of the water feeding the UF membranes was worse than that of the raw water, leading to the conclusion that severe contamination occurred within the main feed tank, where the water was retained before arriving at the UF modules. The type of contamination present in the feed tank is yet to be investigated; in the meantime, frequent cleaning or flushing of the feed tank on a regular basis is recommended.
    • Organic Carbon Reduction in Seawater Reverse Osmosis (SWRO) Plants, Jeddah, Saudi Arabia

      Alshahri, Abdullah (2015-12)
      Desalination is considered to be a major source of usable water in the Middle East, especially in the Gulf countries, which lack fresh water resources. A key, and sometimes the only, solution for producing high quality water in these countries is seawater reverse osmosis (SWRO) desalination technology. Membrane fouling is an economic and operational defect that impacts the performance of SWRO desalination technology. To limit this fouling phenomenon, it is important to implement the appropriate type of intake and pre-treatment system design. In this study, two types of systems were investigated: a vertical well system (site A) and a surface-water intake at 9 m depth (site B). The purpose of this investigation is to study the impact of the different intake systems and pre-treatment stages in minimizing the concentrations of algae, bacteria, natural organic matter (NOM) and transparent exopolymer particles (TEP) in the feed water prior to pre-treatment, through the pre-treatment stages, and in the product water and concentrate. Water samples were collected from the surface seawater, the intakes (wells at site A, the 9 m deep open-ocean intake at site B), after the media filter, after the cartridge filter, and from the permeate and reject streams. The measured parameters included physical parameters, algae, bacteria, total organic carbon (TOC), fractions of dissolved NOM, and particulate and colloidal TEP. The results of this study show that the natural filtration and biological treatment of the seawater occurring in the aquifer matrix are very effective in improving the raw water quality to a significant degree. At site A, algae and biopolymers were 100% removed, the bacterial concentrations were significantly reduced, and roughly 50% or more of the TOC concentration was eliminated by the aquifer matrix. The aquifer feeding the vertical wells also reduced TEP concentrations, but to differing degrees. At site B, there was only a slight decrease in the concentrations of algae, bacteria, TOC, NOM, and TEP in the feed water at 9 m depth compared to the surface seawater; there, the pre-treatment was significantly effective and the improvements in reducing the membrane fouling potential were substantial. Investigation of the permeate stream showed some breakthrough of bacteria, which is of concern because it may indicate a problem within the membrane system (e.g., a broken seal or perforation). The aquifer feeding the wells in the subsurface system plays the main role in the improvement of water quality, so the pre-treatment appears less effective at the site A plant. This demonstrates that a subsurface intake is better than an open-ocean intake in terms of providing better raw water quality and, ultimately, reducing membrane biofouling.
    • Oxidation and pyrolysis study on different gasoline surrogates in the jet-stirred reactor

      Almalki, Maram M. (2018-05)
      A better understanding and control of internal combustion engine pollutants require a more insightful investigation of gasoline oxidation chemistry. An oxidation study was performed on n-heptane, iso-octane, their binary mixtures (Primary Reference Fuels, PRFs), and nine hydrocarbon mixtures that represent the second generation of gasoline surrogates (multi-component surrogates). This study aims to develop a better understanding of the combustion reaction by studying the oxidation of different fuels inside a jet-stirred reactor and by numerically simulating the reaction using different models under the following conditions: a pressure of 1 bar, temperatures of 500-1050 K, residence times of 1.0 and 2.0 s, and two fuel-to-oxygen equivalence ratios (ϕ = 0.5 and 1.0). Intermediate and product species mole fractions versus temperature profiles were measured experimentally using a gas chromatograph (GC). The experiments covered both the high- and low-temperature regions: high-temperature oxidation showed similar behavior for the different compositions, but low-temperature oxidation showed a significant dependence on the composition of the surrogates. Additionally, the effect of octane number on oxidation chemistry was investigated, and it was found that low octane number surrogates were more reactive than high octane number surrogates in the low-temperature regime. Furthermore, kinetic analysis was conducted to provide an insightful understanding of the different factors governing fuel reactivity. In addition, the pyrolysis of two toluene primary reference fuel (TPRF) mixtures (TPRF70 and TPRF97.5), representing low-octane (research octane number 70) and high-octane (research octane number 97.5) gasolines, was studied in the jet-stirred reactor coupled with GC analysis to investigate the formation of soot and polycyclic aromatic hydrocarbons (PAH).
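      For clarity, the equivalence ratio quoted above follows the standard definition

          \phi = \frac{(n_{\mathrm{fuel}} / n_{\mathrm{O_2}})}{(n_{\mathrm{fuel}} / n_{\mathrm{O_2}})_{\mathrm{stoich}}},

      so ϕ = 1.0 denotes a stoichiometric fuel-oxygen mixture and ϕ = 0.5 a fuel-lean one.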