• Joint Preprocessor-Based Detectors for One-Way and Two-Way Cooperative Communication Networks

      Abuzaid, Abdulrahman I. (2014-05)
      Efficient receiver designs for cooperative communication networks are becoming increasingly important. In previous work, cooperative networks communicated using L relays. As the receiver is constrained, channel shortening and reduced-rank techniques were employed to design a preprocessing matrix that reduces the length of the received vector from L to U. In the first part of this work, a receiver structure is proposed that combines our proposed threshold selection criteria with the joint iterative optimization (JIO) algorithm based on the mean square error (MSE). Our receiver assists in determining the optimal U. Furthermore, it provides the freedom to choose U for each frame depending on the tolerable difference allowed in MSE. Our study and simulation results show that, by choosing an appropriate threshold, it is possible to gain in terms of complexity savings with no or minimal effect on the BER performance of the system. The effect of channel estimation on the performance of the cooperative system is also investigated. In the second part of this work, a joint preprocessor-based detector for cooperative communication networks is proposed for one-way and two-way relaying. This joint preprocessor-based detector operates on the principle of minimizing the symbol error rate (SER) instead of the MSE. For a realistic assessment, pilot symbols are used to estimate the channel. Our simulations show that the proposed detector achieves the same SER performance as the maximum likelihood (ML) detector with all participating relays. Additionally, our detector outperforms selection combining (SC), the channel shortening (CS) scheme and reduced-rank techniques when using the same U. Finally, our proposed scheme has the lowest computational complexity.
    • Joint Subcarrier Pairing and Resource Allocation for Cognitive Network and Adaptive Relaying Strategy

      Soury, Hamza (2012-05)
      Recent measurements show that the spectrum is under-utilized by licensed users in wireless communication. Cognitive radio (CR) has been proposed as a suitable solution to manage the inefficient usage of the spectrum and to increase the coverage area of wireless networks. The concept is based on allowing a group of secondary users (SUs) to share the unused radio spectrum originally owned by the primary users (PUs). The operation of CR should not cause harmful interference to the PUs. On the other hand, relayed transmission increases the coverage and achievable capacity of communication systems, in particular CR systems. There are many types of cooperative communication, but the two main ones are decode-and-forward (DAF) and amplify-and-forward (AAF). Adaptive relaying is a technique that achieves the benefits of both by switching the forwarding strategy according to the quality of the signal. In this dissertation, we investigate power allocation for an adaptive relaying protocol (ARP) scheme in a cognitive system by maximizing the end-to-end rate and searching for the best carrier pairing distribution. The optimization problem is subject to interference and power budget constraints. The simulation results confirm the efficiency of the proposed adaptive relaying protocol in comparison with other relaying techniques, as well as the impact of the chosen pairing strategy.
    • Kinetic Studies of Oxidative Coupling of Methane Reaction on Model Catalysts

      Khan, Abdulaziz M. (2016-04-26)
      With the increasing production of natural gas as a result of technological advances, converting methane to more valuable products has become a must. One of the most attractive processes that allows utilization of the world's most abundant hydrocarbon is oxidative coupling. The main advantage of this process is its ability to convert methane into higher paraffins and olefins (primarily C2) directly, using a single reactor. Nevertheless, low C2+ yields have prevented the process from being commercialized, despite a great number of attempts to prepare catalysts that would make it economically viable. Given these limitations, understanding the mechanism and kinetics of the reaction can help improve catalyst performance. The reaction involves the formation of methyl radicals that undergo gas-phase radical reactions. CH4 activation is believed to be carried out by surface oxygen species. However, recent studies showed that, in addition to the surface-oxygen-mediated pathway, an OH-radical-mediated pathway contributes substantially to CH4 activation. Experiments on Li/MgO, Sr/La2O3 and Na2WO4/SiO2 catalysts revealed varied behavior in activity and selectivity. In addition, analysis of the water effect showed that Li/MgO deactivates in the presence of water due to sintering and the loss of active sites. A negative effect on C2 yield and CH4 conversion rate was likewise observed for Sr/La2O3 with increasing water partial pressure. Na2WO4/SiO2, by contrast, responded positively to water in terms of CH4 conversion and C2 yield. Moreover, the increase in CH4 conversion rate was found to be proportional to P_O2^(1/4) * P_H2O^(1/2), which is consistent with the formation of OH radicals and the OH-mediated pathway. Experiments using a ring-dye laser, of the kind used to detect OH in combustion studies, were attempted in order to detect OH radicals in the gas phase above the catalyst. Nevertheless, the signals obtained were too noisy to detect OH at the expected few-ppm concentrations. Further optimization of the experimental setup is required.
    • Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

      Wang, Cheng (2018-05-17)
      Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin plays an important role in gene expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us with an opportunity to study how the 3D structures of chromatin are organized. Based on data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithm to evaluate their performance. In our work, we first conducted a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method that combines population Hi-C and single-cell Hi-C data without ad hoc parameters. We also designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by the different modeling methods. Finally, the results of large-scale comparative tests indicated that our alignment algorithms significantly outperform those in the literature.
    • Large-Scale Context-Aware Volume Navigation using Dynamic Insets

      Al-Awami, Ali K. (2012-07)
      The latest developments in electron microscopy (EM) technology produce high-resolution images that enable neuroscientists to identify and piece together the complex neural connections in a nervous system. However, because of the massive size and underlying complexity of this kind of data, processing, navigation and analysis suffer drastically in terms of time and effort. In this work, we propose the use of state-of-the-art navigation techniques, such as dynamic insets, built on a peta-scale volume visualization framework to provide focus and context-awareness and to help neuroscientists analyze, reconstruct, navigate and explore EM neuroscience data.
    • Learning Gene Regulatory Networks Computationally from Gene Expression Data Using Weighted Consensus

      Fujii, Chisato (2015-04-16)
      Gene regulatory networks capture the relationships between genes, allowing us to understand gene regulatory interactions in systems biology. Gene expression data from microarray experiments are used to infer gene regulatory networks. However, microarray data are discrete, noisy and non-linear, which makes learning the networks a challenging problem, and existing gene network inference methods do not give consistent results. The current state-of-the-art approach uses an average-ranking-based consensus method to combine and average the ranked predictions from individual methods. However, each individual method then makes an equal contribution to the consensus prediction. We have developed a linear programming-based consensus approach that uses weights learned by linear programming, so that the individual methods contribute differently depending on their performance. Our results reveal that assigning different weights to individual methods, rather than giving them equal weights, improves the performance of the consensus. The linear programming-based consensus method was evaluated and had the best performance on the in silico and Saccharomyces cerevisiae networks, and the second best on the Escherichia coli network, where it was outperformed only by the Inferelator Pipeline method, which gives inconsistent results across a wide range of microarray data sets.
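      The weighted-consensus idea can be sketched as a weighted average of per-method rank matrices. In the sketch below the weights are supplied by hand as placeholders for the values the thesis learns by linear programming; the matrices and function name are illustrative, not the thesis code.

```python
import numpy as np

def weighted_consensus(rank_matrices, weights):
    """Combine per-method edge rankings into one consensus ranking.

    rank_matrices : list of (n x n) arrays; a smaller value means the
                    edge is ranked as more likely to be regulatory.
    weights       : per-method weights (stand-ins for the values the
                    linear program would learn from performance data).
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize to sum to 1
    return sum(w * r for w, r in zip(weights, rank_matrices))

# Toy example: two inference methods ranking 3x3 candidate edges.
m1 = np.array([[3., 1., 2.], [2., 3., 1.], [1., 2., 3.]])
m2 = np.array([[1., 2., 3.], [3., 1., 2.], [2., 3., 1.]])

equal    = weighted_consensus([m1, m2], [0.5, 0.5])   # average-ranking consensus
weighted = weighted_consensus([m1, m2], [0.8, 0.2])   # trust method 1 more
```

      With equal weights this reduces to the average-ranking consensus; learned, unequal weights shift the consensus toward the better-performing methods.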
    • Level Set Projection Method for Incompressible Navier-Stokes on Arbitrary Boundaries

      Williams-Rioux, Bertrand (2012-01-12)
      A second-order level set projection method for the incompressible Navier-Stokes equations is proposed to solve flow around arbitrary geometries. We use a rectilinear grid with collocated, cell-centered velocity and pressure. An explicit Godunov procedure is used to address the nonlinear advection terms, and an implicit Crank-Nicolson method to update the viscous effects. An approximate pressure projection is applied at the end of each time step, using multigrid as a conventional fast iterative solver. The level set method developed by Osher and Sethian [17] is implemented to impose the true momentum and pressure boundary conditions through the advection of a distance function, as proposed by Aslam [3]. Numerical results for the Strouhal number and drag coefficients validated the model with good accuracy for flow over a cylinder in the parallel shedding regime (47 < Re < 180). Simulations of an array of cylinders and of an oscillating cylinder were performed, the latter demonstrating our method's ability to handle dynamic boundary conditions.
    • Light Condensation and Localization in Disordered Photonic Media: Theory and Large Scale ab initio Simulations

      Toth, Laszlo Daniel (2013-05-07)
      Disordered photonics is the study of light in random media. In a disordered photonic medium, multiple scattering of light and coherence, together with the fundamental principle of reciprocity, produce a wide range of interesting phenomena, such as enhanced backscattering and Anderson localization of light. They are also responsible for the existence of modes in these random systems. It is known that processes analogous to Bose-Einstein condensation can occur in classical wave systems, too. Classical condensation has been studied in several contexts in photonics: pulse formation in lasers, mode-locking theory and coherent emission of disordered lasers. All these systems share the common theme of possessing a large ensemble of waves or modes, together with nonlinearity, dispersion or gain. In this work, we study light condensation and its connection with light localization in a disordered, passive dielectric medium. We develop a theory for the modes inside the disordered resonator, which combines the Feshbach projection technique with spin-glass theory and statistical physics. In particular, starting from Maxwell's equations, we map the system to a spherical p-spin model with p = 2. The spins are replaced by modes, and the temperature is related to fluctuations in the environment. We study the equilibrium thermodynamics of the system in a general framework and show that two distinct phases exist: a paramagnetic phase, where all the modes oscillate randomly, and a condensed phase, where the energy condenses onto a single mode. The thermodynamic quantities can be explicitly interpreted and can also be computed from the disorder-averaged time-domain correlation function. We launch an ab initio simulation campaign using our own code and the Shaheen supercomputer to test the theoretical predictions. We construct photonic samples of varying disorder and find computationally feasible ways to obtain the thermodynamic quantities. We observe the phase transition and also link the condensation process to localization. Our research could be a step towards the ultimate goal: building a "photonic mode condenser", which transforms a broadband spectrum into a narrow one (ideally, a single mode) with minimal energy loss, aided solely by disorder.
    • Light Management in Optoelectronic Devices with Disordered and Chaotic Structures

      Khan, Yasser (2012-07)
      Through experimental realization, the energy harvesting capabilities of chaotic microstructures were explored. Incident photons falling into chaotic trajectories resulted in energy buildup at certain frequencies, and consequently a manyfold enhancement in light trapping was observed. These ellipsoid-like chaotic microstructures demonstrated 25% enhancement in light trapping at 450 nm excitation and 15% enhancement at 550 nm excitation. Optimization of these structures can drive novel chaos-assisted energy harvesting systems. In subsequent sections of the thesis, the prospect of broadband light extraction from white light-emitting diodes was investigated, a largely unaddressed but quintessential problem in solid-state lighting. Size-dependent scattering allows microstructures to interact strongly with narrow-band light; if disorder is introduced into the spread and sizes of the microstructures, broadband light extraction becomes possible. A novel scheme using Voronoi tessellation to quantify disorder in physical systems was also introduced, and a link between Voronoi disorder and the state disorder of statistical mechanics was established. Overall, this thesis investigates some nascent concepts in disorder and chaos for efficiently managing electromagnetic waves in optoelectronic devices.
    • Linear Simulations of the Cylindrical Richtmyer-Meshkov Instability in Hydrodynamics and MHD

      Gao, Song (2013-05)
      The Richtmyer-Meshkov instability occurs when density-stratified interfaces are impulsively accelerated, typically by a shock wave. We present a numerical method to simulate the Richtmyer-Meshkov instability in cylindrical geometry. The ideal MHD equations are linearized about a time-dependent base state to yield linear partial differential equations governing the perturbed quantities. Convergence tests demonstrate that second order accuracy is achieved for smooth flows, and the order of accuracy is between first and second order for flows with discontinuities. Numerical results are presented for cases of interfaces with positive Atwood number and purely azimuthal perturbations. In hydrodynamics, the Richtmyer-Meshkov instability growth of perturbations is followed by a Rayleigh-Taylor growth phase. In MHD, numerical results indicate that the perturbations can be suppressed for sufficiently large perturbation wavenumbers and magnetic fields.
    • Link Label Prediction in Signed Citation Network

      Akujuobi, Uchenna Thankgod (2016-04-12)
      Link label prediction is the problem of predicting the missing labels or signs of all the unlabeled edges in a network. In signed networks, these labels can be either positive or negative. In recent years, different algorithms have been proposed, based on regression, trust propagation and matrix factorization. These approaches draw on ideas from social theories, and most of them predict a single missing label given that the labels of all other edges are known. However, in most real-world social graphs, the number of labeled edges is usually smaller than the number of unlabeled ones, so predicting a single edge label at a time requires multiple runs and is more computationally demanding. In this thesis, we study the link label prediction problem on a signed citation network with missing edge labels. Our citation network consists of papers from three major machine learning and data mining conferences together with their references, with edges showing the relationships between them. An edge in our network is labeled positive (dataset-relevant) if the reference is based on the dataset used in the citing paper, and negative otherwise. We present three approaches to predict the missing labels. The first converts the label prediction problem into a standard classification problem: we generate a set of features for each edge and adopt Support Vector Machines to solve the classification problem. In the second approach, we reformulate the graph so that edges are represented as nodes, with links showing the similarities between them, and adopt a label propagation method to propagate the known labels to nodes with unknown labels. In the third approach, we adopt a PageRank-style method: we rank the nodes according to the number of incoming positive and negative edges and set a threshold; based on the ranks, we infer that an edge is positive if it points to a node above the threshold. Experimental results on our citation network corroborate the efficacy of these approaches. With every edge labeled, we also performed additional network analysis: we extracted the sub-network of dataset-relevant edges and nodes in our citation network and detected the different communities within it. To understand the detected communities, we performed a case study on several dataset communities. The study shows a relationship between the major topic areas in a dataset community and the data sources in that community.
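      The ranking-and-threshold approach can be sketched with a signed in-degree score standing in for the full PageRank computation; the function, the toy network, and the threshold below are hypothetical simplifications, not the thesis implementation.

```python
def predict_edge_labels(edges, known, threshold=0):
    """Label unknown edges from target-node scores.

    edges     : list of (src, dst) pairs in the citation network
    known     : dict {(src, dst): +1 or -1} of edges whose label is given
    threshold : score above which a target node counts as dataset-relevant

    Each node is scored by its signed in-degree (incoming positives minus
    incoming negatives); an unlabeled edge is predicted positive when its
    target node scores above the threshold. A simplified stand-in for the
    PageRank-based ranking described in the thesis.
    """
    score = {}
    for (src, dst), sign in known.items():
        score[dst] = score.get(dst, 0) + sign
    return {(s, d): (1 if score.get(d, 0) > threshold else -1)
            for (s, d) in edges if (s, d) not in known}

# Toy network: paper "b" has two positive incoming references, "e" one negative.
known = {("a", "b"): 1, ("c", "b"): 1, ("a", "e"): -1}
edges = list(known) + [("d", "b"), ("d", "e")]
pred = predict_edge_labels(edges, known)
```

      Here the unlabeled edge pointing at the highly ranked node "b" is predicted positive, while the one pointing at "e" is predicted negative.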
    • Liquid Marbles

      Khalil, Kareem (2012-12)
      Granulation, the process of forming granules from a combination of base powders and binder liquids, has been a subject of research for almost 50 years, studied extensively for its vast applications, primarily in the pharmaceutical industry. The principal aim of granulation is to form granules comprising the active pharmaceutical ingredients (APIs), which have more desirable handling and flowability properties than raw powders. It is also essential to ensure an even distribution of active ingredients within a tablet, with the goal of achieving time-controlled release of drugs. Due to the product-specific nature of the industry, however, data is largely empirical [1]. For example, the raw powders used can vary in size by two orders of magnitude, with narrow or broad size distributions. The physical properties of the binder liquids can also vary significantly depending on the powder properties and the required granule size. Some significant progress has been made in bettering our understanding of the overall granulation process [1], and it is widely accepted that the initial nucleation/wetting stage, when the binder liquid first wets the powders, is key to the whole process. As such, many experimental studies have been conducted in an attempt to elucidate the physics of this first stage [1], with two main mechanisms being observed, classified by Iveson [1] as the "Traditional description" and the "Modern Approach". See Figure 1 for a graphical definition of these two mechanisms. Recent studies have focused on the latter approach [1], and a new, exciting development in this field is the liquid marble. This interesting formation occurs when a liquid droplet interacts with a hydrophobic (or superhydrophobic) powder: the droplet can become encased in the powder, which essentially provides a protective "shell" or "jacket" for the liquid inside [2]. The liquid inside is then isolated from contact with other solids or liquids and has some fascinating physical properties, which will be described later on. The main potential use of these liquid marbles appears to be the formation of novel, hollow granules [3], which may have desirable properties in specific pharmaceutical applications (e.g. respiratory devices). They have also been shown to be a highly effective means of water recovery, and potentially useful as micro-transporters and micro-reactors [4]. However, many studies in the literature are essentially proof-of-concept approaches for applications, and a systematic study of the dynamics of marble formation and the first interactions of the liquid droplet with the powder is lacking. This is the motivation for this research project, in which we aim to provide such information through an experimental study of drop impact onto hydrophobic powders using high-speed imaging.
    • Living Polycondensation: Synthesis of Well-Defined Aromatic Polyamide-Based Polymeric Materials

      Alyami, Mram Z. (2016-11)
      Chain growth condensation polymerization is a powerful tool for the synthesis of well-defined polyamides. This thesis focuses, on the one hand, on the synthesis of well-defined aromatic polyamides with different aminoalkyl pendant groups, with low polydispersity and controlled molecular weights, and, on the other hand, on studying their thermal properties. In the first project, well-defined poly(N-octyl-p-aminobenzoate) and poly(N-butyl-p-aminobenzoate) were synthesized and, for the first time, their thermal properties were studied. In the second project, ethyl 4-aminobenzoate, ethyl 4-octylaminobenzoate and 4-(hydroxymethyl)benzoic acid were used as novel, efficient initiators of ε-caprolactone with t-BuP2 as a catalyst. A macroinitiator and a macromonomer of poly(ε-caprolactone) were synthesized with ethyl 4-octylaminobenzoate and ethyl 4-aminobenzoate as initiators to afford polyamide-block-poly(ε-caprolactone) and polyamide-graft-poly(ε-caprolactone) by chain growth condensation polymerization (CGCP). In the third project, a new study of chain growth condensation polymerization explored the possibility of synthesizing new polymers and examined their thermal properties. For this purpose, poly(N-cyclohexyl-p-aminobenzoate) and poly(N-hexyl-p-aminobenzoate) were synthesized with low polydispersity and controlled molecular weights.
    • Local Likelihood Approach for High-Dimensional Peaks-Over-Threshold Inference

      Baki, Zhuldyzay (2018-05-14)
      Global warming is affecting the Earth's climate year by year, with the biggest difference observable in increasing temperatures in the World Ocean. Following the long-term global ocean warming trend, average sea surface temperatures across the global tropics and subtropics have increased by 0.4–1°C over the last 40 years. These rates are even higher in semi-enclosed southern seas, such as the Red Sea, threatening the survival of thermally sensitive species. As average sea surface temperatures are projected to continue to rise, careful study of future developments in extreme temperatures is paramount for the sustainability of the marine ecosystem and its biodiversity. In this thesis, we use Extreme-Value Theory to study sea surface temperature extremes in a gridded dataset comprising 16703 locations over the Red Sea. The data were provided by the Operational SST and Sea Ice Analysis (OSTIA), a satellite-based data system designed for numerical weather prediction. After pre-processing the data to account for seasonality and global trends, we analyze the marginal distribution of extremes, defined as observations exceeding a high, spatially varying threshold, using the Generalized Pareto distribution. This model allows us to extrapolate beyond the observed data to compute the 100-year return levels over the entire Red Sea, confirming the increasing trend of extreme temperatures. To understand the dynamics governing the dependence of extreme temperatures in the Red Sea, we propose a flexible local approach based on R-Pareto processes, which extend the univariate Generalized Pareto distribution to the spatial setting. Assuming that the sea surface temperature varies smoothly over space, we perform inference based on the gradient score method over small regional neighborhoods, in which the data are assumed to be stationary in space.
This approach allows us to capture spatial non-stationarity, and to reduce the overall computational cost by taking advantage of distributed computing resources. Our results reveal an interesting extremal spatial dependence structure: in particular, from our estimated model, we conclude that significant extremal dependence prevails for distances up to about 2500 km, which roughly corresponds to the Red Sea length.
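      The marginal peaks-over-threshold step can be sketched with SciPy on synthetic data: a standard exponential series stands in for a de-seasonalized OSTIA grid-cell series, and the 95% threshold and "100-year" horizon are illustrative choices, not the thesis settings.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for one de-seasonalized temperature series.
rng = np.random.default_rng(42)
data = rng.standard_exponential(20000)

u = np.quantile(data, 0.95)          # high threshold
exc = data[data > u] - u             # peaks over threshold (exceedances)
zeta = exc.size / data.size          # probability of exceeding u

# Fit the Generalized Pareto distribution to the exceedances (location fixed at 0).
shape, _, scale = stats.genpareto.fit(exc, floc=0.0)

# m-observation return level: the value exceeded once per m observations on average.
m = 365 * 100                        # "100-year" horizon for daily data
return_level = u + stats.genpareto.ppf(1.0 - 1.0 / (m * zeta),
                                       shape, loc=0.0, scale=scale)
```

      For exponential data the fitted GPD shape is near zero and the scale near one, and the return level extrapolates well beyond the largest observation, which is exactly how the 100-year Red Sea levels are obtained from a finite record.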
    • Local Ray-Based Traveltime Computation Using the Linearized Eikonal Equation

      Almubarak, Mohammed S. (2013-05)
      The computation of traveltimes plays a critical role in the conventional implementations of Kirchhoff migration. Finite-difference-based methods are considered one of the most effective approaches for traveltime calculations and are therefore widely used. However, these eikonal solvers are mainly used to obtain early-arrival traveltime. Ray tracing can be used to pick later traveltime branches, besides the early arrivals, which may lead to an improvement in velocity estimation or in seismic imaging. In this thesis, I improved the accuracy of the solution of the linearized eikonal equation by constructing a linear system of equations (LSE) based on finite-difference approximation, which is of second-order accuracy. The ill-conditioned LSE is initially regularized and subsequently solved to calculate the traveltime update. Numerical tests proved that this method is as accurate as the second-order eikonal solver. Later arrivals are picked using ray tracing. These traveltimes are binned to the nearest node on a regular grid and empty nodes are estimated by interpolating the known values. The resulting traveltime field is used as an input to the linearized eikonal algorithm, which improves the accuracy of the interpolated nodes and yields a local ray-based traveltime. This is a preliminary study and further investigation is required to test the efficiency and the convergence of the solutions.
    • Location-Based Resource Allocation for OFDMA Cognitive Radio Systems

      Ghorbel, Mahdi (2011-05)
      Cognitive radio is one of the hot topics in emerging and future wireless communication. It has been proposed as a suitable solution to the spectrum scarcity caused by the increase in frequency demand. The concept is based on allowing unlicensed users, called cognitive or secondary users, to share the unoccupied frequency bands with their owners, called the primary users, under constraints on the interference they cause to them. To estimate this interference, the cognitive system usually uses the channel state information to the primary user, which is often impractical to obtain. Instead, we propose to use location information, which is easier to obtain, to estimate this interference. The purpose of this work is to propose a subchannel and power allocation method that maximizes the secondary users' total capacity under the constraints of a limited power budget and a total interference to the primary user kept below a certain threshold. We model the problem as a constrained optimization problem for both the downlink and uplink cases. Then, we propose low-complexity resource allocation schemes based on the waterfilling algorithm. The simulation results show the efficiency of the proposed method in comparison with an exhaustive search algorithm.
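      The waterfilling building block can be sketched as follows. This minimal version handles only the total power budget; the thesis adds the interference constraint to the primary user on top of it, and the gains and budget below are illustrative.

```python
import numpy as np

def waterfill(gains, p_total, tol=1e-9):
    """Classic waterfilling: split p_total across subchannels with the
    given channel gains to maximize sum log2(1 + g_i * p_i).

    Each channel gets p_i = max(mu - 1/g_i, 0); the water level mu is
    found by bisection so that the powers sum to the budget.
    """
    inv = 1.0 / np.asarray(gains, dtype=float)   # per-channel "floor" levels
    lo, hi = inv.min(), inv.max() + p_total
    while hi - lo > tol:                         # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

# Three subchannels: the strongest gets the most power, the weakest none.
p = waterfill([2.0, 1.0, 0.25], p_total=3.0)
```

      With these gains the water level settles at 2.25, so the allocation is (1.75, 1.25, 0): the weakest subchannel sits above the water level and is switched off.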
    • Low Complexity Precoder and Receiver Design for Massive MIMO Systems: A Large System Analysis using Random Matrix Theory

      Sifaou, Houssem (2016-05)
      Massive MIMO systems have been shown to be a promising technology for the next generations of wireless communication networks. Realizing the attractive merits promised by massive MIMO requires advanced linear precoding and receiving techniques to mitigate the interference in downlink and uplink transmissions. This work considers precoder and receiver design in massive MIMO systems. We first consider the design of the linear precoder and receiver that maximize the minimum signal-to-interference-plus-noise ratio (SINR) subject to a given power constraint. The analysis is carried out in the asymptotic regime in which the number of BS antennas and the number of users grow large with a bounded ratio. This allows us to leverage tools from random matrix theory to approximate the parameters of the optimal linear precoder and receiver by their deterministic equivalents. Such a result is of valuable practical interest, as it provides a handier way to implement the optimal precoder and receiver. To reduce the complexity further, we propose to apply the truncated polynomial expansion (TPE) concept on a per-user basis to approximate the inverses of the large matrices that appear in the expressions for the optimal linear transceivers. Using tools from random matrix theory, we determine deterministic approximations of the SINR and the transmit power in the asymptotic regime. Then, the optimal per-user weight coefficients that solve the max-min SINR problem are derived. The simulation results show that the proposed precoder and receiver provide close-to-optimal performance while significantly reducing the computational complexity. In the second part of this work, the per-user TPE technique is applied to the optimal linear precoding that minimizes the transmit power while satisfying a set of target SINR constraints. Owing to the emerging research field of green cellular networks, this problem is receiving increasing interest nowadays. Closed-form expressions for the optimal parameters of the proposed low-complexity precoding for power minimization are derived. Numerical results show that the proposed power minimization precoding approximates the performance of the optimal linear precoding well while being more practical to implement.
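      The core TPE idea of replacing a matrix inverse with a low-order matrix polynomial can be sketched as a truncated Neumann-type series. The matrix sizes, the regularized Gram matrix, and the choice of scaling below are illustrative, not the thesis system model or its optimized per-user coefficients.

```python
import numpy as np

def tpe_inverse(A, order, alpha=None):
    """Truncated polynomial expansion of A^{-1} for symmetric positive
    definite A:  A^{-1} ≈ alpha * sum_{k=0}^{order} (I - alpha*A)^k.

    Converges when the spectral radius of (I - alpha*A) is below one;
    alpha = 2 / (lambda_min + lambda_max) is a standard safe choice.
    """
    n = A.shape[0]
    if alpha is None:
        w = np.linalg.eigvalsh(A)                # ascending eigenvalues
        alpha = 2.0 / (w[0] + w[-1])
    M = np.eye(n) - alpha * A
    term = np.eye(n)
    acc = np.eye(n)
    for _ in range(order):
        term = term @ M                          # accumulate M^k; no inversion
        acc = acc + term
    return alpha * acc

# Example: a regularized Gram matrix of the kind that appears in RZF precoding.
rng = np.random.default_rng(1)
H = rng.standard_normal((8, 4))
A = H.T @ H / 8 + np.eye(4)
approx = tpe_inverse(A, order=30)
```

      Only matrix-vector products are needed in practice, which is where the complexity savings over an exact inverse come from; the truncation order trades accuracy for computation.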
    • Low cost and conformal microwave water-cut sensor for optimizing oil production process

      Karimi, Muhammad Akram (2015-08)
      Efficient oil production and refining processes require the precise measurement of the water content in oil (i.e., the water-cut) which is extracted from a production well as a byproduct. Traditional water-cut (WC) laboratory measurements are precise but incapable of providing real-time information, while recently reported in-line WC sensors (both in research and industry) are usually incapable of sensing the full WC range (0–100%), and are bulky, expensive and non-scalable to the variety of pipe sizes used in the oil industry. This work presents a novel implementation of a planar microwave T-resonator for fully non-intrusive, in situ WC sensing over the full range of operation, i.e., 0–100%. As opposed to non-planar resonators, the choice of a planar resonator has enabled its direct implementation on the pipe surface using low-cost fabrication methods. The WC sensor makes use of the series resonance introduced by a λ/4 open shunt stub placed in the middle of a microstrip line. The detection mechanism is based on measuring the T-resonator's resonance frequency, which varies with the relative percentages of oil and water (due to the difference in their dielectric properties). In order to implement the planar T-resonator-based sensor on the curved surface of the pipe, a novel approach utilizing two ground planes is proposed in this work. The innovative use of dual ground planes makes this sensor scalable to the wide range of pipe sizes present in the oil industry. The design and optimization of this sensor were performed in an electromagnetic Finite Element Method (FEM) solver, i.e., the High Frequency Structural Simulator (HFSS), and the dielectric properties of oil, water and their emulsions at different WCs used in the simulation model were measured using a SPEAG dielectric assessment kit (DAK-12). The simulation results were validated through characterization of fabricated prototypes.
Initial rapid prototyping was completed using copper tape, after which a novel reusable 3D-printed mask-based fabrication process was also successfully implemented; the process resembles screen printing extended to a curved 3D surface. In order to verify the design’s applicability to the actual scenario of oil wells, where an oil/water mixture flows through the pipes, a basic flow loop was constructed in the IMPACT laboratory at KAUST. The dynamic measurements in the flow loop showed that the WC sensor design is equally applicable to flowing mixtures. The proposed design is capable of sensing the WC with fine resolution owing to its wide sensing range, spanning the 80 – 190 MHz frequency band. The experimental results for these low-cost and conformal WC sensors are promising, and further characterization and optimization of these sensors according to oil-field conditions will enable their widespread use in the oil industry.
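The detection principle described above can be sketched numerically: a λ/4 open stub resonates at f₀ = c / (4L√ε_eff), so a higher water fraction (higher permittivity) pulls the resonance down. In the sketch below, the stub length, the oil/water permittivity values, and the deliberately crude linear volumetric mixing model are all illustrative assumptions, not the thesis's calibrated design (whose operating band is 80 – 190 MHz); real emulsions require a proper mixing rule and measured dielectric data.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s


def mixture_permittivity(water_cut, eps_oil=2.2, eps_water=80.0):
    """Crude linear volumetric mixing model -- illustrative only.
    water_cut is the water volume fraction in [0, 1]."""
    return water_cut * eps_water + (1.0 - water_cut) * eps_oil


def stub_resonance_hz(stub_len_m, eps_eff):
    """Series resonance of a lambda/4 open shunt stub:
    f0 = c / (4 * L * sqrt(eps_eff))."""
    return C0 / (4.0 * stub_len_m * math.sqrt(eps_eff))


stub = 0.25  # hypothetical 25 cm stub, pipe-scale dimension
for wc in (0.0, 0.5, 1.0):
    f0 = stub_resonance_hz(stub, mixture_permittivity(wc))
    print(f"WC = {wc:4.0%}: f0 = {f0 / 1e6:6.1f} MHz")
```

Sweeping the water cut from 0 % to 100 % shows the monotone downward shift in resonance frequency that the sensor exploits as its measurand.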
    • Low Damage, High Anisotropy Inductively Coupled Plasma for Gallium Nitride based Devices

      Ibrahim, Youssef H. (2013-05-27)
Group III-nitride semiconductors possess unique properties, which make them versatile materials suited to many applications. Structuring vertical and exceptionally smooth GaN profiles is crucial for efficient optical device operation. The processing requirements for laser devices and ridge waveguides are stringent compared to those of LEDs and other electronic devices. Due to the strong bonding and chemically inert nature of GaN, dry etching becomes a critical fabrication step. The surface morphology and facet etch angle are analyzed using SEM and AFM measurements. The influence of different mask materials is also studied, including Ni as well as a SiO2/resist bilayer. The high-selectivity Ni mask is found to produce high sidewall angles of ~79°. Processing parameters are optimized for both the mask material and GaN in order to achieve a highly anisotropic, smooth profile without resorting to additional surface treatment steps. Optimizing an SF6/O2 plasma etch process resulted in smooth SiO2 mask sidewalls. The dependence of the etch rate and GaN surface roughness on the RF power was also examined. Under a low 2 mTorr pressure, the RF and ICP powers were optimized to 150 W and 300 W, respectively, such that a smooth GaN morphology and smooth sidewalls were achieved with reduced ion damage. The AFM measurements of the etched GaN surface indicate a low RMS roughness ranging from 4.75 nm to 7.66 nm.
    • Low-Complexity Regularization Algorithms for Image Deblurring

      Alanazi, Abdulrahman (2016-11)
Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low- and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product to reduce the computations. Furthermore, in the case where the image is smooth, it is desirable to replace the regularization term in the RLS problem by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square-root regularized total variation (SRTV) formulation. Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM, and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known.
However, the developed algorithms also apply to blind image deblurring. Experimental results show that our proposed methods remain robust in the blind deblurring setting and outperform the other benchmark methods in terms of both output PSNR and SSIM values.
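When the blur is modeled as circular convolution, the RLS (Tikhonov) step underlying the approach above has a compact per-frequency closed form, which the following sketch implements. The function name, the synthetic test image, and the fixed λ are illustrative assumptions; the thesis's actual contribution is the automatic selection of λ, which is not reproduced here.

```python
import numpy as np


def rls_deblur(blurred, psf, lam):
    """Regularized least-squares (Tikhonov) deconvolution, assuming the blur
    acts as circular convolution with the given PSF.  Solving
    argmin_x ||Hx - y||^2 + lam * ||x||^2 decouples per Fourier coefficient:
    X = conj(H) * Y / (|H|^2 + lam).
    `lam` is a user-supplied regularization parameter in this sketch."""
    Hf = np.fft.fft2(psf, s=blurred.shape)  # transfer function of the PSF
    Yf = np.fft.fft2(blurred)
    Xf = np.conj(Hf) * Yf / (np.abs(Hf) ** 2 + lam)
    return np.real(np.fft.ifft2(Xf))


# Illustrative round trip: blur a random "image" with a 3x3 box PSF, restore it.
rng = np.random.default_rng(0)
x = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0  # 3x3 box blur, placed for circular convolution
y = np.real(np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(x)))
x_hat = rls_deblur(y, psf, lam=1e-8)
```

With noiseless data a very small λ nearly inverts the blur; as λ grows, the estimate trades fidelity for stability, which is exactly the balance the parameter-selection methods in the thesis aim to strike automatically.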