The spontaneous polarization (SP) and piezoelectric (PZ) constants of BxAl1-xN and BxGa1-xN (0 ≤ x ≤ 1) ternary alloys were calculated with the hexagonal structure as reference. The SP constants show moderate nonlinearity due to the volume deformation and the dipole moment difference between the hexagonal and wurtzite structures. The PZ constants exhibit significant bowing because of the large lattice mismatch between the binary constituents. Furthermore, the PZ constants of BxAl1-xN and BxGa1-xN become zero at boron compositions of ∼87% and ∼74%, respectively, indicating non-piezoelectricity at those compositions. The large range of SP and PZ constants of BxAl1-xN (BAlN) and BxGa1-xN (BGaN) can be beneficial for compound semiconductor device development. For instance, zero heterointerface polarization ΔP can be obtained in BAlN- and BGaN-based heterojunctions with appropriate B compositions, potentially eliminating the quantum-confined Stark effect in c-plane optical devices and thus removing the need for non-polar layers and substrates. Conversely, a large heterointerface polarization ΔP is also attainable, which is desirable for electronic devices.
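The zero-crossing compositions quoted above follow from a sign change in the composition-dependent PZ constant. A minimal sketch of locating such a crossing, assuming the common quadratic (bowing) interpolation P(x) = x·P_BN + (1−x)·P_AlN − b·x(1−x); the endpoint values and the bowing parameter b below are illustrative placeholders, not the computed values of this work:

```python
def pz_alloy(x, pz_bn, pz_host, b):
    """Quadratic (bowing) interpolation: P(x) = x*P_BN + (1-x)*P_host - b*x*(1-x)."""
    return x * pz_bn + (1.0 - x) * pz_host - b * x * (1.0 - x)

def zero_crossing(pz_bn, pz_host, b, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection for the boron composition x where the PZ constant vanishes."""
    f_lo = pz_alloy(lo, pz_bn, pz_host, b)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pz_alloy(mid, pz_bn, pz_host, b) * f_lo > 0:
            lo, f_lo = mid, pz_alloy(mid, pz_bn, pz_host, b)  # same side as lo
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical endpoint/bowing values chosen only so the curve changes sign in (0, 1):
x0 = zero_crossing(pz_bn=0.3, pz_host=-0.1, b=1.2)
```

With a measured or computed bowing parameter, the same bisection would recover the ∼87% and ∼74% compositions reported above.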
Bouacida, Nader; Alghadhban, Amer Mohammad JarAlla; Alalmaei, Shiyam Mohammed Abdullah; Mohammed, Haneen; Shihada, Basem (2017 IEEE International Conference on Communications (ICC), IEEE, 2017-07-31) [Conference Paper]
The controller is a critical piece of the SDN architecture, often described as the mastermind of SDN networks. Thus, its failure can cause a significant portion of the network to fail. Overload is one of the common causes of failure, since the controller is frequently invoked by new flows. Even though SDN controllers are often replicated, the significant recovery time can severely degrade the availability of the entire network. To overcome overload-induced controller failure in SDN, this paper proposes a novel controller offload solution for failure mitigation based on a prediction module that anticipates the presence of a harmful long-term load, since a long-standing load would eventually overwhelm the controller and lead to failure. To predict whether the load on the controller is short-term or long-term, we used three different classification algorithms: Support Vector Machine, k-Nearest Neighbors, and Naive Bayes. Our evaluation results demonstrate that the Support Vector Machine is well suited to detecting the type of load, achieving an accuracy of 97.93% in a real-time scenario. Moreover, our scheme successfully offloads the controller by switching between reactive and proactive modes in response to the prediction module's output.
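To illustrate the classification step, here is a minimal k-nearest-neighbors sketch (one of the three algorithms compared above) separating short-term from long-term load. The features (mean load, load slope) and the toy training points are illustrative assumptions, not the paper's actual feature set:

```python
import math

def knn_predict(train, labels, point, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    dists = sorted((math.dist(p, point), lbl) for p, lbl in zip(train, labels))
    votes = [lbl for _, lbl in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy training set: (mean load, load slope) -> load type
train = [(0.2, -0.02), (0.3, -0.01), (0.25, 0.0),   # short-term bursts
         (0.8, 0.03), (0.85, 0.05), (0.9, 0.04)]    # sustained long-term load
labels = ["short", "short", "short", "long", "long", "long"]

print(knn_predict(train, labels, (0.82, 0.04)))  # -> "long"
```

A real deployment would extract features from the controller's flow-request stream and trigger the offload (reactive-to-proactive switch) when "long" is predicted.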
β-Ga2O3 and wurtzite AlN have wide bandgaps of 4.5–4.9 eV and 6.1 eV, respectively. We calculated the in-plane lattice mismatch between the (−201) plane of β-Ga2O3 and the (0002) plane of AlN and found it to be 2.4%. This is the smallest mismatch between β-Ga2O3 and the binary III-nitrides, which is beneficial for the formation of a high-quality β-Ga2O3/AlN heterojunction. However, the valence and conduction band offsets (VBO and CBO) at the β-Ga2O3/AlN heterojunction have not yet been identified. In this study, a very thin (less than 2 nm) β-Ga2O3 layer was deposited on an AlN/sapphire template by pulsed laser deposition to form the heterojunction. High-resolution X-ray photoelectron spectroscopy revealed the core-level (CL) binding energies of Ga 3d and Al 2p with respect to the valence band maximum in the individual β-Ga2O3 and AlN layers, respectively. The separation between the Ga 3d and Al 2p CLs at the β-Ga2O3/AlN interface was also measured. The VBO was thus found to be −0.55 ± 0.05 eV, and consequently a staggered-gap (type II) heterojunction with a CBO of −1.75 ± 0.05 eV was determined. The identification of the band alignment of the β-Ga2O3/AlN heterojunction could facilitate the design of optical and electronic devices based on these and related alloys.
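The quoted CBO follows arithmetically from the measured VBO and the two bandgaps via CBO = VBO − (Eg,AlN − Eg,Ga2O3). A quick check, assuming the larger quoted β-Ga2O3 gap of 4.9 eV:

```python
EG_GA2O3 = 4.9   # eV, beta-Ga2O3 (upper end of the 4.5-4.9 eV range quoted above)
EG_ALN = 6.1     # eV, wurtzite AlN
VBO = -0.55      # eV, valence band offset measured by XPS (core-level method)

# Conduction band offset: band gaps and VBO fix the CBO.
CBO = VBO - (EG_ALN - EG_GA2O3)
print(round(CBO, 2))  # -> -1.75
```

Both offsets carrying the same sign is what makes the lineup staggered (type II) rather than straddling.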
Peng, Yifan; Dun, Xiong; Sun, Qilin; Heidrich, Wolfgang (ACM Transactions on Graphics, Association for Computing Machinery (ACM), 2017-11-22) [Article]
Computational caustics and light steering displays offer a wide range of interesting applications, ranging from artworks and architectural installations to energy-efficient HDR projection. In this work we expand on this concept by encoding several target images into pairs of front and rear phase-distorting surfaces. Different target holograms can be decoded by mixing and matching different front and rear surfaces under specific geometric alignments. Our approach, which we call mix-and-match holography, is made possible by moving from a refractive caustic image formation process to a diffractive, holographic one. This provides the extra bandwidth required to multiplex several images into pairs of surfaces.
Sifaou, Houssem; Kammoun, Abla; Park, Kihong; Alouini, Mohamed-Slim (IEEE Access, Institute of Electrical and Electronics Engineers (IEEE), 2017-11-27) [Article]
Visible light communication (VLC) is an emerging technique that uses light-emitting diodes to combine communication and illumination. It is considered a promising scheme for indoor wireless communication that can be deployed at reduced cost while offering high data rates. This paper focuses on the design of precoding and receiving schemes for downlink multi-user multiple-input multiple-output VLC systems using angle diversity receivers. Two major concerns must be addressed in such a design. The first is inter-user interference, inherent to a multi-user system, while the second stems from user mobility, which causes imperfect channel estimates. To address both concerns, we propose a robust precoder and receiver that solve the max-min SINR problem. The performance of the proposed VLC design is studied under different working conditions, and a significant gain of the proposed robust transceivers over their non-robust counterparts is observed.
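As a point of reference for the interference concern, a zero-forcing precoder is a simple non-robust baseline that cancels inter-user interference outright (the max-min SINR design above requires an iterative solver and is not reproduced here). The channel matrix below is synthetic; real VLC channels are nonnegative, which the uniform draw mimics:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.uniform(0.1, 1.0, size=(3, 4))   # 3 users, 4 LED luminaires (toy channel)

# Zero-forcing: W = H^T (H H^T)^{-1} inverts the channel on the right.
W = H.T @ np.linalg.inv(H @ H.T)

# The effective channel H @ W is the identity: no inter-user interference.
print(np.allclose(H @ W, np.eye(3)))  # -> True
```

Zero-forcing degrades quickly under the imperfect channel estimates caused by mobility, which is exactly the regime the robust max-min SINR design targets.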
Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter (arXiv, 2017-09-27) [Preprint]
Convolutional sparse coding (CSC) is an important building block of many computer vision applications, ranging from image and video compression to deep learning. We present two contributions to the state of the art in CSC. First, we significantly speed up the computation by proposing a new optimization framework that tackles the problem in the dual domain. Second, we extend the original formulation to higher dimensions in order to process a wider range of inputs, such as color images or HOG features. Our results show a significant speedup compared to the current state of the art in CSC.
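A key identity exploited by fast CSC solvers (dual- or frequency-domain alike) is that circular convolution becomes elementwise multiplication under the DFT, so the per-filter convolutions inside each solver iteration reduce to cheap pointwise products. A sketch of that identity on illustrative 1-D signals, not the paper's solver itself:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
d = rng.standard_normal(N)   # filter (zero-padded to the signal length)
z = rng.standard_normal(N)   # sparse code map (dense here, just for the check)

# Circular convolution computed via the DFT: ifft(fft(d) * fft(z))...
via_fft = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(z)))

# ...matches the direct O(N^2) circular convolution.
direct = np.array([sum(d[k] * z[(n - k) % N] for k in range(N))
                   for n in range(N)])
print(np.allclose(via_fft, direct))  # -> True
```

In 2-D and higher dimensions the same diagonalization applies per frequency, which is what makes the extended formulation tractable.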
Li, Lichun; Langbort, Cedric; Shamma, Jeff S. (arXiv, 2017-11-07) [Preprint]
This paper considers a zero-sum two-player asymmetric-information stochastic game in which only one player knows the system state, and the transition law is controlled by the informed player only. For the informed player, it has been shown that the security strategy depends only on the belief and the current stage. We provide LP formulations, whose size is only linear in the size of the uninformed player's action set, to compute both history-based and belief-based security strategies. For the uninformed player, we focus on the regret: the difference between 0 and the future payoff guaranteed by the uninformed player in every possible state. The regret is a real vector of the same size as the belief and depends only on the action of the informed player and the strategy of the uninformed player. This paper shows that the uninformed player has a security strategy that depends only on the regret and the current stage. LP formulations are then given to compute the history-based security strategy, the regret at every stage, and the regret-based security strategy. The sizes of these LP formulations are again linear in the size of the uninformed player's action set. Finally, an intrusion detection problem is studied to demonstrate the main results of this paper.
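The building block behind such LP formulations is the classical LP for a security (maximin) strategy in a one-shot zero-sum matrix game. A sketch using matching pennies as a toy game (value 0, uniform security strategy); the paper's stage-coupled, belief- and regret-based LPs are not reproduced here:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # row player's payoff matrix (matching pennies)

m, n = A.shape
# Variables [x_1, ..., x_m, v]: maximize v subject to x^T A >= v * 1^T, x a distribution.
c = np.zeros(m + 1)
c[-1] = -1.0                                  # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (x^T A)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.array([[1.0] * m + [0.0]])          # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print(np.round(x, 3), round(v, 3))  # -> [0.5 0.5] 0.0
```

Note the number of variables scales with the row player's action set while the inequality constraints scale with the column player's, mirroring the linear-size claim above.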
Visual Question Answering (VQA) models should have both high robustness and high accuracy. Unfortunately, most current VQA research focuses only on accuracy because proper methods to measure the robustness of VQA models are lacking. Our algorithm has two main modules. Given a natural language question about an image, the first module takes the question as input and outputs the basic questions of the given main question, ranked by similarity score. The second module takes the main question, the image, and these basic questions as input and outputs the text-based answer to the main question about the given image. We claim that a robust VQA model is one whose performance does not change much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization problem, and also propose a large-scale Basic Question Dataset (BQD) and Rscore, a novel robustness measure, for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.
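To make the LASSO formulation concrete: encode each candidate basic question as a feature vector (the columns of a dictionary D), express the main question q as a sparse combination D·w, and rank candidates by the magnitude of their coefficients. The toy vectors and the simple ISTA solver below are illustrative assumptions, not the paper's actual encoder or solver:

```python
import numpy as np

def lasso_ista(D, q, lam=0.1, step=0.01, iters=2000):
    """Minimize 0.5*||D w - q||^2 + lam*||w||_1 by proximal gradient (ISTA)."""
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ w - q)                                   # gradient of the smooth part
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)   # soft-thresholding
    return w

# Columns: toy feature vectors of three candidate basic questions.
D = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])
q = np.array([1.0, 0.05, 0.0])     # main question: nearly identical to candidate 0

w = lasso_ista(D, q)
ranking = np.argsort(-np.abs(w))   # candidate 0 ranks first; others are zeroed out
```

The l1 penalty drives unrelated candidates' coefficients exactly to zero, which is what yields a short ranked list of basic questions rather than dense similarity scores.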
Following the seminal work of Nesterov, accelerated optimization methods have been used to powerfully boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it also performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with a basin of attraction large enough to contain the initial overshoot. This behavior has made accelerated and stochastic gradient search methods particularly popular within the machine learning community. In their recent PNAS 2016 paper, Wibisono, Wilson, and Jordan demonstrate how a broad class of accelerated schemes can be cast in a variational framework formulated around the Bregman divergence, leading to continuum-limit ODEs. We show how their formulation may be further extended to infinite-dimensional manifolds (starting here with the geometric space of curves and surfaces) by substituting the Bregman divergence with inner products on the tangent space and explicitly introducing a distributed mass model that evolves in conjunction with the object of interest during the optimization process. The co-evolving mass model, introduced purely to endow the optimization with helpful dynamics, also links the resulting class of accelerated PDE-based optimization schemes to fluid-dynamical formulations of optimal mass transport.
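The finite-dimensional behavior described above can be sketched by comparing plain gradient descent with Nesterov's accelerated variant on an ill-conditioned quadratic; the step size, momentum coefficient, and iteration count are illustrative choices, not values from this work:

```python
def grad(x):
    """Gradient of f(x, y) = 0.5*(x^2 + 25*y^2), an ill-conditioned quadratic."""
    return (x[0], 25.0 * x[1])

def f(x):
    return 0.5 * (x[0] ** 2 + 25.0 * x[1] ** 2)

def gd(x, step=0.03, iters=200):
    """Plain gradient descent."""
    for _ in range(iters):
        g = grad(x)
        x = (x[0] - step * g[0], x[1] - step * g[1])
    return x

def nesterov(x, step=0.03, iters=200, mu=0.8):
    """Nesterov acceleration: gradient evaluated at a momentum look-ahead point."""
    v = (0.0, 0.0)
    for _ in range(iters):
        look = (x[0] + mu * v[0], x[1] + mu * v[1])
        g = grad(look)
        v = (mu * v[0] - step * g[0], mu * v[1] - step * g[1])
        x = (x[0] + v[0], x[1] + v[1])
    return x

x0 = (1.0, 1.0)
print(f(gd(x0)) > f(nesterov(x0)))  # -> True: the accelerated run ends closer to the minimum
```

Tracking the iterates also exhibits the overshoot-and-oscillate behavior discussed above, which the paper carries over to infinite-dimensional shape spaces.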
This article carries out a large-dimensional analysis of standard regularized discriminant analysis classifiers designed under the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis in practical, large but finite dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield high accuracy in predicting the performance achieved on real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.
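For orientation, a minimal regularized discriminant analysis sketch on synthetic Gaussian data, with a regularizer gamma shrinking each class covariance toward the identity; picking gamma by validation error is the empirical counterpart of the RMT-based pre-estimation above. The dimensions and shrinkage value are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 20
X0 = rng.standard_normal((n, p))          # class 0: zero mean
X1 = rng.standard_normal((n, p)) + 0.7    # class 1: shifted mean

def rda_fit(X0, X1, gamma):
    """Per-class means and regularized inverse covariances."""
    params = []
    for X in (X0, X1):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        cov_reg = (1 - gamma) * cov + gamma * np.eye(p)   # shrink toward identity
        params.append((mu, np.linalg.inv(cov_reg)))
    return params

def rda_predict(params, x):
    """Assign to the class with the smaller Mahalanobis distance."""
    scores = [(x - mu) @ icov @ (x - mu) for mu, icov in params]
    return int(np.argmin(scores))

params = rda_fit(X0, X1, gamma=0.5)
pred = rda_predict(params, rng.standard_normal(p) + 0.7)   # a draw from class 1
```

In the regime analyzed above (p and n growing together), the validation error of this classifier concentrates around the deterministic RMT limit, which is what allows gamma to be tuned without cross-validation.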