Recent Submissions

  • Early-Stage Growth Mechanism and Synthesis Conditions-Dependent Morphology of Nanocrystalline Bi Films Electrodeposited from Perchlorate Electrolyte.

    Tishkevich, Daria; Grabchikov, Sergey; Zubar, Tatiana; Vasin, Denis; Trukhanov, Sergei; Vorobjova, Alla; Yakimchuk, Dmitry; Kozlovskiy, Artem; Zdorovets, Maxim; Giniyatova, Sholpan; Shimanovich, Dmitriy; Lyakhov, Dmitry; Michels, Dominik L.; Dong, Mengge; Gudkova, Svetlana; Trukhanov, Alex (Nanomaterials (Basel, Switzerland), MDPI AG, 2020-07-02) [Article]
    Bi nanocrystalline films were formed from perchlorate electrolyte (PE) on Cu substrates via electrochemical deposition with different durations and current densities. The microstructural and morphological properties and the elemental composition were studied using scanning electron microscopy (SEM), atomic force microscopy (AFM), and energy-dispersive X-ray microanalysis (EDX). The optimal range of current densities for Bi electrodeposition in PE was determined using polarization measurements. For the first time, it was shown and explained why co-deposition of Pb and Bi occurs at a deposition duration of 1 s. The correlation between the synthesis conditions and the chemical composition and microstructure of the Bi films was discussed. The analysis of the microstructure evolution revealed a change in the films' growth mechanism from pillar-like (for the Pb-rich phase) to layered granular (for Bi) with increasing deposition duration. This abnormal behavior is explained by the appearance of a strong Bi growth texture and coalescence effects. The investigations of porosity showed that the Bi films have a closely packed microstructure. The main stages and the growth mechanism of Bi films in the galvanostatic regime in PE with deposition durations of 1-30 s are proposed.
  • DTiGEMS+: drug–target interaction prediction using graph embedding, graph mining, and similarity-based techniques.

    Thafar, Maha A.; Olayan, Rawan S.; Ashoor, Haitham; Albaradei, Somayah; Bajic, Vladimir B.; Gao, Xin; Gojobori, Takashi; Essack, Magbubah (Journal of Cheminformatics, Springer Science and Business Media LLC, 2020-07-02) [Article]
    In silico prediction of drug–target interactions is a critical phase in the sustainable drug development process, especially when the research focus is to capitalize on the repositioning of existing drugs. However, developing such computational methods is not an easy task, though it is much needed, as current methods that predict potential drug–target interactions suffer from high false-positive rates. Here we introduce DTiGEMS+, a computational method that predicts Drug–Target interactions using Graph Embedding, graph Mining, and Similarity-based techniques. DTiGEMS+ combines similarity-based and feature-based approaches, and models the identification of novel drug–target interactions as a link prediction problem in a heterogeneous network. DTiGEMS+ constructs the heterogeneous network by augmenting the known drug–target interaction graph with two complementary graphs: a drug–drug similarity graph and a target–target similarity graph. To provide the final drug–target predictions, DTiGEMS+ combines several computational techniques, including graph embedding, graph mining, and machine learning. DTiGEMS+ integrates multiple drug–drug similarities and target–target similarities into the final heterogeneous graph construction after applying a similarity selection procedure and a similarity fusion algorithm. Using four benchmark datasets, we show that DTiGEMS+ substantially improves prediction performance compared to other state-of-the-art in silico methods for predicting drug–target interactions, achieving the highest average AUPR across all datasets (0.92), which reduces the error rate by 33.3% relative to the second-best performing model in the state-of-the-art methods comparison.
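The heterogeneous-network construction described in this abstract can be pictured as a block adjacency matrix: the known interaction graph in the off-diagonal blocks, the two similarity graphs on the diagonal. A minimal sketch in Python (the array names and layout are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def build_heterogeneous_adjacency(dti, drug_sim, target_sim):
    """Assemble the block adjacency matrix of a drug-target heterogeneous
    network: drug-drug similarities, known drug-target interactions, and
    target-target similarities (illustrative layout only).

    dti        : (n_drugs, n_targets) 0/1 known-interaction matrix
    drug_sim   : (n_drugs, n_drugs) fused drug-drug similarity
    target_sim : (n_targets, n_targets) fused target-target similarity
    """
    top = np.hstack([drug_sim, dti])         # drug rows
    bottom = np.hstack([dti.T, target_sim])  # target rows
    return np.vstack([top, bottom])
```

A link-prediction method would then score the zero entries of the interaction block of this matrix as candidate novel interactions.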
  • Attributed heterogeneous network fusion via collaborative matrix tri-factorization

    Yu, Guoxian; Wang, Yuehui; Wang, Jun; Domeniconi, Carlotta; Guo, Maozu; Zhang, Xiangliang (Information Fusion, Elsevier BV, 2020-06-26) [Article]
    Heterogeneous network based data fusion can encode diverse inter- and intra-relations between objects, and has been attracting increasing attention in recent years. Matrix factorization based data fusion models have been developed to fuse multiple data sources. However, these models generally suffer from the widely witnessed problem of insufficient relations between nodes, and from information loss when heterogeneous attributes of diverse network nodes are transformed into ad hoc homologous networks for fusion. In this paper, we introduce a general data fusion model called Attributed Heterogeneous Network Fusion (AHNF). AHNF first constructs an attributed heterogeneous network composed of different types of nodes and the diverse attribute vectors of these nodes. It uses indicator matrices to differentiate the observed inter-relations from the latent ones, and thus reduces the impact of insufficient relations between nodes. Next, it collaboratively factorizes the multiple adjacency matrices and attribute data matrices of the heterogeneous network into low-rank matrices to explore the latent relations between nodes. In this way, both the network topology and the diverse attributes of nodes are fused in a coordinated fashion. Finally, it uses the optimized low-rank matrices to approximate the target relational data matrix of objects and thus effectively accomplish the relation prediction. We apply AHNF to predict lncRNA-disease associations using diverse relational and attribute data sources. AHNF achieves a larger area under the receiver operating characteristic curve of 0.9367 (by at least 2.14%) and a larger area under the precision-recall curve of 0.5937 (by at least 28.53%) than competitive data fusion approaches. AHNF also outperforms competing methods in predicting de novo lncRNA-disease associations, and precisely identifies lncRNAs associated with breast, stomach, prostate, and pancreatic cancers. AHNF is a comprehensive data fusion framework for universal attributed multi-type relational data. The code and datasets are available at
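The tri-factorization at the heart of this abstract approximates a relational matrix R by a product G1 S G2ᵀ of low-rank factors. A toy gradient-descent sketch of that single operation (AHNF's actual collaborative objective couples several such factorizations through indicator matrices and attribute data; this is only the core idea):

```python
import numpy as np

def tri_factorize(R, k1, k2, iters=2000, lr=0.02, seed=0):
    """Gradient-descent sketch of matrix tri-factorization R ≈ G1 @ S @ G2.T,
    minimizing 0.5 * ||G1 @ S @ G2.T - R||^2 (illustrative toy only)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    G1 = 0.1 * rng.standard_normal((n, k1))
    S = 0.1 * rng.standard_normal((k1, k2))
    G2 = 0.1 * rng.standard_normal((m, k2))
    for _ in range(iters):
        E = G1 @ S @ G2.T - R       # reconstruction residual
        G1 -= lr * E @ G2 @ S.T     # gradient of the objective in G1
        G2 -= lr * E.T @ G1 @ S     # gradient in G2
        S -= lr * G1.T @ E @ G2     # gradient in S
    return G1, S, G2
```

The optimized low-rank factors then serve to approximate and complete the relational matrix, which is how missing (latent) relations get predicted.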
  • Modern Deep Learning in Bioinformatics.

    Li, Haoyang; Tian, Shuye; Li, Yu; Fang, Qiming; Tan, Renbo; Pan, Yijie; Huang, Chao; Xu, Ying; Gao, Xin (Journal of molecular cell biology, Oxford University Press (OUP), 2020-06-24) [Article]
    Deep learning (DL) has shown explosive growth in its application to bioinformatics and has demonstrated thrillingly promising power to mine the complex relationship hidden in large-scale biological and biomedical data. A number of comprehensive reviews have been published on such applications, ranging from high-level reviews with future perspectives to those mainly serving as tutorials. These reviews have provided an excellent introduction to and guideline for applications of DL in bioinformatics, covering multiple types of machine learning (ML) problems, different DL architectures, and ranges of biological/biomedical problems. However, most of these reviews have focused on previous research, whereas current trends in the principled DL field and perspectives on their future developments and potential new applications to biology and biomedicine are still scarce. We will focus on modern DL, the ongoing trends and future directions of the principled DL field, and postulate new and major applications in bioinformatics.
  • Modeling quantitative traits for COVID-19 case reports

    Queralt-Rosinach, Núria; Bello, Susan; Hoehndorf, Robert; Weiland, Claus; Rocca-Serra, Philippe; Schofield, Paul N. (Cold Spring Harbor Laboratory, 2020-06-21) [Preprint]
    Medical practitioners record the condition status of a patient through qualitative and quantitative observations. The measurement of vital signs and molecular parameters in the clinic gives a complementary description of abnormal phenotypes associated with the progression of a disease. The Clinical Measurement Ontology (CMO) is used to standardize annotations of these measurable traits. However, researchers have no way to describe how these quantitative traits relate to phenotype concepts in a machine-readable manner. Using the WHO clinical case report form standard for the COVID-19 pandemic, we modeled quantitative traits and developed OWL axioms to formally relate clinical measurement terms with anatomical and biomolecular entities and phenotypes annotated with the Uber-anatomy ontology (Uberon), Chemical Entities of Biological Interest (ChEBI), and the Phenotype and Trait Ontology (PATO) biomedical ontologies. The formal description of these relations allows interoperability between clinical and biological descriptions, and facilitates automated reasoning for the analysis of patterns over quantitative and qualitative biomedical observations.
  • Network Moments: Extensions and Sparse-Smooth Attacks

    Alfadly, Modar; Bibi, Adel; Botero, Emilio; Al-Subaihi, Salman; Ghanem, Bernard (arXiv, 2020-06-21) [Preprint]
    The impressive performance of deep neural networks (DNNs) has immensely strengthened the line of research that aims at theoretically analyzing their effectiveness. This has incited research on the reaction of DNNs to noisy inputs, namely the development of adversarial input attacks and of strategies that make DNNs robust to these attacks. To that end, in this paper, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input. In particular, we generalize the second-moment expression of Bibi et al. to arbitrary input Gaussian distributions, dropping the zero-mean assumption. We show that the new variance expression can be efficiently approximated, leading to much tighter variance estimates than the preliminary results of Bibi et al. Moreover, we experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, where we investigate the effect of the linearization sensitivity on the accuracy of the moment estimates. Lastly, we show that the derived expressions can be used to construct sparse and smooth Gaussian adversarial attacks (targeted and non-targeted) that tend to lead to perceptually feasible input attacks.
  • Analysis of transcript-deleterious variants in Mendelian disorders: implications for RNA-based diagnostics.

    Maddirevula, Sateesh; Kuwahara, Hiroyuki; Ewida, Nour; Shamseldin, Hanan E; Patel, Nisha; AlZahrani, Fatema; AlSheddi, Tarfa; AlObeid, Eman; Alenazi, Mona; Alsaif, Hessa S; Alqahtani, Maha; AlAli, Maha; Al Ali, Hatoon; Helaby, Rana; Ibrahim, Niema; Abdulwahab, Firdous; Hashem, Mais; Hanna, Nadine; Monies, Dorota; Derar, Nada; Alsagheir, Afaf; Alhashem, Amal; Alsaleem, Badr; Alhebbi, Hamoud; Wali, Sami; Umarov, Ramzan; Gao, Xin; Alkuraya, Fowzan S. (Genome biology, Springer Science and Business Media LLC, 2020-06-20) [Article]
    BACKGROUND: At least 50% of patients with suspected Mendelian disorders remain undiagnosed after whole-exome sequencing (WES), and the extent to which non-coding variants that are not captured by WES contribute to this fraction is unclear. Whole transcriptome sequencing is a promising supplement to WES, although empirical data on the contribution of RNA analysis to the diagnosis of Mendelian diseases on a large scale are scarce. RESULTS: Here, we describe our experience with transcript-deleterious variants (TDVs) based on a cohort of 5647 families with suspected Mendelian diseases. We first interrogate all families for which the respective Mendelian phenotype could be mapped to a single locus to obtain an unbiased estimate of the contribution of TDVs at 18.9%. We examine the entire cohort and find that TDVs account for 15% of all "solved" cases. We compare the results of RT-PCR to in silico prediction. Definitive results from RT-PCR are obtained from blood-derived RNA for the overwhelming majority of variants (84.1%), and only a small minority (2.6%) fail analysis on all available RNA sources (blood-, skin fibroblast-, and urine renal epithelial cells-derived), which has important implications for the clinical application of RNA-seq. We also show that RNA analysis can establish the diagnosis in 13.5% of 155 patients who had received "negative" clinical WES reports. Finally, our data suggest a role for TDVs in modulating penetrance even in otherwise highly penetrant Mendelian disorders. CONCLUSIONS: Our results provide much needed empirical data for the impending implementation of diagnostic RNA-seq in conjunction with genome sequencing.
  • Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization

    Khaled, Ahmed; Sebbouh, Othmane; Loizou, Nicolas; Gower, Robert M.; Richtarik, Peter (arXiv, 2020-06-20) [Preprint]
    We present a unified theorem for the convergence analysis of stochastic gradient algorithms for minimizing a smooth and convex loss plus a convex regularizer. We do this by extending the unified analysis of Gorbunov, Hanzely & Richtárik (2020) and dropping the requirement that the loss function be strongly convex. Instead, we only rely on convexity of the loss function. Our unified analysis applies to a host of existing algorithms such as proximal SGD, variance reduced methods, quantization and some coordinate descent type methods. For the variance reduced methods, we recover the best known convergence rates as special cases. For proximal SGD, the quantization and coordinate type methods, we uncover new state-of-the-art convergence rates. Our analysis also includes any form of sampling and minibatching. As such, we are able to determine the minibatch size that optimizes the total complexity of variance reduced methods. We showcase this by obtaining a simple formula for the optimal minibatch size of two variance reduced methods (L-SVRG and SAGA). This optimal minibatch size not only improves the theoretical total complexity of the methods but also improves their convergence in practice, as we show in several experiments.
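The composite template covered by such a theorem is a stochastic gradient step on the smooth loss followed by the proximal operator of the regularizer. A minimal sketch with an l1 regularizer, chosen here only as a concrete convex example (not tied to the paper's specific methods):

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_sgd_step(x, stoch_grad, lr, lam):
    """One proximal SGD step for min_x f(x) + lam * ||x||_1:
    a stochastic gradient step on the smooth part f, then the prox of the
    (scaled) regularizer."""
    return prox_l1(x - lr * stoch_grad(x), lr * lam)
```

Iterating this step with any of the stochastic gradient estimators the analysis allows (minibatch sampling, variance reduction, quantization) yields the family of methods the unified theorem covers.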
  • Introduction to spatio-temporal data driven urban computing

    Shang, Shuo; Zheng, Kai; Kalnis, Panos (Distributed and Parallel Databases, Springer Science and Business Media LLC, 2020-06-19) [Article]
    This special issue of the Distributed and Parallel Databases journal covers recent advances in spatio-temporal data analytics in the context of urban computing. It contains 9 articles that present solid research studies and innovative ideas in the area of spatio-temporal data analytics for urban computing applications. All 9 papers went through at least two rounds of rigorous review by the guest editors and invited reviewers. Location-based recommender systems are becoming increasingly important in the urban computing community. The paper by Hao Zhou et al., “Hybrid route recommendation with taxi and shared bicycles,” develops a two-phase data-driven recommendation framework that integrates prediction and recommendation phases to provide reliable route recommendation results. Another paper, by Hao Zhang et al., “On accurate POI recommendation via transfer learning,” proposes a transfer learning based deep neural model that fuses cross-domain knowledge to achieve more accurate POI recommendation. Spatial keyword search has been receiving much attention in the area of spatio-temporal data analytics. Xiangguo Zhao et al. develop an index structure that comprehensively considers the social, spatial, and textual information of massive-scale spatio-temporal data to support social-aware spatial keyword group queries in their paper “Social-aware spatial keyword top-k group query.” Jiajie Xu et al. propose a hybrid indexing structure that integrates the spatial and semantic information of spatio-temporal data in their paper “Multi-objective spatial keyword query with semantics: a distance-owner based approach.” Matching of spatio-temporal data is a fundamental research problem in spatio-temporal data analytics. The paper by Ning Wang et al., “An efficient algorithm for spatio-textual location matching,” targets the problem of finding all location pairs whose spatio-textual similarity exceeds a given threshold. This matching query is useful in urban computing applications including hot region detection and traffic congestion alleviation. Additionally, their paper “Privacy-preserving spatial keyword location-to-trajectory matching” presents a network expansion algorithm and pruning strategies for finding location-trajectory pairs from spatio-temporal data while preserving users’ privacy. Further, the paper by Lei Xiao et al., “LSTM-based deep learning for spatial–temporal software testing,” develops a test case prioritization approach using LSTM-based deep learning, which exhibits potential application value in self-driving cars. Another paper, by Zhenchang Xia et al., “ForeXGBoost: passenger car sales prediction based on XGBoost,” presents a prediction model that utilizes data filling algorithms and achieves high prediction accuracy with short running times for vehicle sales prediction. Finally, the paper by Zhiqiang Liu et al., “A parameter-level parallel optimization algorithm for large-scale spatio-temporal data mining,” proposes an efficient parameter-level parallel optimization algorithm for large-scale spatio-temporal data mining. These nine articles represent diverse directions in the fast-growing area of spatio-temporal data analytics in the urban computing community. We hope that these papers will foster the development of urban computing techniques and inspire more research in this promising area.
  • A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning

    Horvath, Samuel; Richtarik, Peter (arXiv, 2020-06-19) [Preprint]
    Modern large-scale machine learning applications require stochastic optimization algorithms to be implemented on distributed compute systems. A key bottleneck of such systems is the communication overhead for exchanging information across the workers, such as stochastic gradients. Among the many techniques proposed to remedy this issue, one of the most successful is the framework of compressed communication with error feedback (EF). EF remains the only known technique that can deal with the error induced by contractive compressors which are not unbiased, such as Top-$K$. In this paper, we propose a new and theoretically and practically better alternative to EF for dealing with contractive compressors. In particular, we propose a construction which can transform any contractive compressor into an induced unbiased compressor. Following this transformation, existing methods able to work with unbiased compressors can be applied. We show that our approach leads to vast improvements over EF, including reduced memory requirements, better communication complexity guarantees and fewer assumptions. We further extend our results to federated learning with partial participation following an arbitrary distribution over the nodes, and demonstrate the benefits thereof. We perform several numerical experiments which validate our theoretical findings.
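The transformation described in this abstract turns a biased, contractive compressor into an unbiased one. One natural instance of that idea, sketched under the assumption that the contractive compressor's residual is re-compressed with an unbiased compressor (an illustrative construction, not necessarily the paper's exact recipe):

```python
import numpy as np

def top_k(x, k):
    """Contractive (biased) Top-K compressor: keep the k largest-magnitude
    coordinates, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng):
    """Unbiased Rand-K compressor: keep k uniformly chosen coordinates,
    rescaled by d/k so that E[rand_k(x)] = x."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

def induced_compressor(x, k, rng):
    """Apply the contractive compressor, then an unbiased compressor to the
    residual; the sum is unbiased because the residual term is."""
    c = top_k(x, k)
    return c + rand_k(x - c, k, rng)
```

Because the combined operator is unbiased, existing convergence analyses for unbiased compressors apply directly, which is the point of the construction.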
  • Solving Acoustic Boundary Integral Equations Using High Performance Tile Low-Rank LU Factorization.

    Al-Harthi, Noha A.; Alomairy, Rabab M.; Akbudak, Kadir; Chen, Rui; Ltaief, Hatem; Bagci, Hakan; Keyes, David E. (High Performance Computing, Springer International Publishing, 2020-06-18) [Book Chapter]
    We design and develop a new high performance implementation of a fast direct LU-based solver using low-rank approximations on massively parallel systems. The LU factorization is the most time-consuming step in solving systems of linear equations in the context of analyzing acoustic scattering from large 3D objects. The matrix equation is obtained by discretizing the boundary integral of the exterior Helmholtz problem using a higher-order Nyström scheme. The main idea is to exploit the inherent data sparsity of the matrix operator by performing local tile-centric approximations while still capturing the most significant information. In particular, the proposed LU-based solver leverages the Tile Low-Rank (TLR) data compression format as implemented in the Hierarchical Computations on Manycore Architectures (HiCMA) library to decrease the complexity of “classical” dense direct solvers from cubic to quadratic order. We taskify the underlying boundary integral kernels to expose fine-grained computations. We then employ the dynamic runtime system StarPU to orchestrate the scheduling of computational tasks on shared and distributed-memory systems. The resulting asynchronous execution makes it possible to compensate for the load imbalance due to the heterogeneous ranks, while mitigating the overhead of data motion. We assess the robustness of our TLR LU-based solver and study the qualitative impact of using different numerical accuracies. The new TLR LU factorization outperforms the state-of-the-art dense factorizations by up to an order of magnitude on various parallel systems, for analysis of scattering from large-scale 3D synthetic and real geometries.
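The tile-centric compression at the heart of the TLR format can be illustrated with a truncated SVD of a single dense tile (an illustrative rank-selection rule; HiCMA's actual compression kernels and accuracy thresholds differ):

```python
import numpy as np

def compress_tile(tile, eps):
    """Compress one dense tile to low rank via truncated SVD, keeping the
    singular values above eps times the largest. Returns factors (U, V)
    with tile ≈ U @ V, so an m-by-n tile of rank r costs r*(m+n) storage
    instead of m*n."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    r = max(1, int(np.sum(s > eps * s[0])))
    return U[:, :r] * s[:r], Vt[:r]
```

Applying such a compression tile by tile, while keeping near-diagonal tiles dense where needed, is what lets a data-sparse operator be factorized at reduced complexity.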
  • A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization

    Li, Zhize; Richtarik, Peter (arXiv, 2020-06-12) [Preprint]
    In this paper, we study the performance of a large family of SGD variants in the smooth nonconvex regime. To this end, we propose a generic and flexible assumption capable of accurate modeling of the second moment of the stochastic gradient. Our assumption is satisfied by a large number of specific variants of SGD in the literature, including SGD with arbitrary sampling, SGD with compressed gradients, and a wide variety of variance-reduced SGD methods such as SVRG and SAGA. We provide a single convergence analysis for all methods that satisfy the proposed unified assumption, thereby offering a unified understanding of SGD variants in the nonconvex regime instead of relying on dedicated analyses of each variant. Moreover, our unified analysis is accurate enough to recover or improve upon the best-known convergence results of several classical methods, and also gives new convergence results for many new methods which arise as special cases. In the more general distributed/federated nonconvex optimization setup, we propose two new general algorithmic frameworks differing in whether direct gradient compression (DC) or compression of gradient differences (DIANA) is used. We show that all methods captured by these two frameworks also satisfy our unified assumption. Thus, our unified convergence analysis also captures a large variety of distributed methods utilizing compressed communication. Finally, we also provide a unified analysis for obtaining faster linear convergence rates in this nonconvex regime under the PL condition.
  • A self-adaptive deep learning algorithm for accelerating multi-component flash calculation

    Zhang, Tao; Li, Yu; Li, Yiteng; Sun, Shuyu; Gao, Xin (Computer Methods in Applied Mechanics and Engineering, Elsevier BV, 2020-06-11) [Article]
    In this paper, the first self-adaptive deep learning algorithm is proposed in detail to accelerate flash calculations; it can quantitatively predict the total number of phases in a mixture and the related thermodynamic properties at equilibrium for realistic reservoir fluids with a large number of components under various environmental conditions. A thermodynamically consistent scheme for phase equilibrium calculation is adopted and implemented at specified moles, volume, and temperature, and the flash results are used as the ground truth for training and testing the deep neural network. The critical properties of each component are taken as the input features of the neural network, and the final output is the total number of phases at equilibrium and the molar compositions in each phase. Two network structures are carefully designed, one of which transforms the input of various numbers of components in the training and the objective fluid mixture into a unified space before entering the productive neural network. “Ghost components” are defined and introduced to perform the data padding needed to adjust the dimension of the input flash calculation data to meet the training and testing requirements of the target fluid mixture. Hyperparameters of both neural networks are carefully tuned to ensure that the physical correlations underlying the input parameters are preserved through the learning process. This combined structure makes our deep learning algorithm self-adaptive to changes in the input components and dimensions. Furthermore, two Softmax functions are used in the last layer to enforce the constraint that the summation of mole fractions in each phase equals 1. An example is presented in which the flash calculation results of an 8-component Eagle Ford oil are used as input to estimate the phase equilibrium state of a 14-component Eagle Ford oil; the results are satisfactory, with very small estimation errors. It is also verified that the proposed deep learning algorithm simultaneously completes the phase stability test and the phase splitting calculation. Remarks are given at the end to provide guidance for further research in this direction, especially the potential application of newly developed neural network models.
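The output-layer constraint mentioned in this abstract, that the mole fractions in each phase sum to one, is exactly what a softmax head provides. A minimal sketch:

```python
import numpy as np

def softmax(z):
    """Map arbitrary network logits to strictly positive fractions that sum
    to exactly 1, as required for the mole fractions of one phase."""
    e = np.exp(z - np.max(z))  # shift by the max for numerical stability
    return e / e.sum()
```

Using one such head per phase guarantees the physical normalization constraint by construction, rather than relying on the network to learn it.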
  • Hierarchical matrix approximations for space-fractional diffusion equations

    Boukaram, Wagih Halim; Lucchesi, Marco; Turkiyyah, George; Le Maître, Olivier; Knio, Omar; Keyes, David E. (Computer Methods in Applied Mechanics and Engineering, Elsevier BV, 2020-06-11) [Article]
    Space fractional diffusion models generally lead to dense discrete matrix operators, which pose substantial computational challenges when the system size becomes large. For a state of size N, a full representation of a fractional diffusion matrix would require O(N²) memory, with a similar estimate for matrix–vector products. In this work, we present an H² matrix representation and algorithms that are amenable to efficient implementation on GPUs, and that can reduce the cost of storing these operators to O(N) asymptotically. Matrix–vector multiplications can be performed in asymptotically linear time as well. Performance of the algorithms is assessed in light of 2D simulations of the space fractional diffusion equation with constant diffusivity. Attention is focused on smooth particle approximations of the governing equations, which lead to discrete operators involving explicit radial kernels. The algorithms are first tested using the fundamental solution of the unforced space fractional diffusion equation in an unbounded domain, and then for the steady, forced, fractional diffusion equation in a bounded domain. Both matrix-inverse and pseudo-transient solution approaches are considered in the latter case. Our experiments show that the construction of the fractional diffusion matrix, the matrix–vector multiplication, and the generation of an approximate inverse preconditioner all perform very well on a single GPU on 2D problems with N in the range 10⁵–10⁶. In addition, the tests showed that, for the entire range of parameters and fractional orders considered, results obtained using the H² approximations were in close agreement with results obtained using dense operators, and exhibited the same spatial order of convergence. Overall, the present experience shows that the H² matrix framework promises to provide practical means for handling large-scale space fractional diffusion models in several space dimensions, at a computational cost that is asymptotically similar to the cost of handling classical diffusion equations.
  • Generative adversarial network-based super-resolution of diffusion-weighted imaging: Application to tumour radiomics in breast cancer.

    Fan, Ming; Liu, Zuhui; Xu, Maosheng; Wang, Shiwei; Zeng, Tieyong; Gao, Xin; Li, Lihua (NMR in biomedicine, Wiley, 2020-06-11) [Article]
    Diffusion-weighted imaging (DWI) is increasingly used to guide the clinical management of patients with breast tumours. However, accurate tumour characterization with DWI and the corresponding apparent diffusion coefficient (ADC) maps is challenging due to their limited resolution. This study aimed to produce super-resolution (SR) ADC images and to assess the clinical utility of these SR images by performing a radiomic analysis for predicting the histologic grade and Ki-67 expression status of breast cancer. To this end, 322 samples of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and the corresponding DWI data were collected. An SR generative adversarial network (SRGAN) and an enhanced deep SR (EDSR) network, along with bicubic interpolation, were utilized to generate SR-ADC images from which radiomic features were extracted. The dataset was randomly separated into a development dataset (n = 222), used to establish a deep SR model on DCE-MRI, and a validation dataset (n = 100), used to improve the resolution of ADC images. This random separation of datasets was performed 10 times, and the results were averaged. The EDSR method was significantly better than the SRGAN and bicubic methods in terms of objective quality criteria. Univariate and multivariate predictive models of radiomic features were established to determine the area under the receiver operating characteristic curve (AUC). Individual features from the tumour SR-ADC images showed higher performance with the EDSR and SRGAN methods than with the bicubic method and the original images. Multivariate analysis of the collective radiomics showed that the EDSR- and SRGAN-based SR-ADC images performed better than the bicubic method and the original images in predicting either Ki-67 expression levels (AUCs of 0.818 and 0.801, respectively) or the tumour grade (AUCs of 0.826 and 0.828, respectively). This work demonstrates that in addition to improving the resolution of ADC images, deep SR networks can also improve tumour image-based diagnosis in breast cancer.
  • Random Reshuffling: Simple Analysis with Vast Improvements

    Mishchenko, Konstantin; Khaled, Ahmed; Richtarik, Peter (arXiv, 2020-06-10) [Preprint]
    Random Reshuffling (RR) is an algorithm for minimizing finite-sum functions that utilizes iterative gradient descent steps in conjunction with data reshuffling. Often contrasted with its sibling Stochastic Gradient Descent (SGD), RR is usually faster in practice and enjoys significant popularity in convex and non-convex optimization. The convergence rate of RR has attracted substantial attention recently and, for strongly convex and smooth functions, it was shown to converge faster than SGD if 1) the stepsize is small, 2) the gradients are bounded, and 3) the number of epochs is large. We remove these 3 assumptions, improve the dependence on the condition number from $\kappa^2$ to $\kappa$ (resp. from $\kappa$ to $\sqrt{\kappa}$) and, in addition, show that RR has a different type of variance. We argue through theory and experiments that the new variance type gives an additional justification of the superior performance of RR. To go beyond strong convexity, we present several results for non-strongly convex and non-convex objectives. We show that in all cases, our theory improves upon existing literature. Finally, we prove fast convergence of the Shuffle-Once (SO) algorithm, which shuffles the data only once, at the beginning of the optimization process. Our theory for strongly convex objectives tightly matches the known lower bounds for both RR and SO and substantiates the common practical heuristic of shuffling once or only a few times. As a byproduct of our analysis, we also get new results for the Incremental Gradient algorithm (IG), which does not shuffle the data at all.
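The algorithmic difference between RR and plain SGD is simply the sampling scheme: each epoch of RR visits every component gradient exactly once, in a fresh random order. A minimal sketch on a finite sum (the function names and setup are illustrative, not the paper's experimental protocol):

```python
import numpy as np

def random_reshuffling(grads, x0, lr, epochs, rng):
    """Random Reshuffling on min_x (1/n) * sum_i f_i(x): one gradient step
    per component per epoch, in a freshly drawn random order. Shuffle-Once
    (SO) would draw the permutation a single time before the outer loop;
    Incremental Gradient (IG) would use the fixed order 0..n-1."""
    x = x0
    n = len(grads)
    for _ in range(epochs):
        for i in rng.permutation(n):  # each f_i visited exactly once
            x = x - lr * grads[i](x)
    return x
```

On a toy quadratic finite sum this iteration settles into a small neighborhood of the minimizer, consistent with the without-replacement behavior the analysis quantifies.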
  • Applying Deep-Learning-Based Computer Vision to Wireless Communications: Methodologies, Opportunities, and Challenges

    Tian, Yu; Pan, Gaofeng; Alouini, Mohamed-Slim (arXiv, 2020-06-10) [Preprint]
    Deep learning (DL) has achieved great success in the computer vision (CV) field, and the related techniques have been widely used in security, healthcare, remote sensing, etc. Moreover, visual data are ubiquitous in daily life and are easily generated by prevalent, low-cost cameras. DL-based CV can therefore be exploited to obtain and forecast useful information about objects, e.g., their number, locations, distribution, and motion. Intuitively, DL-based CV can facilitate and improve the design of wireless communications, especially in dynamic network scenarios; however, such works remain rare in the existing literature. The primary purpose of this article is therefore to introduce ideas for applying DL-based CV in wireless communications, bringing novel degrees of freedom to both theoretical research and engineering applications. To illustrate how DL-based CV can be applied, an example of applying it to a millimeter wave (mmWave) system is given to realize optimal mmWave multiple-input multiple-output (MIMO) beamforming in mobile scenarios. In this example, we propose a framework that predicts future beam indices from previously observed beam indices and street-view images using a ResNet, a 3-dimensional ResNeXt, and a long short-term memory network. Experimental results show that our framework achieves much higher accuracy than the baseline method and that visual data can significantly improve the performance of the MIMO beamforming system. Finally, we discuss the opportunities and challenges of applying DL-based CV in wireless communications.
  • Aqua-Fi: Delivering internet underwater using wireless optical networks

    Shihada, Basem; Amin, Osama; Bainbridge, Christopher; Jardak, Seifallah; Alkhazragi, Omar; Ng, Tien Khee; Ooi, Boon S.; Berumen, Michael L.; Alouini, Mohamed-Slim (IEEE Communications Magazine, Institute of Electrical and Electronics Engineers (IEEE), 2020-06-09) [Article]
    In this article, we demonstrate bringing the Internet to underwater environments by deploying a low-power, compact underwater optical wireless system, called Aqua-Fi, that supports today's Internet applications. Aqua-Fi uses an LED or a laser to provide bidirectional, wide-range communication services with different requirements, low cost, and simple implementation. LEDs offer robust short-distance links with low power requirements, whereas lasers extend the communication distance and improve the transmission rate at the cost of higher power consumption. Throughout this work, we discuss the proposed Aqua-Fi system architecture, its limitations, and solutions to improve data rates and deliver reliable communication links.
  • Accurately Solving Physical Systems with Graph Learning

    Shao, Han; Kugelstadt, Tassilo; Pałubicki, Wojciech; Bender, Jan; Pirk, Sören; Michels, Dominik L. (arXiv, 2020-06-06) [Preprint]
    Iterative solvers are widely used to accurately simulate physical systems. These solvers require initial guesses to generate a sequence of improving approximate solutions. In this contribution, we introduce a novel method that accelerates iterative solvers for physical systems using graph networks (GNs) by predicting the initial guesses so as to reduce the number of iterations. Unlike existing methods that aim to learn physical systems in an end-to-end manner, our approach guarantees long-term stability and therefore leads to more accurate solutions. Furthermore, our method improves the run-time performance of traditional iterative solvers. To evaluate our method, we use position-based dynamics (PBD) as a common solver for physical systems and simulate the dynamics of elastic rods. Our approach generalizes across different initial conditions, discretizations, and realistic material properties. Finally, we demonstrate that our method also performs well when taking into account discontinuous effects such as collisions between individual rods. A video showing dynamic results of our graph-learning-assisted simulations of elastic rods can be found on the project website available at .
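    The idea of warm-starting an iterative solver with a predicted initial guess can be illustrated without a graph network. In this toy sketch, the paper's GN predictor is replaced by the simplest possible stand-in — the previous time step's solution — and PBD by a plain Jacobi iteration on a small linear system; the system, right-hand sides, and function names are all illustrative, not the authors' setup:

    ```python
    def jacobi(A, b, x0, tol=1e-8, max_iters=500):
        """Jacobi iteration for A x = b (A given as a list of rows).
        Returns the approximate solution and the number of iterations used."""
        n = len(b)
        x = list(x0)
        for k in range(1, max_iters + 1):
            x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                     / A[i][i] for i in range(n)]
            if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
                return x_new, k
            x = x_new
        return x, max_iters

    # A diagonally dominant system, so Jacobi converges.
    A = [[4.0, 1.0], [1.0, 3.0]]

    # "Time step 1": solve from a cold start (all zeros).
    x_prev, _ = jacobi(A, [1.0, 2.0], [0.0, 0.0])

    # "Time step 2": the right-hand side changes slightly. Warm-starting
    # from the previous solution (the role the GN's prediction plays)
    # needs fewer iterations than restarting from zeros.
    _, cold_iters = jacobi(A, [1.1, 2.05], [0.0, 0.0])
    _, warm_iters = jacobi(A, [1.1, 2.05], x_prev)
    ```

    The better the predicted initial guess, the fewer iterations the solver needs to reach the same tolerance — which is exactly the lever the GN-based predictor pulls in the method described above.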
