The Visualization Laboratory at KAUST is a fully staffed, state-of-the-art facility that offers students, faculty, researchers, and university collaborators a unique opportunity to use one-of-a-kind visualization, interaction, and computational resources for the exploration and presentation of scientific data. Its services include 2D and 3D display environments, highly spatialized and immersive audio, monoscopic and stereoscopic displays, wireless interaction devices, and fully integrated and portable desktop applications. All spaces are interconnected by a 10 Gb/s link and can also be used for academic events and research meetings. Audio/video streaming, recording, and playback of research presentations and seminars are available throughout the facility. The facility is open to any KAUST member, and the laboratory staff can help create new customized applications, drawing on their expertise in computer graphics, human-computer interaction, virtual reality, scientific visualization, and sonification. A Special Research Partnership has also been established between KAUST and the Calit2 Institute at the University of California, San Diego (UCSD). The lab facilities were designed and implemented in collaboration with UCSD, and new research projects can take full advantage of this ongoing partnership. Students from KAUST academic programs spend summer internships at UCSD, collaborating with research scientists there and returning to the lab to apply their newly learned skills. Contact us at vislab@kaust.edu.sa

Recent Submissions

  • Sonification of Animal Tracks as an Alternative Representation of Multi-Dimensional Data: A Northern Elephant Seal Example

    Duarte, Carlos M.; Riker, Paul W.; Srinivasan, Madhusudhanan; Robinson, Patrick W.; Gallo-Reynoso, Juan P.; Costa, Daniel P. (Frontiers Media SA, 2018-04-20)
    Understanding the movement of marine megafauna across the ocean is largely based on approaches and models built on the analysis of tracks of single animals. While this has led to major progress, the possibility of concerted group dynamics has not been sufficiently examined, possibly because of the challenge of exploring the massive amounts of data required to this end. Here we report a sonification experiment in which the collective movement of northern elephant seals (Mirounga angustirostris) was explored by coding their group dynamics into sound. Specifically, we converted into sound data derived from a tagging program involving a total of 321 tagged animals tracked over a decade, between 20 February 2004 and 30 May 2014, comprising an observation period of 90,063 h and 1,027,839 individual positions. The data parameters used to produce the sound are position (longitude) and spread (degree of displacement across the active group). These parameters are mapped to the sonic parameters of frequency (pitch) and amplitude (volume), respectively. Examination of the resulting sound revealed features of motion that translate into specific patterns in space. The serial departure of elephant seals initiating their trips in waves is clearly reflected in the addition of tonalities, with coherent swimming of the animals forming a wave reflected in modulated fluctuations in volume, suggesting coordinated fluctuations in the dispersion of the wave. Smooth changes in volume, coordinated with pitch variability, indicate that the animals spread out as they move farther from the colony, with one or a few animals exploring an ocean area away from that explored by the core wave. The shift in volume and pitch also signals group coordination in initiating the return home. Coordinated initiation of the return to the colony is likewise clearly revealed by the sonification, reflected in an increase in the volume and pitch of the notes denoting the movement of each animal in a migration wave. This sonification reveals clear patterns of covariation in movement data whose drivers and triggers, whether intrinsic or environmental, cannot be elucidated here, but which allow us to formulate a number of non-trivial questions about the synchronized nature of group behavior of northern elephant seals foraging across the NE Pacific Ocean.
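    The longitude-to-pitch and spread-to-volume mapping described above is simple enough to sketch in code. The following Python fragment is a minimal illustration of parameter-mapping sonification, not the authors' actual pipeline; the frequency range, the normalization, and the direction of the spread-to-amplitude mapping are assumptions made for the example.

    ```python
    # Minimal parameter-mapping sonification sketch (illustrative only):
    # longitude -> pitch, group spread -> amplitude, one tone per time step.
    import numpy as np
    from scipy.io import wavfile

    RATE = 44100

    def _norm(v):
        """Scale an array to [0, 1]; constant arrays map to zeros."""
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)

    def sonify(longitudes, spreads, step_dur=0.05,
               f_lo=220.0, f_hi=880.0, fname="tracks.wav"):
        lon_n = _norm(np.asarray(longitudes, dtype=float))
        spr_n = _norm(np.asarray(spreads, dtype=float))
        freqs = f_lo * (f_hi / f_lo) ** lon_n      # exponential pitch mapping
        amps = 0.2 + 0.8 * spr_n                   # wider spread -> louder (assumed)
        t = np.arange(int(step_dur * RATE)) / RATE
        signal = np.concatenate(
            [a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps)])
        wavfile.write(fname, RATE, (signal * 32767).astype(np.int16))

    sonify(longitudes=[-140.0, -139.5, -138.0], spreads=[0.5, 1.2, 2.0])
    ```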
  • Simulation and visualization of the cyclonic storm Chapala over the Arabian Sea: a case study

    Theußl, Thomas; Dasari, Hari Prasad; Hoteit, Ibrahim; Srinivasan, Madhusudhanan (Institute of Electrical and Electronics Engineers (IEEE), 2016-12-01)
    We use the high-resolution Weather Research and Forecasting (WRF) model to predict the characteristics of an intense cyclone, Chapala, which formed over the Arabian Sea in October/November 2015. The implemented model consists of two-way interactive nested domains at 9 and 3 km resolution. The prediction experiment started at 1200 UTC on 26 October 2015 to forecast the cyclone's landfall and intensity based on NCEP global model forecasting fields. The results show that the movement of Chapala is well reproduced by our model up to 72 hours, after which track errors become significant. The intensity and cloud features of the extreme event, as well as the distribution of hydrometeors, are well represented by the model. All the characteristics, including the eye and eye-wall regions, mesoscale convective systems, and the distribution of different hydrometeors during the lifetime of Chapala, are very well simulated. The model output amounts to several hundred gigabytes of data; we analyze and visualize these data using state-of-the-art computational and visualization software to represent different characteristics of Chapala and to verify the accuracy of the model. We further demonstrate the usefulness of a 3D virtual reality environment and its potential importance for the development of decision-making systems.
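    As a pointer for readers who want to explore output of this kind, the sketch below shows one way to pull a hydrometeor field out of a WRF history file with Python. It assumes standard wrfout variable names (QRAIN, XLAT, XLONG) and a hypothetical file name; it is not the visualization pipeline used in the paper.

    ```python
    # Sketch: column-integrated rain-water map from a WRF history file.
    from netCDF4 import Dataset
    import matplotlib.pyplot as plt

    nc = Dataset("wrfout_d02_2015-11-01_000000")  # hypothetical 3 km nest output
    qrain = nc.variables["QRAIN"][0]              # rain mixing ratio (kg/kg), first time step
    lat = nc.variables["XLAT"][0]                 # 2D latitudes of mass grid points
    lon = nc.variables["XLONG"][0]                # 2D longitudes of mass grid points

    # Sum over model levels for a quick 2D proxy of the rain-water distribution.
    plt.pcolormesh(lon, lat, qrain.sum(axis=0), shading="auto")
    plt.colorbar(label="column-summed QRAIN (kg/kg)")
    plt.title("Rain water, Cyclone Chapala (illustrative)")
    plt.savefig("chapala_qrain.png", dpi=150)
    ```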
  • Adding large EM stack support

    Holst, Glendon; Berg, Stuart; Kare, Kalpana; Magistretti, Pierre J.; Cali, Corrado (Institute of Electrical and Electronics Engineers (IEEE), 2016-12-01)
    Serial section electron microscopy (SSEM) image stacks generated using high-throughput microscopy techniques are an integral tool for investigating brain connectivity and cell morphology. FIB or 3View scanning electron microscopes easily generate gigabytes of data. In order to produce analyzable 3D datasets from the imaged volumes, efficient and reliable image segmentation is crucial. Classical manual approaches to segmentation are time consuming and labour intensive. Semiautomatic seeded watershed segmentation algorithms, such as those implemented by the ilastik image processing software, are a very powerful alternative, substantially speeding up segmentation times. We have used ilastik effectively for small EM stacks – on a laptop, no less; however, ilastik was unable to carve the large EM stacks we needed to segment because its memory requirements grew too large – even for the biggest workstations we had available. For this reason, we refactored the carving module of ilastik to scale it up to large EM stacks on large workstations, and tested its efficiency. We modified the carving module, building on existing blockwise processing functionality to process data in manageable chunks that fit within RAM (main memory). We review this refactoring work, highlighting the software architecture, design choices, modifications, and issues encountered.
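    The core idea of the refactoring, processing in blocks so peak memory stays bounded, can be illustrated independently of ilastik. The Python sketch below is not the refactored carving module itself; the file layout, dataset name, and the Gaussian filter standing in for the real per-block work are assumptions for the example.

    ```python
    # Sketch: stream an HDF5 EM volume through a filter in fixed-size blocks,
    # reading each block with a halo so results match across block borders.
    import h5py
    from scipy.ndimage import gaussian_filter

    BLOCK = 256   # block edge length in voxels; tune to available RAM
    HALO = 8      # extra context so the filter behaves consistently at borders

    with h5py.File("em_stack.h5", "r") as src, h5py.File("out.h5", "w") as dst:
        vol = src["volume"]                        # hypothetical dataset name
        out = dst.create_dataset("volume", vol.shape, dtype=vol.dtype, chunks=True)
        for z in range(0, vol.shape[0], BLOCK):
            for y in range(0, vol.shape[1], BLOCK):
                for x in range(0, vol.shape[2], BLOCK):
                    zs, ys, xs = max(z - HALO, 0), max(y - HALO, 0), max(x - HALO, 0)
                    block = vol[zs:z + BLOCK + HALO,
                                ys:y + BLOCK + HALO,
                                xs:x + BLOCK + HALO]       # only this block in RAM
                    res = gaussian_filter(block, sigma=2)  # stand-in per-block work
                    out[z:z + BLOCK, y:y + BLOCK, x:x + BLOCK] = \
                        res[z - zs:z - zs + BLOCK,
                            y - ys:y - ys + BLOCK,
                            x - xs:x - xs + BLOCK]
    ```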
  • Multi-Scale Coupling Between Monte Carlo Molecular Simulation and Darcy-Scale Flow in Porous Media

    Saad, Ahmed Mohamed; Kadoura, Ahmad Salim; Sun, Shuyu (Elsevier BV, 2016-06-01)
    In this work, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation domain and solve the governing equations. To speed up the MC simulations, we implemented a recently developed scheme that quickly generates MC Markov chains out of pre-computed ones, based on the reweighting and reconstruction algorithm. This method dramatically reduces the computational time required by the MC simulations, from hours to seconds. To demonstrate the strength of the proposed coupling in terms of computational efficiency and numerical accuracy of fluid properties, various numerical experiments covering different compressible single-phase flow scenarios were conducted. The novelty of the introduced scheme lies in allowing an efficient coupling of the molecular scale and the Darcy scale in reservoir simulators. This leads to an accurate description of the thermodynamic behavior of the simulated reservoir fluids, consequently enhancing confidence in the flow predictions in porous media.
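    To make the Darcy-scale half of the coupling concrete, the sketch below solves a 1D single-phase pressure equation with the cell-centered finite difference method on a non-uniform mesh. It is a minimal illustration with constant viscosity and Dirichlet boundary pressures, not the paper's coupled solver, which would obtain fluid properties from the MC side.

    ```python
    # Sketch: cell-centered finite differences for -d/dx((k/mu) dp/dx) = 0
    # on a non-uniform 1D mesh with Dirichlet boundary pressures.
    import numpy as np

    def darcy_pressure_1d(dx, k, mu, p_left, p_right):
        n = len(dx)
        dx = np.asarray(dx, dtype=float)
        x = np.cumsum(dx) - 0.5 * dx               # cell centers
        T = np.zeros(n + 1)                        # face transmissibilities
        for i in range(1, n):                      # harmonic average on interior faces
            T[i] = 2.0 / (dx[i-1] / k[i-1] + dx[i] / k[i]) / mu
        T[0] = 2.0 * k[0] / (dx[0] * mu)           # half-cell distances at boundaries
        T[n] = 2.0 * k[-1] / (dx[-1] * mu)
        A = np.zeros((n, n))
        b = np.zeros(n)
        for i in range(n):                         # flux balance for each cell
            A[i, i] = T[i] + T[i+1]
            if i > 0:
                A[i, i-1] = -T[i]
            if i < n - 1:
                A[i, i+1] = -T[i+1]
        b[0] += T[0] * p_left
        b[-1] += T[n] * p_right
        return x, np.linalg.solve(A, b)

    x, p = darcy_pressure_1d(dx=[1.0, 0.5, 0.5, 2.0], k=[1e-12] * 4,
                             mu=1e-3, p_left=2e5, p_right=1e5)
    ```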
  • PathlinesExplorer — Image-based exploration of large-scale pathline fields

    Nagoor, Omniah H.; Hadwiger, Markus; Srinivasan, Madhusudhanan (Institute of Electrical and Electronics Engineers (IEEE), 2015-10-25)
    PathlinesExplorer is a novel image-based tool designed to visualize large-scale pathline fields on a single computer [7]. PathlinesExplorer integrates the explorable images (EI) technique [4] with the order-independent transparency (OIT) method [2]. What makes this method different is that it allows users to handle large data on a single workstation. Although it is a view-dependent method, PathlinesExplorer combines both exploration and modification of visual aspects without re-accessing the original huge data. Our approach is based on constructing a per-pixel linked-list data structure in which each pixel contains a list of pathline segments. With this view-dependent method, it is possible to filter, color-code, and explore large-scale flow data in real time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, further improving the performance and scalability of our approach.
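    The per-pixel linked list at the heart of the approach can be sketched on the CPU. The fragment below is an illustration of the data structure and of front-to-back compositing with early termination, not PathlinesExplorer's GPU implementation; on the GPU the head pointers would live in a texture updated with atomic exchanges.

    ```python
    # Sketch: per-pixel linked lists of fragments with depth-sorted compositing.
    import numpy as np

    W, H = 640, 480
    heads = np.full((H, W), -1, dtype=np.int64)   # per-pixel head "pointers"
    nodes = []                                    # flat node pool: (depth, rgba, next)

    def insert_fragment(x, y, depth, rgba):
        """Prepend a fragment to pixel (x, y)'s list."""
        nodes.append((depth, rgba, heads[y, x]))
        heads[y, x] = len(nodes) - 1

    def resolve(x, y, alpha_cutoff=0.99):
        """Depth-sort one pixel's fragments and composite front to back."""
        frags, i = [], heads[y, x]
        while i != -1:
            depth, rgba, i = nodes[i]             # follow the "next" pointer
            frags.append((depth, rgba))
        frags.sort(key=lambda f: f[0])            # nearest fragment first
        color, alpha = np.zeros(3), 0.0
        for _, (r, g, b, a) in frags:
            color += (1.0 - alpha) * a * np.array([r, g, b])
            alpha += (1.0 - alpha) * a
            if alpha > alpha_cutoff:              # early termination
                break
        return color, alpha

    insert_fragment(10, 20, depth=0.7, rgba=(1.0, 0.0, 0.0, 0.5))
    insert_fragment(10, 20, depth=0.3, rgba=(0.0, 0.0, 1.0, 0.5))
    print(resolve(10, 20))
    ```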
  • Three-dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues

    Cali, Corrado; Baghabrah, Jumana; Boges, Daniya; Holst, Glendon; Kreshuk, Anna; Hamprecht, Fred A.; Srinivasan, Madhusudhanan; Lehväslaiho, Heikki; Magistretti, Pierre J. (Wiley-Blackwell, 2015-07-14)
    Advances in the application of electron microscopy to serial imaging are opening doors to new ways of analyzing cellular structure. New and improved algorithms and workflows for manual and semiautomated segmentation make it possible to observe the spatial arrangement of the smallest cellular features with unprecedented detail in full three dimensions (3D). From larger samples, models of higher complexity can be generated; however, these pose new challenges for data management and analysis. Here, we review some currently available solutions and present our approach in detail. We use a CAVE (cave automatic virtual environment), a fully immersive virtual reality (VR) room in which we can project a cellular reconstruction and visualize it in 3D, to step into a world created with Blender, a free, fully customizable 3D modeling software with NeuroMorph plug-ins for visualization and analysis of electron microscopy (EM) preparations of brain tissue. Our workflow allows full and fast reconstructions of volumes of brain neuropil using ilastik, a software tool for semiautomated segmentation of EM stacks. With this visualization environment, we can walk into the model containing neuronal and astrocytic processes to study the spatial distribution of glycogen granules, a major energy source that is selectively stored in astrocytes. The use of the CAVE was key to observing a nonrandom distribution of glycogen, and led us to develop tools to quantitatively analyze glycogen clustering and proximity to other subcellular features. This article is protected by copyright. All rights reserved.
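    The clustering analysis mentioned at the end can be illustrated with a standard nearest-neighbour test. The sketch below uses hypothetical granule coordinates and SciPy, not the NeuroMorph/Blender tooling the authors developed; it only shows the general statistical idea of comparing observed spacing against a uniform random control.

    ```python
    # Sketch: test point clustering by comparing mean nearest-neighbour
    # distances of observed points against uniform random controls.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    glycogen = rng.uniform(0.0, 10.0, size=(500, 3))  # placeholder coordinates (um)

    def mean_nn_distance(points):
        d, _ = cKDTree(points).query(points, k=2)     # k=2 skips each point itself
        return d[:, 1].mean()

    observed = mean_nn_distance(glycogen)
    control = np.mean([mean_nn_distance(rng.uniform(0.0, 10.0, size=glycogen.shape))
                       for _ in range(100)])
    # observed well below control would indicate clustering.
    print(f"observed {observed:.3f} um vs. random {control:.3f} um")
    ```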
  • Cavlectometry: Towards holistic reconstruction of large mirror objects

    Balzer, Jonathan; Acevedo-Feliz, Daniel; Soatto, Stefano; Höfer, Sebastian G.; Hadwiger, Markus; Beyerer, Jürgen (Institute of Electrical and Electronics Engineers (IEEE), 2014-12)
    We introduce a method based on the deflectometry principle for the reconstruction of specular objects of significant size and geometric complexity. A key feature of our approach is the deployment of a CAVE automatic virtual environment as the pattern generator. To unfold the full power of this experimental setup, an optical encoding scheme is developed that accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction, and the background for estimation of the camera pose, which is necessary for calibrating the sensor system. Experiments suggest a significant gain in coverage in single measurements compared to previous methods. © 2014 IEEE.
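    The paper's encoding scheme is specific to the CAVE's topology, but as general background, deflectometry setups commonly display stripe patterns that let each camera pixel be decoded to a screen position. The sketch below generates classic binary-reflected Gray-code stripe patterns; it is illustrative context, not the encoding devised in the paper.

    ```python
    # Sketch: Gray-code stripe patterns for screen-to-camera correspondence.
    import numpy as np

    def gray_code_patterns(width, bits):
        """Return `bits` 1D stripe patterns; stacked, they encode each column."""
        cols = np.arange(width)
        gray = cols ^ (cols >> 1)                  # binary-reflected Gray code
        return [((gray >> b) & 1).astype(np.uint8) * 255 for b in range(bits)]

    patterns = gray_code_patterns(width=1920, bits=11)  # resolves 2**11 columns
    # Each pattern would be tiled vertically and displayed on a CAVE wall while
    # a camera records its reflection in the mirror object.
    ```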
  • Poster: Observing change in crowded data sets in 3D space - Visualizing gene expression in human tissues

    Rogowski, Marcin; Cannistraci, Carlo; Alanis Lobato, Gregorio; Weber, Philip P.; Ravasi, Timothy; Schulze, Jürgen P.; Acevedo-Feliz, Daniel (Institute of Electrical and Electronics Engineers (IEEE), 2013-03)
    We have been confronted with the real-world problem of visualizing and observing change in gene expression between different human tissues. In this paper, we present a universal representation space based on two-dimensional gel electrophoresis, as opposed to the force-directed layouts most often encountered in similar problems. We discuss the methods we devised to make observing change more convenient in a 3D virtual reality environment. © 2013 IEEE.
  • Poster: Virtual reality interaction using mobile devices

    Aseeri, Sahar A.; Acevedo-Feliz, Daniel; Schulze, Jürgen P. (Institute of Electrical and Electronics Engineers (IEEE), 2013-03)
    In this work we aim to implement and evaluate alternative approaches to interacting with virtual environments on mobile devices for navigation, object selection, and manipulation. Interaction with objects in virtual worlds using traditional state-of-the-art input devices is often difficult and can diminish the immersion and sense of presence in 3D virtual environment tasks. We have developed new methods to perform different kinds of interactions using a mobile device (e.g., a smartphone) both as an input device, performing selection and manipulation of objects, and as an output device, utilizing the screen as an extra view (virtual camera or information display). Our hypothesis is that interaction via mobile devices facilitates tasks like these within immersive virtual reality systems. We present here our initial implementation and results. © 2013 IEEE.
  • Use of X-ray diffraction, molecular simulations, and spectroscopy to determine the molecular packing in a polymer-fullerene bimolecular crystal

    Miller, Nichole Cates; Cho, Eunkyung; Junk, Matthias J N; Gysel, Roman; Risko, Chad; Kim, Dongwook; Sweetnam, Sean; Miller, Chad E.; Richter, Lee J.; Kline, Regis Joseph; Heeney, Martin J.; McCulloch, Iain A.; Amassian, Aram; Acevedo-Feliz, Daniel; Knox, Christopher; Hansen, Michael Ryan; Dudenko, Dmytro V.; Chmelka, Bradley F.; Toney, Michael F.; Brédas, Jean Luc; McGehee, Michael D. (Wiley-Blackwell, 2012-09-05)
    The molecular packing in a polymer:fullerene bimolecular crystal is determined using X-ray diffraction (XRD), molecular mechanics (MM) and molecular dynamics (MD) simulations, 2D solid-state NMR spectroscopy, and IR absorption spectroscopy. The conformation of the electron-donating polymer is significantly disrupted by the incorporation of the electron-accepting fullerene molecules, which introduce twists and bends along the polymer backbone and create 1D electron-conducting fullerene channels. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
  • KAUST Campus and Visualization Laboratory

    Bailey, April Renee; Acevedo-Feliz, Daniel; Cutchin, Steve; Vankov, Stephan (Vimeo, 2012-06-19)
    A visual and textual overview of King Abdullah University of Science and Technology (KAUST) campus in the Kingdom of Saudi Arabia and KAUST Visualization Laboratory (KVL).
  • Democratizing rendering for multiple viewers in surround VR systems

    Schulze, Jürgen P.; Acevedo-Feliz, Daniel; Mangan, John; Prudhomme, Andrew; Nguyen, Phi Khanh; Weber, Philip P. (Institute of Electrical and Electronics Engineers (IEEE), 2012-03)
    We present a new approach to rendering multiple users' views in a surround virtual environment without special multi-view hardware. It is based on the idea that different parts of the screen are often viewed by different users, so they can be rendered from each user's own viewpoint, or at least from a point closer to that viewpoint than traditionally expected. The vast majority of 3D virtual reality systems are designed for one head-tracked user and a number of passive viewers: only the head-tracked user sees the correct view of the scene; everybody else sees a distorted image. We reduce this problem by algorithmically democratizing the rendering viewpoint among all tracked users. Researchers have proposed solutions for multiple tracked users, but most require major changes to the display hardware of the VR system, such as additional projectors or custom VR glasses. Our approach requires no additional hardware beyond the ability to track each participating user. We propose three versions of our multi-viewer algorithm. Each balances image distortion and frame rate differently, making them more or less suitable for particular application scenarios. Our most sophisticated algorithm renders each pixel from its own optimized camera perspective, which depends on all tracked users' head positions and orientations. © 2012 IEEE.
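    The idea of rendering each pixel from a viewpoint interpolated among the tracked users can be sketched compactly. The following fragment is a minimal illustration of one plausible blending rule (inverse-distance weighting toward each user's gaze point on the screen); it is not one of the paper's three algorithms.

    ```python
    # Sketch: blend tracked users' eye positions per pixel, weighting each user
    # by how close the pixel is to where that user is looking on the screen.
    import numpy as np

    def blended_eye(pixel_xy, gaze_points, eye_positions, softness=200.0):
        """pixel_xy: (2,) pixel; gaze_points: (n, 2) screen intersections of
        each user's view direction; eye_positions: (n, 3) tracked heads."""
        d = np.linalg.norm(np.asarray(gaze_points, dtype=float) - pixel_xy, axis=1)
        w = 1.0 / (d + softness)                   # softened inverse distance
        w /= w.sum()
        return w @ np.asarray(eye_positions)       # per-pixel camera position

    eye = blended_eye(pixel_xy=np.array([300.0, 400.0]),
                      gaze_points=[[320.0, 410.0], [1500.0, 600.0]],
                      eye_positions=[[0.0, 1.7, 2.0], [1.0, 1.6, 2.5]])
    ```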
  • KAUST Supercomputing Laboratory

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew (Vimeo, 2011-11-15)
    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST hosts the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.
  • Demonstration of Audio Switcher Functionality in Showcase Room Visualization Lab

    Seldess, Zachary; Bailey, April Renee; Riker, Paul; Yamada, Toshiro (Vimeo, 2011-05-25)
  • Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback

    Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steven (SPIE-Intl Soc Optical Eng, 2011-01-23)
    Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat-screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it were a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology, and medicine, as well as other tasks that require hand-eye coordination. One of HUVR's most distinctive characteristics is that a user can place their hands inside the virtual environment without occluding the 3D image. Built using open-source software and consumer-level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable, and scalable.
  • Acquisition of stereo panoramas for display in VR environments

    Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan (SPIE-Intl Soc Optical Eng, 2011-01-23)
    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.
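    The geometry of the stereo separation described above reduces to keeping the interocular offset perpendicular to the gaze in the horizontal plane. The sketch below computes the two eye centers for a given viewing direction; the coordinate conventions and the 65 mm interocular default are assumptions for the example.

    ```python
    # Sketch: left/right eye centers stay separated perpendicular to the gaze,
    # so stereo separation is correct whichever way the viewer turns.
    import numpy as np

    def eye_positions(center, yaw_rad, iod=0.065):
        forward = np.array([np.sin(yaw_rad), 0.0, -np.cos(yaw_rad)])
        right = np.cross(forward, np.array([0.0, 1.0, 0.0]))  # horizontal right vector
        return center - 0.5 * iod * right, center + 0.5 * iod * right

    left_eye, right_eye = eye_positions(center=np.zeros(3),
                                        yaw_rad=np.deg2rad(30.0))
    ```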
  • The future of the CAVE

    DeFanti, Thomas; Acevedo-Feliz, Daniel; Ainsworth, Richard; Brown, Maxine; Cutchin, Steven; Dawe, Gregory; Doerr, Kai-Uwe; Johnson, Andrew; Knox, Christopher; Kooima, Robert; Kuester, Falko; Leigh, Jason; Long, Lance; Otto, Peter; Petrovic, Vid; Ponto, Kevin; Prudhomme, Andrew; Rao, Ramesh; Renambot, Luc; Sandin, Daniel; Schulze, Jurgen; Smarr, Larry; Srinivasan, Madhusudhanan; Weber, Philip; Wickham, Gregory (Walter de Gruyter GmbH, 2011-01-01)
    The CAVE, a walk-in virtual reality environment typically consisting of 4–6 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.
  • KCC1: First Nanoparticle developed by KAUST Catalysis Center

    Basset, Jean-Marie; Bailey, April Renee; Farago, Amy; McElwee, Terence; Polshettiwar, Vivek; Srinivasan, Madhu (2010-08)
    KCC1 is the first nanoparticle developed by the KAUST Catalysis Center (KCC). The Director of the KAUST Catalysis Center, Dr. Jean-Marie Basset, Senior Research Scientist at KCC, Dr. Vivek Polshettiwar, and Dr. Dongkyu Cha of the Advanced Nanofabrication Imaging & Characterization Core Laboratory discuss the details of this recent discovery. This video was produced by the KAUST Visualization Laboratory and KAUST Technology Transfer and Innovation (Terence McElwee, Director, Technology Transfer and Innovation). This technology is part of KAUST's technology commercialization program, which seeks to stimulate the development and commercial use of KAUST-developed technologies. For more information, email us at ip@kaust.edu.sa.
  • Multisource reverse-time migration and full-waveform inversion on a GPGPU

    Boonyasiriwat, Chaiwoot; Zhan, Ge; Hadwiger, Markus; Srinivasan, Madhusudhanan; Schuster, Gerard T. (EAGE Publications, 2010-01-01)
