09.11.03
Report to the Dept. of Energy Advanced Scientific Computing Advisory Committee
Title: Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure ANL * Calit2 * LBNL * NICS * ORNL * SDSC
Oak Ridge, TN
Opportunities for Advanced Technology in Telecommunications (Larry Smarr)
06.12.07
Invited Talk
37th IEEE Semiconductor Interface Specialists Conference
Catamaran Resort Hotel
Title: Opportunities for Advanced Technology in Telecommunications
San Diego, CA
Calit2 is an experiment in multi-disciplinary collaboration between UC San Diego and UC Irvine. It brings together over 350 faculty to conduct research at the intersection of telecommunications, information technology, and their applications. Calit2 has built extensive infrastructure including dedicated optical networks and wireless testbeds to enable new forms of collaboration and applications like telepresence and large-scale visualization. Its goal is to help invent new models for collaborative research and education that can transform the university and society in the future.
SC21: Larry Smarr on The Rise of Supernetwork Data Intensive Computing
Larry Smarr, founding director of Calit2 (now Distinguished Professor Emeritus at the University of California San Diego) and the first director of NCSA, is one of the seminal figures in the U.S. supercomputing community. What began as a personal drive, shared by others, to spur the creation of supercomputers in the U.S. for scientific use, later expanded into a drive to link those supercomputers with high-speed optical networks, and blossomed into the notion of building a distributed, high-performance computing infrastructure – replete with compute, storage and management capabilities – available broadly to the science community.
Remote Telepresence for Exploring Virtual Worlds (Larry Smarr)
The document describes the history and development of remote telepresence and virtual reality technologies over several decades. It outlines key projects and innovations including the NSFnet which connected supercomputers in the 1980s, the development of the CAVE virtual reality system in the early 1990s, and more advanced optical network projects like OptIPuter in the 2000s which enabled high-resolution telepresence and collaboration across global research centers.
Bringing Mexico Into the Global LambdaGrid (Larry Smarr)
12.03.13
CENIC 2012 Conference Award Talk
2012 CENIC Innovations in Networking Award for High-Performance Research Applications: Enhancing Mexican/American Research Collaborations.
Title: Bringing Mexico Into the Global LambdaGrid
Palo Alto, CA
Why Researchers are Using Advanced Networks (Larry Smarr)
07.07.03
Remote Talk from Calit2 to:
Building KAREN Communities for Collaboration Forum
KIWI Advanced Research and Education Network
University of Auckland, Auckland City, New Zealand
Title: Why Researchers are Using Advanced Networks
La Jolla, CA
OptIPuter-A High Performance SOA LambdaGrid Enabling Scientific Applications (Larry Smarr)
07.03.21
IEEE Computer Society Tsutomu Kanai Award Keynote
At the Joint Meeting of the: 8th International Symposium on Autonomous Decentralized Systems
2nd International Workshop on Ad Hoc, Sensor and P2P Networks
11th IEEE International Workshop on Future Trends of Distributed Computing Systems
Title: OptIPuter-A High Performance SOA LambdaGrid Enabling Scientific Applications
Sedona, AZ
Toward a Global Interactive Earth Observing Cyberinfrastructure (Larry Smarr)
The document discusses the need for a new generation of cyberinfrastructure to support interactive global earth observation. It outlines several prototyping projects that are building examples of systems enabling real-time control of remote instruments, remote data access and analysis. These projects are driving the development of an emerging cyber-architecture using web and grid services to link distributed data repositories and simulations.
The Jump to Light Speed - Data Intensive Earth Sciences are Leading the Way to the International LambdaGrid (Larry Smarr)
05.06.14
Keynote to the 15th Federation of Earth Science Information Partners Assembly Meeting: Linking Data and Information to Decision Makers
Title: The Jump to Light Speed - Data Intensive Earth Sciences are Leading the Way to the International LambdaGrid
San Diego, CA
From the Shared Internet to Personal Lightwaves: How the OptIPuter is Transfo... (Larry Smarr)
The document summarizes how the OptIPuter project is transforming scientific research through user-controlled high-speed optical network connections. It provides examples of how 1-10Gbps connections through projects like National LambdaRail are enabling new forms of collaborative work and access to scientific instruments and global data repositories. The OptIPuter creates an environment where researchers can access remote resources through local "OptIPortals" connected to these high-speed optical networks.
Information Technology Infrastructure Committee (ITIC): Report to the NAC (Larry Smarr)
This document summarizes the December 2013 report from the NASA Advisory Council's Information Technology Infrastructure Committee (ITIC). It discusses NASA's transition to a more agile, collaborative agency that brings together experts from multiple centers to solve problems. The report outlines NASA's vision for a "OneNASA" organization enabled by unified IT tools and infrastructure. It also notes that NASA has begun implementing improved IT governance and developing a framework to coordinate IT investments across centers and missions.
High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Science and Engineering (Larry Smarr)
11.03.28
Remote Luncheon Presentation from Calit2@UCSD
National Science Board
Expert Panel Discussion on Data Policies
National Science Foundation
Title: High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Science and Engineering
Arlington, Virginia
The document discusses the growing carbon footprint of information and communication technologies (ICT) and efforts to make cyberinfrastructure more energy efficient and environmentally sustainable. Specifically, it mentions that (1) ICT energy usage is growing rapidly and accounts for 2% of global greenhouse gas emissions, (2) universities are working on initiatives like the GreenLight project to reduce ICT energy usage through techniques like dynamic power management, and (3) further research is needed to develop more energy-efficient computing technologies, data center designs, and videoconferencing solutions to reduce the need for travel.
The OptiPuter, Quartzite, and Starlight Projects: A Campus to Global-Scale Testbed for Optical Technologies Enabling LambdaGrid Computing (Larry Smarr)
05.03.09
Invited Talk
Optical Fiber Communication Conference (OFC2005)
Title: The OptiPuter, Quartzite, and Starlight Projects: A Campus to Global-Scale Testbed for Optical Technologies Enabling LambdaGrid Computing
Anaheim, CA
Science and Cyberinfrastructure in the Data-Dominated Era (Larry Smarr)
10.02.22
Invited talk
Symposium #1610, How Computational Science Is Tackling the Grand Challenges Facing Science and Society
Title: Science and Cyberinfrastructure in the Data-Dominated Era
San Diego, CA
How to Terminate the GLIF by Building a Campus Big Data Freeway System (Larry Smarr)
12.10.11
Keynote Lecture
12th Annual Global LambdaGrid Workshop
Title: How to Terminate the GLIF by Building a Campus Big Data Freeway System
Chicago, IL
High Performance Cyberinfrastructure Enables Data-Driven Science in the Globally Networked World (Larry Smarr)
10.10.28
Invited Speaker
Grand Challenges in Data-Intensive Discovery Conference
San Diego Supercomputer Center, UC San Diego
Title: High Performance Cyberinfrastructure Enables Data-Driven Science in the Globally Networked World
La Jolla, CA
This document provides an update on perfSONAR network measurement tools, the IRIS and DyGIR projects, the Archipelago measurement platform, network services on TransPAC3 and ACE, and the Data Logistics Toolkit. Key points include:
- perfSONAR and OSCARS software will be used to provide monitoring and dynamic circuit services on TransPAC3 and ACE.
- The IRIS and DyGIR projects will develop monitoring and dynamic circuit software packages for international research networks.
- The Archipelago platform conducts large-scale IPv4 topology measurements from over 50 probes worldwide.
- TransPAC3 and ACE will provide high-performance connectivity between regions and dedicated infrastructure for data movement using the
A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Intensive Research (Larry Smarr)
11.12.12
Seminar Presentation
Princeton Institute for Computational Science and Engineering (PICSciE)
Princeton University
Title: A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Intensive Research
Princeton, NJ
The Barcelona Supercomputing Center (BSC) was established in 2005 and hosts MareNostrum, one of the most powerful supercomputers in Spain. We are the pioneering supercomputing center in Spain. Our specialty is high-performance computing (HPC), and our mission is twofold: to offer supercomputing infrastructure and services to Spanish and European scientists, and to generate knowledge and technology for transfer to society. We are a Severo Ochoa Center of Excellence, a first-level member of the European research infrastructure PRACE (Partnership for Advanced Computing in Europe), and we manage the Spanish Supercomputing Network (RES). As a research center, we have more than 456 experts from 45 countries, organized into four major research areas: Computer Sciences, Life Sciences, Earth Sciences, and computational applications in science and engineering.
The document discusses the evolution of computer architectures from early technological achievements like the transistor and integrated circuit. It describes increasing transistor densities following Moore's Law. Future technologies will focus on increasing core counts while decreasing cycle times and voltages. Performance will come from parallelism rather than clock speed increases due to heat limitations. The document outlines challenges in scaling to exascale systems by 2018.
In this deck from the 2019 Stanford HPC Conference, Rob Neely, from Lawrence Livermore National Laboratory presents: Sierra - Science Unleashed.
"This talk will give an overview of Sierra and some of the early science results it has enabled. Sierra is an IBM system harnessing the power of over 17,000 NVIDIA Volta GPUs recently deployed at Lawrence Livermore National Laboratory and is currently ranked as the #2 system on the Top500. Before being turned over for use in the classified mission, Sierra spent months in an “open science campaign” where we got an early glimpse at some of the truly game-changing science this system will unleash – selected results of which will be presented."
Rob Neely is a Computer Scientist and Technical Manager at Lawrence Livermore National Laboratory where he is the Weapon Simulation & Computing Program Coordinator for Computing Environments, and the Associate Division Lead for the Center for Applied Scientific Computing (CASC). He also is the DOE Exascale Computing Project lead for Software Technologies Ecosystem and Delivery. He has been involved in High Performance Computing for his entire 25+ year career.
Learn more: https://computation.llnl.gov/computers/sierra
and
http://hpcadvisorycouncil.com/events/2019/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document evaluates Lustre 2.9 and OpenStack for providing isolated POSIX file systems to tenants in OpenStack. It finds that Lustre 2.9's UID mapping can isolate tenants while maintaining high performance, and that both physical and virtual Lustre routers can route traffic between tenants effectively, albeit with some increased east-west traffic when virtual routers are used.
Streaming exa-scale data over 100Gbps networks (balmanme)
This document discusses streaming exascale data over 100Gbps networks. It summarizes a demonstration at SC11 where climate simulation data was transferred from NERSC to ANL and ORNL at 83Gbps using a memory-mapped zero-copy network channel called MemzNet. The demonstration showed efficient transfer of large datasets containing many small files is possible over high-bandwidth networks through parallel streams, decoupling I/O and network operations, and dynamic data channel management. High-performance was achieved by keeping the data channel full through concurrent transfers and leveraging high-speed networking testbeds like ANI.
High Performance Cyberinfrastructure Enabling Data-Driven Science in the Biomedical Sciences (Larry Smarr)
11.04.06
Joint Presentation
UCSD School of Medicine Research Council
Larry Smarr, Calit2 & Phil Papadopoulos, SDSC/Calit2
Title: High Performance Cyberinfrastructure Enabling Data-Driven Science in the Biomedical Sciences
In this deck from the 2017 MVAPICH User Group, Adam Moody from Lawrence Livermore National Laboratory presents: MVAPICH: How a Bunch of Buckeyes Crack Tough Nuts.
"High-performance computing is being applied to solve the world's most daunting problems, including researching climate change, studying fusion physics, and curing cancer. MPI is a key component in this work, and as such, the MVAPICH team plays a critical role in these efforts. In this talk, I will discuss recent science that MVAPICH has enabled and describe future research that is planned. I will detail how the MVAPICH team has responded to address past problems and list the requirements that future work will demand."
Watch the video: https://wp.me/p3RLHQ-hp6
40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility (inside-BigData.com)
In this deck from the Swiss HPC Conference, Mark Wilkinson presents: 40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility.
"DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, and astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved. As a single federated Facility, DiRAC allows more effective and efficient use of computing resources, supporting the delivery of the science programs across the STFC research communities. It provides a common training and consultation framework and, crucially, provides critical mass and a coordinating structure for both small- and large-scale cross-discipline science projects, the technical support needed to run and develop a distributed HPC service, and a pool of expertise to support knowledge transfer and industrial partnership projects. The on-going development and sharing of best-practice for the delivery of productive, national HPC services with DiRAC enables STFC researchers to produce world-leading science across the entire STFC science theory program."
Watch the video: https://wp.me/p3RLHQ-k94
Learn more: https://dirac.ac.uk/
and
http://hpcadvisorycouncil.com/events/2019/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Programming Trends in High Performance Computing (Juris Vencels)
Presented on June 3, 2016 @
University of Latvia, Faculty of Physics and Mathematics
Laboratory for mathematical modelling of environmental and technological processes
Enjoy, like, share, distribute, remix, tweak, credit is not required.
Riding the Light: How Dedicated Optical Circuits are Enabling New Science (Larry Smarr)
The document discusses how dedicated optical circuits are enabling new science through high-bandwidth networks. It provides examples of several projects using dedicated optical networks, such as the OptIPuter project, to enable interactive analysis of large datasets through terabit network connections between supercomputing centers. The document concludes by discussing future ocean observatory networks that will use undersea fiber optics to enable remote interactive imaging and sensing.
Altreonic was spun off in 2008 from Eonic Systems to focus on real-time operating systems using formal techniques. Their OpenComRTOS is a small, network-centric real-time OS that uses CSP concurrency and can scale from 1 to over 10,000 nodes. It provides priority-based communication and fault tolerance and has been implemented on many heterogeneous platforms from DSPs to many-core systems.
Sierra will be LLNL's next advanced technology system and part of the CORAL collaboration between ORNL, ANL, and LLNL. Sierra will replace the current Sequoia system and feature an IBM POWER9 and NVIDIA Volta GPU accelerated architecture with over 125 PFLOPS of peak performance. Benchmark projections show the GPU-accelerated Sierra system is expected to deliver substantial performance gains compared to a CPU-only configuration. Sierra and its follow-on systems will usher in an accelerator-based computing era at LLNL.
Similar to Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure ANL * Calit2 * LBNL * NICS * ORNL * SDSC (20)
The Rise of Supernetwork Data Intensive Computing (Larry Smarr)
Invited Remote Lecture to SC21
The International Conference for High Performance Computing, Networking, Storage, and Analysis
St. Louis, Missouri
November 18, 2021
My Remembrances of Mike Norman Over The Last 45 Years (Larry Smarr)
Mike Norman has been a leader in computational astrophysics for over 45 years. Some of his influential work includes:
- Cosmic jet simulations in the early 1980s which helped explain phenomena from galactic centers.
- Pioneering the use of adaptive mesh refinement in the 1990s to achieve dynamic load balancing on supercomputers.
- Massive cosmology simulations in the late 2000s with over 100 trillion particles using thousands of processors across multiple supercomputing sites, producing petabytes of data.
- Developing end-to-end workflows in the 2000s to couple supercomputers, high-speed networks, and large visualization systems to enable real-time analysis of extremely large astrophysics simulations.
Metagenics How Do I Quantify My Body and Try to Improve its Health? June 18, 2019 (Larry Smarr)
Larry Smarr discusses quantifying his body and health over time through extensive self-tracking. He measures various biomarkers through regular blood tests and analyzes his gut microbiome by sequencing stool samples. This revealed issues like chronic inflammation and an unhealthy microbiome. Smarr then took steps like a restricted eating window and increasing plant diversity in his diet, which reversed metabolic syndrome issues and correlated with shifts in his microbiome ecology. His goal is to continue precisely measuring factors like toxins, hormones, gut permeability and food/supplement impacts to further optimize his health.
Panel: Reaching More Minority Serving Institutions (Larry Smarr)
This document discusses engaging more minority serving institutions (MSIs) in cyberinfrastructure development through regional networks. It provides data showing the importance of MSIs like historically black colleges and universities (HBCUs) in educating underrepresented minority students in STEM fields. Regional networks can help equalize opportunities by assisting MSIs in overcoming barriers to resources through training, networking infrastructure support, and helping institutions obtain necessary staffing and funding. Strategies mentioned include collaborating with MSIs on grants and addressing issues identified in surveys like lack of vision for data use beyond compliance. The goal is to broaden participation in STEAM fields by leveraging the success MSIs have shown in supporting underrepresented students.
Global Network Advancement Group - Next Generation Network-Integrated Systems (Larry Smarr)
This document summarizes a presentation on global petascale to exascale workflows for data intensive sciences. It discusses a partnership convened by the GNA-G Data Intensive Sciences Working Group with the mission of meeting challenges faced by data-intensive science programs. Cornerstone concepts that will be demonstrated include integrated network and site resource management, model-driven frameworks for resource orchestration, end-to-end monitoring with machine learning-optimized data transfers, and integrating Qualcomm's GradientGraph with network services to optimize applications and science workflows.
Wireless FasterData and Distributed Open Compute Opportunities and (some) Us... (Larry Smarr)
This document discusses opportunities for ESnet to support wireless edge computing through developing a strategy around self-guided field laboratories (SGFL). It outlines several potential science use cases that could benefit from wireless and distributed computing capabilities, both in the short term through technologies like 5G, LoRa and Starlink, and longer term through the vision of automated SGFL. The document proposes some initial ideas for deploying and testing wireless edge computing technologies through existing projects to help enable the SGFL vision and further scientific opportunities. It emphasizes that exploring these emerging areas could help drive new science possibilities if done at a reasonable scale.
Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure ANL * Calit2 * LBNL * NICS * ORNL * SDSC
1. Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
Report to the Dept. of Energy Advanced Scientific Computing Advisory Committee
Oak Ridge, TN, November 3, 2009
Dr. Larry Smarr
Director, California Institute for Telecommunications and Information Technology
Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD
Twitter: lsmarr
3. Credits
Lawrence Berkeley National Laboratory (ESnet): Eli Dart
San Diego Supercomputer Center: Science application - Michael Norman, Rick Wagner (coordinator); Network - Tom Hutton
Oak Ridge National Laboratory: Susan Hicks
National Institute for Computational Sciences: Nathaniel Mendoza
Argonne National Laboratory: Network/Systems - Linda Winkler, Loren Jan Wilson; Visualization - Joseph Insley, Eric Olsen, Mark Hereld, Michael Papka
[email_address]: Larry Smarr (Overall Concept), Brian Dunne (Networking), Joe Keefe (OptIPortal), Kai Doerr, Falko Kuester (CGLX)
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
5. Project StarGate Goals
Explore Use of OptIPortals as Petascale Supercomputer “Scalable Workstations”
Exploit Dynamic 10 Gb/s Circuits on ESnet
Connect Hardware Resources at ORNL, ANL, SDSC
Show that Data Need Not be Trapped by the Network “Event Horizon”
[email_address] Rick Wagner, Mike Norman
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
6. Why Supercomputer Centers Shouldn’t Be Data Black Holes or Island Universes
Results are the Intellectual Property of the Investigator, Not the Center Where it was Computed
Petascale HPC Machines Not Ideal for Analysis/Viz
Doesn’t Take Advantage of Local CI Resources on Campuses (e.g., Triton) or at other National Facilities (e.g., ANL Eureka)
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
7. Opening Up 10Gbps Data Path ORNL/NICS to ANL to SDSC
Connectivity provided by ESnet Science Data Network
End-to-End Coupling of User with DOE/NSF HPC Facilities
8. StarGate Network & Hardware
ALCF (ANL): DOE Eureka (rendering) - 100 Dual Quad-Core Xeon Servers, 200 NVIDIA Quadro FX GPUs in 50 Quadro Plex S4 1U enclosures, 3.2 TB RAM
Calit2/SDSC: OptIPortal1 (visualization) - 20 30” (2560 x 1600 pixel) LCD panels, 10 NVIDIA Quadro FX 4600 graphics cards, > 80 megapixels, 10 Gb/s network throughout
NICS (NSF TeraGrid): Kraken Cray XT5 (simulation) - 8,256 Compute Nodes, 99,072 Compute Cores, 129 TB RAM
ESnet: Science Data Network (SDN) - > 10 Gb/s fiber optic network, Dynamic VLANs configured using OSCARS
Challenge: Kraken is not on ESnet
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
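As a quick cross-check of the hardware numbers above, the short Python sketch below recomputes the aggregate figures from the per-unit specs on this slide (the per-server RAM matches the Eureka node description on slide 12 further down); it also shows why the display total reads as megapixels rather than gigapixels.

```python
# Recompute the aggregate display and memory figures from the per-unit specs
# listed on this slide; values are taken from the slide text above.

panels = 20
panel_res = (2560, 1600)               # pixels per 30" LCD panel
total_pixels = panels * panel_res[0] * panel_res[1]
print(f"OptIPortal1: {total_pixels / 1e6:.1f} megapixels")   # ~81.9, i.e. "> 80 megapixels"

eureka_servers = 100
ram_per_server_gb = 32                 # per-node RAM, consistent with slide 12
print(f"Eureka aggregate RAM: {eureka_servers * ram_per_server_gb / 1000:.1f} TB")  # 3.2 TB

gpus_per_enclosure = 200 // 50         # 200 GPUs housed in 50 Quadro Plex S4 enclosures
print(f"GPUs per Quadro Plex S4 enclosure: {gpus_per_enclosure}")
```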
9. StarGate Streaming Rendering (ALCF to SDSC over ESnet)
1. The simulation volume is rendered using vl3, a parallel (MPI) volume renderer utilizing Eureka's GPUs; the rendering changes views steadily to highlight 3D structure.
2. The full image is broken into subsets (tiles), and the tiles are continuously encoded as separate movies.
3. A media bridge at the border (gs1.intrepid.alcf.anl.gov, on the ALCF internal network) provides secure access to the parallel rendering streams.
4. flPy, a parallel (MPI) tiled image/movie viewer at SDSC, composites the individual movies and synchronizes the movie playback across the OptIPortal rendering nodes.
5. Updated instructions are sent back to the renderer to change views, or load a different dataset.
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
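The tiling step in this pipeline can be made concrete with a small sketch. The code below is illustrative only: it splits a full rendered frame into per-panel tiles the way a tiled viewer conceptually would, assuming a hypothetical 5 x 4 layout for the 20-panel OptIPortal; it is not the actual vl3/flPy code.

```python
# Illustrative sketch (not the actual vl3/flPy code) of the tiling step:
# split a full rendered frame into per-panel tiles, each of which would be
# encoded and streamed as its own movie to one OptIPortal display node.
# The 5 x 4 panel layout is an assumption used only for illustration.

from typing import List, Tuple

def tile_frame(frame_w: int, frame_h: int, cols: int, rows: int) -> List[Tuple[int, int, int, int]]:
    """Return one (x, y, width, height) rectangle per display panel."""
    tile_w, tile_h = frame_w // cols, frame_h // rows
    return [(c * tile_w, r * tile_h, tile_w, tile_h)
            for r in range(rows) for c in range(cols)]

# 20 panels at 2560 x 1600 each, assumed arranged 5 wide by 4 high.
tiles = tile_frame(frame_w=5 * 2560, frame_h=4 * 1600, cols=5, rows=4)
print(len(tiles), "tiles; first tile rectangle:", tiles[0])
```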
10. Test animation of 1/64 of the data volume (1024³ region)
www.mcs.anl.gov/~insley/ENZO/BAO/B4096/enzo-b4096-1024subregion-test.mov
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
11. Data Moved ORNL to ANL
Data transfer nodes: 577 time steps, ~148 TB
Peak bandwidth ~2.4 Gb/s, disk to disk
GridFTP: Multiple Simultaneous Transfers, Each with Multiple TCP Connections
Average Aggregate Bandwidth < 800 Mb/s, Using Multiple Transfers
Pre-Transfer: Data was Stored in ORNL HPSS, Had to be Staged to Disk on Data Transfer Nodes; Once Moved to HPSS Partition, Can't Move Data Back
Post-Transfer: Each Time Step was a Tar File, Had to be Untarred
Moving Forward, will Need Direct High-Bandwidth Path from Kraken (NICS) to Eureka (ALCF)
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
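To make the transfer pattern on this slide concrete, here is a minimal sketch, assuming hypothetical data-transfer-node hostnames, paths, and file names, and commonly used globus-url-copy options for parallel streams; it is not the project's actual transfer script. It keeps a few GridFTP transfers in flight at once, each with multiple TCP streams, then untars each time-step archive after it arrives.

```python
# Minimal sketch (not the project's actual scripts) of the transfer pattern on
# this slide: several simultaneous GridFTP transfers, each using multiple
# parallel TCP streams, followed by untarring each time-step tar file.
# Hostnames, paths, file names, and flag choices here are assumptions.

import pathlib
import subprocess
import tarfile

SRC = "gsiftp://dtn.example-ornl.gov/staged/"     # hypothetical ORNL data transfer node
DST = "gsiftp://dtn.example-alcf.gov/enzo/"       # hypothetical ANL data transfer node
LOCAL_DIR = pathlib.Path("/enzo")                 # where DST lands on the ALCF side

def start_transfer(name: str) -> subprocess.Popen:
    # -p 8: eight parallel TCP streams per file; -fast: reuse data channels.
    return subprocess.Popen(["globus-url-copy", "-fast", "-p", "8", SRC + name, DST + name])

timesteps = [f"timestep_{n:04d}.tar" for n in range(577)]   # 577 time steps, ~148 TB total

# Keep four transfers in flight at once to help fill the 10 Gb/s path.
for i in range(0, len(timesteps), 4):
    batch = [start_transfer(name) for name in timesteps[i:i + 4]]
    for proc in batch:
        proc.wait()

# Post-transfer: each time step arrived as a tar file and had to be untarred.
for name in timesteps:
    archive = LOCAL_DIR / name
    if archive.exists():
        with tarfile.open(archive) as tf:
            tf.extractall(LOCAL_DIR)
```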
12. ANL Eureka Graphics Cluster
Data Analytics and Visualization Cluster at ALCF
(2) Head Nodes, (100) Compute Nodes
Each node: (2) Nvidia Quadro FX5600 Graphics Cards, (2) XEON E5405 2.00 GHz Quad Core Processors, 32 GB RAM ((8) 4-Rank 4GB DIMMs), (1) Myricom 10G CX4 NIC, (2) 250GB Local Disks: (1) System, (1) Minimal Scratch
32 GFlops per Server
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
13. Visualization Pipeline
vl3 – Hardware Accelerated Volume Rendering Library: 4096³ Volume on 65 Nodes of Eureka
Enzo Reader can Load from Native HDF5 Format: Uniform Grid and AMR, Resampled to Uniform Grid
Locally Run Interactively on Subset of Data: On a Local Workstation, 512³ Subvolume
Batch for Generating Animations on Eureka
Working Toward Remote Display and Control
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
14. vl3 Rendering Performance on Eureka
Image Size: 4096 x 4096; Number of Samples: 4096; Note the Data I/O Bottleneck
Data Size | Number of Processors/Graphics Cards | Load Time | Render/Composite Time
2048³ | 17 | 2 min 27 sec | 9.22 sec
4096³ | 129 | 5 min 10 sec | 4.51 sec
6400³ (AMR) | 129 | 4 min 17 sec | 13.42 sec
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
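The load times in this table are what drive the "Data I/O Bottleneck" note. A back-of-the-envelope calculation, assuming one byte per voxel (the slide does not state the datatype), gives the effective read rate during the 4096³ load:

```python
# Back-of-the-envelope effective load rate for the 4096^3 row, assuming
# one byte per voxel (the slide does not state the datatype).
voxels = 4096 ** 3                 # ~6.9e10 voxels
bytes_loaded = voxels * 1          # ~68.7 GB under the 1-byte assumption
load_seconds = 5 * 60 + 10         # 5 min 10 sec from the table
print(f"~{bytes_loaded / load_seconds / 1e6:.0f} MB/s effective load rate")
# Rendering/compositing the same volume takes only 4.51 s, so loading dominates.
```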
15. Next Experiments
SC09: Stream a 4Kx2K Movie From ANL Storage Device to OptIPortable on Show Floor
Mike Norman is a 2009 INCITE Investigator: 6 M SU on Jaguar, Supersonic MHD Turbulence Simulations for Star Formation
Use Similar Data Path for This to Show Replicability
Can DOE Make This New Mode Available to Other Users?
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
Editor's Notes
Eureka, the visualization cluster at ALCF: each node has 2 graphics cards, 8 processor cores, 32 GB RAM, a fast interconnect, and local disk. Server FLOPS = 2.0 GHz * 8 cores * 2 flops per clock = 32 GFLOPS.
One of vl3's strengths is its speed and its ability to handle large data sets. The number of processors used is a power of 2 for rendering plus 1 for compositing (e.g., 16 + 1 = 17 and 128 + 1 = 129 in the table above). With 2 graphics cards per node, half as many nodes as processors are needed. Data I/O is clearly the bottleneck; when generating an animation of a single time step, the data is only loaded once, so it can be fairly quick.
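The arithmetic in these notes can be checked directly against the slide numbers; a minimal sketch (the node counts it prints follow from the 2-GPUs-per-node rule stated above):

```python
# Check the two rules of thumb in these notes against the slide numbers.

# Per-server peak: 2.0 GHz * 8 cores * 2 flops per clock = 32 GFLOPS.
ghz, cores, flops_per_clock = 2.0, 8, 2
print(f"{ghz * cores * flops_per_clock:.0f} GFLOPS per server")

# Processor counts in the vl3 table: a power of 2 for rendering, plus 1 for compositing.
for procs in (17, 129):
    renderers = procs - 1
    assert renderers & (renderers - 1) == 0      # renderer count is a power of two
    # With 2 graphics cards per node, half as many nodes as renderers are needed.
    print(f"{procs} processes = {renderers} renderers + 1 compositor (~{renderers // 2} nodes)")
```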