The document provides an overview of the Pacific Research Platform (PRP), its role in connecting researchers across institutions, and the new applications it enables. It summarizes the PRP's key components, including Science DMZs, Data Transfer Nodes (FIONAs), and Kubernetes for container management. Several examples show how the PRP facilitates high-performance distributed data analysis, access to remote supercomputers, and sensor networks coupled to real-time computing. Upcoming work on machine learning applications and on expanding the PRP internationally is also outlined.
The Pacific Research Platform: Building a Distributed Big-Data Machine-Learni... - Larry Smarr
The Pacific Research Platform (PRP) is a distributed big data and machine learning cyberinfrastructure connecting researchers across multiple UC campuses. It was established in 2015 with NSF funding and has since expanded to include other California universities and national/international partners. The PRP provides high-speed networks, storage, and computing resources like GPUs. It has enabled new data-intensive collaborations and significantly accelerated research workflows. The PRP also supports educational initiatives, providing computing resources for data science courses impacting thousands of students.
The Pacific Research Platform: a Science-Driven Big-Data Freeway System - Larry Smarr
The Pacific Research Platform (PRP) is a multi-institutional partnership that establishes a high-capacity "big data freeway system" spanning the University of California campuses and other research universities in California to facilitate rapid data access and sharing between researchers and institutions. Fifteen multi-campus application teams in fields like particle physics, astronomy, earth sciences, biomedicine, and visualization drive the technical design of the PRP over five years. The goal of the PRP is to extend campus "Science DMZ" networks to allow high-speed data movement between research labs, supercomputer centers, and data repositories across campus, regional, and national networks.
The document discusses the Pacific Research Platform (PRP), a distributed cyberinfrastructure that connects researchers and data across multiple campuses in California and beyond using optical fiber networking. Key points:
- The PRP uses high-speed networking infrastructure like the CENIC network to connect data generators and consumers across 15+ campuses, creating an integrated "big data freeway system".
- It deploys specialized data transfer nodes called FIONAs to enable high-speed transfer of large datasets between sites at near the full network speed.
- Recent additions include using Kubernetes to orchestrate containers across the PRP infrastructure and integrating machine learning resources through the CHASE-CI grant to support data-intensive AI applications.
The Pacific Research Platform: Building a Distributed Big-Data Machine-Learni... - Larry Smarr
The document summarizes the Pacific Research Platform (PRP) which connects researchers across multiple universities with high-speed networks and computing resources for big data and machine learning applications. Key points:
- PRP connects 15 universities with optical networks, distributed storage devices (FIONAs), and over 350 GPUs for data analysis and AI training.
- It allows researchers to rapidly share and analyze large datasets, with one example reducing a workflow from 19 days to 52 minutes.
- Other projects using PRP resources include climate modeling, astrophysics simulations, and machine learning courses involving thousands of students.
Internet & Climate Change: Cyberinfrastructure for a Carbon-Constrained World - Larry Smarr
- The internet and information and communication technologies (ICT) can play a key role in addressing climate change by enabling efficiency gains across multiple sectors, cutting greenhouse gas emissions by up to 5 times ICT's own carbon footprint.
- University campuses can serve as living laboratories for testing green ICT solutions and infrastructure to reduce emissions from buildings, transportation, electricity generation and distribution.
- Advances in machine learning and brain-inspired computing will be necessary to develop low-power exascale supercomputers needed to fully model and simulate climate systems.
National Federated Compute Platforms: The Pacific Research Platform - Larry Smarr
The Pacific Research Platform (PRP) is a multi-institution hypercluster that connects science DMZs across 25 partner campuses using FIONA data transfer nodes and 10-100Gbps networks. PRP adopted Kubernetes and Rook to orchestrate petabytes of distributed storage and GPUs for data science applications. A CHASE-CI grant added machine learning capabilities. PRP is working to federate with the Open Science Grid and become a prototype for a future National Research Platform connecting regional networks.
High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Science and Engineering - Larry Smarr
11.03.28
Remote Luncheon Presentation from Calit2@UCSD
National Science Board
Expert Panel Discussion on Data Policies
National Science Foundation
Title: High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Science and Engineering
Arlington, Virginia
An Integrated Science Cyberinfrastructure for Data-Intensive Research - Larry Smarr
This document summarizes Dr. Larry Smarr's vision for an integrated science cyberinfrastructure to support data-intensive research. It discusses the exponential growth of digital data and need for dedicated high-bandwidth networks and data repositories. Specific examples are provided of initiatives at UCSD, regional optical networks connecting research institutions, and national projects like the Open Science Grid and Cancer Genomics Hub that are creating cyberinfrastructure to enable data-intensive scientific discovery.
The Pacific Research Platform: a Science-Driven Big-Data Freeway System - Larry Smarr
The Pacific Research Platform will create a regional "Big Data Freeway System" along the West Coast to support science. It will connect major research institutions with high-speed optical networks, allowing them to share vast amounts of data and computational resources. This will enable new forms of collaborative, data-intensive research for fields like particle physics, astronomy, biomedicine, and earth sciences. The first phase aims to establish a basic networked infrastructure, with later phases advancing capabilities to 100Gbps and beyond with security and distributed technologies.
The document discusses the growing carbon footprint of information and communication technologies (ICT) and efforts to make cyberinfrastructure more energy efficient and environmentally sustainable. Specifically, it mentions that (1) ICT energy usage is growing rapidly and accounts for 2% of global greenhouse gas emissions, (2) universities are working on initiatives like the GreenLight project to reduce ICT energy usage through techniques like dynamic power management, and (3) further research is needed to develop more energy-efficient computing technologies, data center designs, and videoconferencing solutions to reduce the need for travel.
Toward a Global Interactive Earth Observing Cyberinfrastructure - Larry Smarr
The document discusses the need for a new generation of cyberinfrastructure to support interactive global earth observation. It outlines several prototyping projects that are building examples of systems enabling real-time control of remote instruments, remote data access and analysis. These projects are driving the development of an emerging cyber-architecture using web and grid services to link distributed data repositories and simulations.
The Pacific Research Platform (PRP) is a multi-institutional cyberinfrastructure project that connects researchers across California and beyond to share large datasets. It spans the 10 University of California campuses, major private research universities, supercomputer centers, and some out-of-state universities. Fifteen multi-campus research teams in fields like physics, astronomy, earth sciences, biomedicine, and multimedia will drive the technical needs of the PRP over five years. The goal is to create a "big data freeway" to allow high-speed sharing of data between research labs, supercomputers, and repositories across multiple networks without performance loss over long distances.
The Jump to Light Speed - Data Intensive Earth Sciences are Leading the Way to the International LambdaGrid - Larry Smarr
05.06.14
Keynote to the 15th Federation of Earth Science Information Partners Assembly Meeting: Linking Data and Information to Decision Makers
Title: The Jump to Light Speed - Data Intensive Earth Sciences are Leading the Way to the International LambdaGrid
San Diego, CA
High Performance Cyberinfrastructure for Data-Intensive Research - Larry Smarr
This document summarizes a lecture given by Dr. Larry Smarr on high performance cyberinfrastructure for data-intensive research. The summary discusses:
1) The need for dedicated high-bandwidth networks separate from the shared internet to enable big data research due to the increasing volume of digital scientific data.
2) Extensions being made to networks like CENIC in California to provide campus "Big Data Freeways" connecting instruments, computing resources, and remote facilities.
3) The use of networks like HPWREN to provide high-performance wireless access for data-intensive applications in rural areas like astronomy, wildfire detection, and more.
Opening Keynote Lecture
15th Annual ON*VECTOR International Photonics Workshop
Calit2’s Qualcomm Institute
University of California, San Diego
February 29, 2016
The document summarizes Dr. Larry Smarr's presentation on the Pacific Research Platform (PRP) and its role in working toward a national research platform. It describes how the PRP has connected research teams and devices across 15 campuses. It also details PRP innovations like Flash I/O Network Appliances (FIONAs) and the use of Kubernetes to manage distributed resources. Finally, it outlines opportunities to further integrate the PRP with the Open Science Grid and expand the platform internationally through partnerships.
- The Pacific Research Platform (PRP) interconnects campus DMZs across multiple institutions to provide high-speed connectivity for data-intensive research.
- The PRP utilizes specialized data transfer nodes called FIONAs that provide disk-to-disk transfer speeds of 10-100Gbps.
- Early applications of the PRP include distributing telescope data between UC campuses, connecting particle physics experiments to computing resources, and enabling real-time wildfire sensor data analysis.
A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Intensive Research - Larry Smarr
11.12.12
Seminar Presentation
Princeton Institute for Computational Science and Engineering (PICSciE)
Princeton University
Title: A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Intensive Research
Princeton, NJ
Towards a High-Performance National Research Platform Enabling Digital Research - Larry Smarr
The document summarizes Dr. Larry Smarr's keynote presentation on enabling a high-performance national research platform. It describes how multi-institutional research increasingly relies on access to large datasets, requiring new cyberinfrastructure. The Pacific Research Platform provides high-bandwidth networking between universities to support research collaborations across disciplines. The next steps involve scaling this model into a national and global platform. The presentation highlights how the PRP enables various scientific applications and drives innovation through improved data transfer capabilities and distributed computing resources.
Creating a Big Data Machine Learning Platform in California - Larry Smarr
Big Data Tech Forum: Big Data Enabling Technologies and Applications
San Diego Chinese American Science and Engineering Association (SDCASEA)
Sanford Consortium
La Jolla, CA
December 2, 2017
High Performance Cyberinfrastructure Enabling Data-Driven Science Supporting Stem Cell Research - Larry Smarr
11.05.13
Invited Presentation
Sanford Consortium for Regenerative Medicine
Salk Institute, La Jolla
Larry Smarr, Calit2 & Phil Papadopoulos, SDSC/Calit2
Title: High Performance Cyberinfrastructure Enabling Data-Driven Science Supporting Stem Cell Research
Analyzing Large Earth Data Sets: New Tools from the OptIPuter and LOOKING Projects - Larry Smarr
The document discusses two projects, OptIPuter and LOOKING, that aim to analyze large earth data sets using optical networking and grid technologies. OptIPuter extends grid middleware to dedicated optical circuits for earth and medical sciences. LOOKING builds on OptIPuter to provide real-time control of ocean observatories through web and grid services integrated over optical networks. Both projects represent efforts to develop cyberinfrastructure for interactive analysis of remote earth science data and instruments.
CHASE-CI: A Distributed Big Data Machine Learning Platform - Larry Smarr
This document summarizes a talk given by Professor Ken Kreutz-Delgado on distributed machine learning platforms and brain-inspired computing. It discusses the Pacific Research Platform (PRP) which connects multiple universities and research institutions. The PRP uses FIONA appliances and Kubernetes to distribute storage and processing. A new NSF grant will add GPUs across 10 campuses for training AI algorithms on big data. The talk envisions connecting the PRP with clouds of GPUs and non-von Neumann processors like IBM's TrueNorth chip. Calit2's Pattern Recognition Lab uses different processors including TrueNorth to explore machine learning algorithms.
The Rise of Supernetwork Data Intensive Computing - Larry Smarr
Invited Remote Lecture to SC21
The International Conference for High Performance Computing, Networking, Storage, and Analysis
St. Louis, Missouri
November 18, 2021
My Remembrances of Mike Norman Over The Last 45 Years - Larry Smarr
Mike Norman has been a leader in computational astrophysics for over 45 years. Some of his influential work includes:
- Cosmic jet simulations in the early 1980s which helped explain phenomena from galactic centers.
- Pioneering the use of adaptive mesh refinement in the 1990s to achieve dynamic load balancing on supercomputers.
- Massive cosmology simulations in the late 2000s with over 100 trillion particles using thousands of processors across multiple supercomputing sites, producing petabytes of data.
- Developing end-to-end workflows in the 2000s to couple supercomputers, high-speed networks, and large visualization systems to enable real-time analysis of extremely large astrophysics simulations.
Metagenics: How Do I Quantify My Body and Try to Improve its Health? June 18, 2019 - Larry Smarr
Larry Smarr discusses quantifying his body and health over time through extensive self-tracking. He measures various biomarkers through regular blood tests and analyzes his gut microbiome by sequencing stool samples. This revealed issues like chronic inflammation and an unhealthy microbiome. Smarr then took steps like a restricted eating window and increasing plant diversity in his diet, which reversed metabolic syndrome issues and correlated with shifts in his microbiome ecology. His goal is to continue precisely measuring factors like toxins, hormones, gut permeability and food/supplement impacts to further optimize his health.
Panel: Reaching More Minority Serving Institutions - Larry Smarr
This document discusses engaging more minority serving institutions (MSIs) in cyberinfrastructure development through regional networks. It provides data showing the importance of MSIs like historically black colleges and universities (HBCUs) in educating underrepresented minority students in STEM fields. Regional networks can help equalize opportunities by assisting MSIs in overcoming barriers to resources through training, networking infrastructure support, and helping institutions obtain necessary staffing and funding. Strategies mentioned include collaborating with MSIs on grants and addressing issues identified in surveys like lack of vision for data use beyond compliance. The goal is to broaden participation in STEAM fields by leveraging the success MSIs have shown in supporting underrepresented students.
Global Network Advancement Group - Next Generation Network-Integrated Systems - Larry Smarr
This document summarizes a presentation on global petascale to exascale workflows for data intensive sciences. It discusses a partnership convened by the GNA-G Data Intensive Sciences Working Group with the mission of meeting challenges faced by data-intensive science programs. Cornerstone concepts that will be demonstrated include integrated network and site resource management, model-driven frameworks for resource orchestration, end-to-end monitoring with machine learning-optimized data transfers, and integrating Qualcomm's GradientGraph with network services to optimize applications and science workflows.
Wireless FasterData and Distributed Open Compute Opportunities and (some) Us... - Larry Smarr
This document discusses opportunities for ESnet to support wireless edge computing through developing a strategy around self-guided field laboratories (SGFL). It outlines several potential science use cases that could benefit from wireless and distributed computing capabilities, both in the short term through technologies like 5G, LoRa and Starlink, and longer term through the vision of automated SGFL. The document proposes some initial ideas for deploying and testing wireless edge computing technologies through existing projects to help enable the SGFL vision and further scientific opportunities. It emphasizes that exploring these emerging areas could help drive new science possibilities if done at a reasonable scale.
The Asia Pacific and Korea Research Platforms: An Overview - Jeonghoon Moon, Larry Smarr
This document provides an overview of Asia Pacific and Korea research platforms. It discusses the Asia Pacific Research Platform working group in APAN, including its objectives to promote HPC ecosystems and engage members. It describes the Asi@Connect project which provides high-capacity internet connectivity for research across Asia-Pacific. It also discusses the Korea Research Platform and efforts to expand it to 25 national research institutes in Korea. New related projects on smart hospitals, agriculture, and environment are mentioned. The conclusion discusses enhancing APAN and the Korea Research Platform and expanding into new areas like disaster and AI education.
Panel: Reaching More Minority Serving Institutions - Larry Smarr
This document discusses engaging more minority serving institutions (MSIs) in the National Research Platform (NRP). It provides data showing that MSIs serve a disproportionate number of underrepresented minority students and are important producers of STEM graduates from these groups. The NRP can help broaden participation in STEAM fields by providing MSIs access to advanced cyberinfrastructure resources, new learning modalities, and opportunities for collaborative research between MSIs and other institutions. Regional networks also have a role to play in helping MSIs overcome barriers and attracting them to collaborative grants. The goal is to tear down walls between research and teaching and reinvent the university experience for more inclusive learning and innovation.
Panel: The Global Research Platform: An Overview - Larry Smarr
The document provides an overview of the Global Research Platform (GRP), an international collaborative partnership creating a distributed environment for data-intensive global science. The GRP facilitates high-performance data gathering, analytics, transport up to terabits per second, computing, and storage to support large-scale global science cyberinfrastructure ecosystems. It aims to orchestrate research across multiple domains using international testbeds for investigating new technologies related to data-intensive science. Examples of instruments generating exabytes of data that would benefit include the Korea Superconducting Tokamak, the High Luminosity LHC, genomics, the SKA radio telescope, and the Vera Rubin Observatory.
PRP, NRP, GRP & the Path Forward
1. “PRP, NRP, GRP,
& the Path Forward”
Presentation
2nd National Research Platform Workshop
Bozeman, MT
August 6, 2018
Dr. Larry Smarr
Director, California Institute for Telecommunications and Information Technology
Harry E. Gruber Professor,
Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD
http://lsmarr.calit2.net
2. ESnet’s ScienceDMZ Accelerates Science Research:
DOE & NSF Partnering on Science Engagement and Technology Adoption
Science DMZ components:
• Data Transfer Nodes (DTN/FIONA)
• Network Architecture (zero friction)
• Performance Monitoring (perfSONAR)
ScienceDMZ, coined in 2010 by ESnet (DOE), is the basis of PRP architecture and design
http://fasterdata.es.net/science-dmz/
NSF CC* program (2012+) funded deployment of ScienceDMZs on 200 university campuses
www.nsf.gov/funding/pgm_summ.jsp?pims_id=504748
Slide from Inder Monga, ESnet
See talk by Eli Dart & Deep Dive #2, Tuesday
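To make the "zero friction" goal testable, DMZ engineers routinely measure DTN-to-DTN throughput. Below is a minimal Python sketch of such a check, driving iperf3 and parsing its JSON output; the host name is a hypothetical placeholder, and in production perfSONAR wraps this kind of measurement.

```python
# Minimal sketch: measure DTN-to-DTN TCP throughput with iperf3.
# The target host is a hypothetical placeholder.
import json
import subprocess

def measure_throughput(host: str, streams: int = 4, seconds: int = 10) -> float:
    """Return end-to-end throughput in Gbps from iperf3's JSON report."""
    out = subprocess.run(
        ["iperf3", "-c", host, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9

print(f"{measure_throughput('dtn.example.edu'):.2f} Gbps")
```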
Logical Next Step: The Pacific Research Platform Networks Campus DMZs
to Create a Regional End-to-End Science-Driven “Big Data Superhighway” System
NSF CC*DNI Grant
$5M 10/2015-10/2020
PI: Larry Smarr, UC San Diego Calit2
Co-PIs:
• Camille Crittenden, UC Berkeley CITRIS,
• Tom DeFanti, UC San Diego Calit2/QI,
• Philip Papadopoulos, UCSD SDSC,
• Frank Wuerthwein, UCSD Physics and SDSC
Letters of Commitment from:
• 50 Researchers from 15 Campuses
• 32 IT/Network Organization Leaders
NSF Program Officer: Amy Walton
Source: John Hess, CENIC
4. PRP National-Scale Experimental Distributed Pilot:
Using CENIC & Internet2 to Connect Early-Adopter Quilt Regional R&E Networks
Announced May 8, 2018
Internet2 Global Summit
See NRP Pilot, Monday; Scaling, Tuesday
Map legend: Original PRP (CENIC/PW Link); Extended PRP Testbed (NSF CENIC Link)
5. PRP Science DMZ Data Transfer Nodes (DTNs) -
Flash I/O Network Appliances (FIONAs)
UCSD Designed FIONAs
To Solve the Disk-to-Disk
Data Transfer Problem
at Full Speed
on 10G, 40G and
100G Networks
FIONAs: 10/40G, $8,000
Phil Papadopoulos, SDSC &
Tom DeFanti, Joe Keefe & John Graham, Calit2
FIONette: 1G, $250
Five racked FIONAs at Calit2, each containing:
• Dual 12-core CPUs
• 96GB RAM
• 1TB SSD
• 2 x 10GbE interfaces
• Total ~$10,500; with 8 GPUs, total ~$18,500
Report on 3-day FIONA hands-on workshop for EPSCoR & MSI:
EPSCoR Deep Dive #3, Monday; EPSCoR talk, Tuesday
6. GPN Becomes the First Multi-State Regional Network
to Peer with the PRP
Between the PRP-contributed PWave DTN in Los Angeles and the GPN FIONA at UMC:
Before PRP: 0.8 Gbps; in May, 3.7 Gbps over PRP; now 11 Gbps
Source: John Hess, CENIC and George Rob III, UMissouri
May 30, 2018
See James Deaton
NRP Pilot Monday
7. Game Changer: Using Kubernetes
to Manage Containers Across the PRP
“Kubernetes is a way of stitching together a collection of machines into, basically, a big computer.”
--Craig McLuckie, Google, now CEO and Founder of Heptio
“Everything at Google runs in a container.”
--Joe Beda, Google
“Kubernetes has emerged as the container orchestration engine of choice for many cloud providers, including Google, AWS, Rackspace, and Microsoft, and is now being used in HPC and Science DMZs.”
--John Graham, Calit2/QI, UC San Diego
Amazingly, I didn't mention Kubernetes last year.
Kubernetes Tutorial, Sunday
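To make McLuckie's "big computer" concrete, here is a minimal sketch using the official kubernetes Python client: it lists every node a PRP-style cluster has stitched together, with its schedulable CPU, memory, and GPU capacity. The kubeconfig and federation-wide cluster it assumes are hypothetical.

```python
# Minimal sketch: enumerate the nodes Kubernetes has stitched into one
# cluster, with per-node resource capacity. Assumes the official
# `kubernetes` Python client and a kubeconfig for a hypothetical
# federation-wide cluster.
from kubernetes import client, config

def summarize_cluster_nodes():
    config.load_kube_config()        # reads ~/.kube/config
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        cap = node.status.capacity   # dict of schedulable resources
        print(f"{node.metadata.name}: "
              f"cpu={cap.get('cpu')}, mem={cap.get('memory')}, "
              f"gpus={cap.get('nvidia.com/gpu', '0')}")

if __name__ == "__main__":
    summarize_cluster_nodes()
```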
8. Rook Runs Ceph Cloud-Native Object Storage
‘Inside’ Kubernetes
https://rook.io/
Source: John Graham, Calit2/QI
Kubernetes
Tutorial
Sunday
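As a sketch of what object storage "inside" Kubernetes buys an application team: Ceph's gateway speaks S3- and Swift-compatible APIs, so staging data in and out of the Rook-managed store looks like ordinary object-store calls. The endpoint, bucket, file names, and credentials below are hypothetical placeholders.

```python
# Minimal sketch: stage data in and out of a Rook/Ceph object store via
# its S3-compatible gateway. Endpoint, bucket, and credentials are
# hypothetical placeholders; Ceph also exposes a Swift-compatible API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ceph-gateway.example.edu",  # assumed RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Post a dataset into the distributed store...
s3.upload_file("sim_output.h5", "staging-bucket", "runs/sim_output.h5")

# ...and pull it back from any other PRP site.
s3.download_file("staging-bucket", "runs/sim_output.h5", "local_copy.h5")
```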
9. PRP is Deploying Distributed Petabytes of Storage for Posting/Staging Data
at $10/TB per Year by Leveraging our Base of Installed FIONAs
July 2018, John Graham, UCSD
[Map: FIONA deployments across PRP sites, including Calit2/QI/SIO, SDSC, SDSU, Caltech, UCAR, Calit2/UCI, UCR, USC, UCLA, Stanford U, UCSB, UCSC, and U Hawaii; configurations range from $1K 10G FIONAs and >50 FIONA2s to 40G/160TB and 2x40G/160TB HPWREN nodes, 100G NVMe 6.4TB nodes, multiple FIONA8s, 2 FIONA4s, and an sdx-controller]
Stack: Kubernetes on CentOS 7; Rook/Ceph for block/object/filesystem storage; Swift API compatible with SDSC, AWS, and Rackspace
Alex Szalay, Deep Dive #4, Monday; Rob Gardner, Tuesday; Dima Mishin, Sunday
10. Operational Metrics: Containerized Trace Route Tool
Allows Realtime Visualization of Status of Network Links
All Kubernetes Nodes on PRP
Source: Dmitry Mishin (SDSC), John Graham (Calit2)
This node graph shows UCR as the source of the flow to the mesh.
11. Operational Metrics: Containerized perfSONAR MaDDash Dashboards
For Realtime Measurements of PRP Number of Paths and Packet Loss
Source: Dmitry Mishin (SDSC), John Graham (Calit2)
12. Quilt Members Have Built
Their Own perfSONAR MaDDash Inspired by PRP
http://quiltmesh.onenet.net/maddash-webui/
Source: Jen Leasure, Quilt
Aug. 4, 2018
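MaDDash exposes the grid data behind these dashboards through a REST interface, so health checks can be scripted as well as browsed. A minimal sketch, assuming the standard /maddash REST root on the Quilt host named above; the exact JSON layout may vary by version, so the code reads it defensively.

```python
# Minimal sketch: list the measurement grids a MaDDash server publishes
# over its REST interface. Host is the Quilt instance from the slide;
# the /maddash root and response shape are assumptions about a standard
# MaDDash deployment.
import requests

BASE = "http://quiltmesh.onenet.net/maddash"

resp = requests.get(f"{BASE}/grids", timeout=30)
resp.raise_for_status()
data = resp.json()
grids = data.get("grids", data) if isinstance(data, dict) else data
for grid in grids:
    print(grid.get("name"), grid.get("uri"))
```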
13. Expanding to the Global Research Platform (GRP)
Via CENIC/Pacific Wave, Internet2, and International Links
PRP's current international partners, reached via PRP/CENIC/PW: Netherlands, Guam, Australia, Korea, Japan, Singapore
Korea shows distance is not the barrier to above 5 Gb/s disk-to-disk performance
International-Scale Measurement Technologies/Techniques, Tuesday
14. PRP’s First 2.5 Years:
Connecting Multi-Campus Application Teams and Devices
[Figure: multi-campus application teams and devices, including Earth Sciences]
See following panel: Science Drivers for NRP
15. PRP Science Application Class #1:
Providing High Performance Access to Distributed Data Analysis
16. Data Transfer Rates From 40 Gbps DTN in UCSD Physics Building,
Across Campus on PRISM DMZ, Then to Chicago’s Fermilab Over CENIC/ESnet
Based on This Success,
Würthwein Will Upgrade 40G DTN to 100G
For Bandwidth Tests & Kubernetes Integration
With OSG, Caltech, and UCSC
Source: Frank Würthwein, OSG, UCSD/SDSC, PRP
17. PRP Distributed Tier-2 Cache
Across Caltech & UCSD - Thousands of Flows Sustaining >10Gbps!
[Diagram: at each of UCSD and Caltech, a redirector fronts multiple cache servers; a top-level redirector joins both caches into the Global Data Federation of CMS]
Provisioned pilot systems:
• PRP UCSD: 9 systems, each with 12 x 2TB SATA disks @ 10Gbps per system
• PRP Caltech: 2 systems, each with 30 x 6TB SATA disks @ 40Gbps per system
Source: Frank Würthwein, OSG, UCSD/SDSC, PRP; Harvey Newman, Caltech
18. Collaboration Opportunity with OSG/PRP/I2
on Distributed Storage
[Chart: total data volume pulled last year is dominated by 4 caches, at 1.8 PB, 1.6 PB, 1.2 PB, and 210 TB]
OSG is operating a distributed caching CI; at present, 4 caches provide significant use.
PRP Kubernetes infrastructure could either grow existing caches by adding servers, or add additional locations.
StashCache Users include:
LIGO
DES
Source: Frank Würthwein, OSG, UCSD/SDSC, PRP
See Talk
on OSG/PRP/I2
Tuesday
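From a user's point of view, reading through StashCache is a one-line operation. A minimal sketch using OSG's stashcp client, with a hypothetical file path; stashcp locates the nearest cache and falls back to the origin on a miss.

```python
# Minimal sketch: fetch a file through OSG's StashCache distributed
# caching layer with the `stashcp` client. The remote path below is a
# hypothetical example.
import subprocess

def stash_fetch(remote_path: str, local_path: str) -> None:
    # stashcp resolves the nearest cache; falls back to origin on a miss
    subprocess.run(["stashcp", remote_path, local_path], check=True)

stash_fetch("/osgconnect/public/someuser/dataset.tar.gz", "dataset.tar.gz")
```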
19. PRP Science Application Class #2:
Providing High Performance Access to Remote Supercomputers
20. Distributed Computation on PRP
Coupling SDSU Cluster and SDSC Comet Using Kubernetes Containers
Developed and executed an MPI-based simulation on a PRP Kubernetes cluster
Simulating the injection of CO2 in brine-saturated reservoirs: poroelastic & pressure-velocity fields solved in parallel with MPI, using domain decomposition across containers
• Domain: 0.5 km x 0.5 km x 17.5 m
• Three sandstone layers separated by two shale layers
[Figure: [CO2,aq] concentration at 25, 75, and 100 years of the 100-year simulation; run time 4 days]
Source: Chris Paolini and Jose Castillo, SDSU
See Talk by
Chris Paolini
Sunday
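A minimal mpi4py sketch of the domain-decomposition pattern the slide describes: each MPI rank (one per container) owns a slab of the reservoir grid and exchanges ghost-cell values with its neighbors every timestep. The grid size, field, and update rule are illustrative stand-ins, not the SDSU code.

```python
# Minimal sketch: 1-D domain decomposition with halo exchange, the MPI
# pattern used to split a reservoir grid across containers. Grid sizes,
# the field, and the update rule are illustrative placeholders.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

NZ = 64                    # illustrative cells per rank
field = np.zeros(NZ + 2)   # local slab plus two ghost cells

for step in range(100):
    up, down = rank - 1, rank + 1
    # exchange halo (ghost) cells with neighboring ranks
    if up >= 0:
        comm.Sendrecv(field[1:2], dest=up, recvbuf=field[0:1], source=up)
    if down < size:
        comm.Sendrecv(field[NZ:NZ+1], dest=down, recvbuf=field[NZ+1:], source=down)
    # illustrative explicit update (stand-in for the pressure solve)
    field[1:NZ+1] += 0.1 * (field[0:NZ] - 2*field[1:NZ+1] + field[2:NZ+2])
```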
21. Speeding Downloads Using 100 Gbps PRP Link Over CENIC
Couples UC Santa Cruz Astrophysics Cluster to LBNL NERSC Supercomputer
CENIC 2018 Innovations in Networking Award for Research Applications
NSF-funded cyberengineer Shaw Dong @UCSC receiving a FIONA, Feb 7, 2017
22. The Great Plains Network
Has Many Campuses With Active Projects at SDSC
GPN Map Source: James Deaton, GPN; Shawn Strande, SDSC
23. PRP Science Application Class #3:
Providing High Perf. Access to SensorNets Coupled to Realtime Computing
24. New PRP Application: Coupling Wireless Wildfire Sensors to Computing
Church Fire, San Diego, CA, October 21, 2017: Alert SD&E Cameras / HPWREN
Thomas Fire, Ventura, CA, December 10, 2017: Firemap Tool, WIFIRE
CENIC 2018 Innovations in Networking Award for Experimental Applications
See HPWREN Deep Dive #1, Tuesday
25. Once a Wildfire is Spotted, PRP Brings High-Resolution Weather Data
to Fire Modeling Workflows in WIFIRE
Workflow: real-time meteorological sensors, weather forecasts, and landscape data flow over the PRP into the WIFIRE Firemap workflow, producing fire perimeters
Source: Ilkay Altintas, SDSC
26. Fiber Optic Network Streams Images From
UC San Diego Jaffe Lab (SIO) Scripps Plankton Microscope Camera
27. Over 1 Billion Images So Far!
Requires Machine Learning for Automated Image Analysis and Classification
Phytoplankton: Diatoms
Zooplankton: Copepods
Zooplankton: Larvaceans
Source: Jules Jaffe, SIO
“We are using the FIONAs for image processing... this includes doing Particle Tracking Velocimetry that is very computationally intense.” --Jules Jaffe
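Classifying a billion-image archive is exactly the workload the PRP's GPU nodes target. Below is a minimal PyTorch sketch of a small CNN labeling plankton images with the three classes shown on the slide; the architecture, input size, and data are illustrative, not the Jaffe Lab's actual model.

```python
# Minimal sketch: a small CNN for automated plankton image
# classification. Class list matches the slide; architecture, 64x64
# grayscale input, and random data are illustrative placeholders.
import torch
import torch.nn as nn

CLASSES = ["diatom", "copepod", "larvacean"]

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(CLASSES)),  # assumes 64x64 grayscale input
)

batch = torch.randn(8, 1, 64, 64)           # stand-in for microscope frames
logits = model(batch)
print(logits.argmax(dim=1))                 # predicted class indices
```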
28. Adding Machine Learning to PRP:
Left & Right Brain Computing: Arithmetic vs. Pattern Recognition
Adapted from D-Wave
29. New NSF CHASE-CI Grant Creates a Community Cyberinfrastructure:
Adding a Machine Learning Layer Built on Top of the Pacific Research Platform
Campuses: Caltech, UCB, UCI, UCR, UCSD, UCSC, Stanford, MSU, UCM, SDSU
NSF grant for a high-speed "cloud" of 256 GPUs, for 30 ML faculty & their students at 10 campuses, for training AI algorithms on big data
See Venkat Vishwanath, Deep Dive #4, Tuesday
30. FIONA8: Adding GPUs to FIONAs
Supports Data Science Machine Learning
Multi-tenant containerized GPU JupyterHub running Kubernetes / CoreOS
Eight Nvidia GTX-1080 Ti GPUs, 32GB RAM, 3TB SSD, 40G & dual 10G ports: ~$13K
Source: John Graham, Calit2
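On a multi-tenant FIONA8, the first cell of a student's notebook typically verifies which of the node's GPUs Kubernetes has handed the container. A minimal sketch, assuming PyTorch with CUDA support is installed in the image:

```python
# Minimal sketch: enumerate the GPUs visible inside this container.
# Assumes a PyTorch build with CUDA support in the notebook image.
import torch

print("CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB")
```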
31. UCSD Adding >350 Game GPUs to Data Sciences Cyberinfrastructure -
Devoted to Data Analytics and Machine Learning
• CHASE-CI grant provides 96 GPUs at UCSD for training AI algorithms on big data
• 48 GPUs for OSG applications
• SunCAVE: 70 GPUs
• WAVE + Vroom: 48 GPUs
• FIONAs with 8 game GPUs: 95 GPUs for students
• Plus 288 64-bit GPUs on SDSC's Comet
32. Next Step: Using Kubernetes to Surround the PRP Machine Learning Platform
With Clouds of CPUs, GPUs and Non-Von Neumann Processors
[Figure: CHASE-CI surrounded by clouds of CPUs, 64-bit GPUs (e.g. 4352x NVIDIA Tesla V100 GPUs), and a 64-TrueNorth cluster]
See talks by NSF clouds, Google, Amazon
Microsoft installs Altera FPGAs into Bing servers & 384 into TACC for academic access
33. Calit2 Has Established Labs On Both UC San Diego and UC Irvine Campuses
For Exploring Machine Learning on von Neumann and NvN Processors
Charless Fowlkes, Director
Ken Kreutz-Delgado, Director
34. Our Support:
• US National Science Foundation (NSF) awards
CNS-0821155, CNS-1338192, CNS-1456638, CNS-1730158,
ACI-1540112, & ACI-1541349
• University of California Office of the President CIO
• UCSD Chancellor’s Integrated Digital Infrastructure Program
• UCSD Next Generation Networking initiative
• Calit2 and Calit2 Qualcomm Institute
• CENIC, PacificWave and StarLight
• DOE ESnet