- The Pacific Research Platform (PRP) interconnects campus DMZs across multiple institutions to provide high-speed connectivity for data-intensive research.
- The PRP utilizes specialized data transfer nodes called FIONAs that provide disk-to-disk transfer speeds of 10–100 Gbps.
- Early applications of the PRP include distributing telescope data between UC campuses, connecting particle physics experiments to computing resources, and enabling real-time wildfire sensor data analysis.
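The 10–100 Gbps figure above translates into concrete transfer times. As a back-of-the-envelope sketch (my own illustration, not from the talks; it assumes the link is the only bottleneck and uses 1 TB = 10^12 bytes):

```python
def transfer_seconds(size_bytes, line_rate_gbps):
    """Idealized time to move size_bytes over a line_rate_gbps link,
    ignoring protocol overhead, disk limits, and congestion."""
    return (size_bytes * 8) / (line_rate_gbps * 1e9)

one_tb = 1e12  # bytes
print(transfer_seconds(one_tb, 10))   # 800 s, roughly 13 minutes at 10 Gbps
print(transfer_seconds(one_tb, 100))  # 80 s at 100 Gbps
```

In practice, disk speed and protocol tuning matter as much as line rate, which is why the PRP pairs its links with purpose-built FIONA transfer nodes.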
Towards a High-Performance National Research Platform Enabling Digital Research - Larry Smarr
The document summarizes Dr. Larry Smarr's keynote presentation on enabling a high-performance national research platform. It describes how multi-institutional research increasingly relies on access to large datasets, requiring new cyberinfrastructure. The Pacific Research Platform provides high-bandwidth networking between universities to support research collaborations across disciplines. The next steps involve scaling this model into a national and global platform. The presentation highlights how the PRP enables various scientific applications and drives innovation through improved data transfer capabilities and distributed computing resources.
Peering The Pacific Research Platform With The Great Plains Network - Larry Smarr
The Pacific Research Platform (PRP) connects research institutions across the western United States with high-speed networks to enable data-intensive science collaborations. Key points:
- The PRP connects 15 campuses across California and links to the Great Plains Network, allowing researchers to access remote supercomputers, share large datasets, and collaborate on projects like analyzing data from the Large Hadron Collider.
- The PRP utilizes Science DMZ architectures with dedicated data transfer nodes called FIONAs to achieve high-speed transfer of large files. Kubernetes is used to manage distributed storage and computing resources.
- Early applications include distributed climate modeling, wildfire science, plankton imaging, and cancer genomics.
The Pacific Research Platform (PRP) is a multi-institutional cyberinfrastructure project that connects researchers across California and beyond to share large datasets. It spans the 10 University of California campuses, major private research universities, supercomputer centers, and some out-of-state universities. Fifteen multi-campus research teams in fields like physics, astronomy, earth sciences, biomedicine, and multimedia will drive the technical needs of the PRP over five years. The goal is to create a "big data freeway" to allow high-speed sharing of data between research labs, supercomputers, and repositories across multiple networks without performance loss over long distances.
The Pacific Research Platform: Building a Distributed Big-Data Machine-Learni... - Larry Smarr
The Pacific Research Platform (PRP) is a distributed big data and machine learning cyberinfrastructure connecting researchers across multiple UC campuses. It was established in 2015 with NSF funding and has since expanded to include other California universities and national/international partners. The PRP provides high-speed networks, storage, and computing resources like GPUs. It has enabled new data-intensive collaborations and significantly accelerated research workflows. The PRP also supports educational initiatives, providing computing resources for data science courses impacting thousands of students.
The Pacific Research Platform: Building a Distributed Big Data Machine Learni... - Larry Smarr
This document summarizes Dr. Larry Smarr's invited talk about the Pacific Research Platform (PRP) given at the San Diego Supercomputer Center in April 2019. The PRP is building a distributed big data machine learning supercomputer by connecting high-performance computing and data resources across multiple universities in California and beyond using high-speed networks. It provides researchers with petascale computing power, distributed storage, and tools like Kubernetes to enable collaborative data-intensive science across institutions.
The Pacific Research Platform: a Science-Driven Big-Data Freeway System - Larry Smarr
The Pacific Research Platform (PRP) is a multi-institutional partnership that establishes a high-capacity "big data freeway system" spanning the University of California campuses and other research universities in California to facilitate rapid data access and sharing between researchers and institutions. Fifteen multi-campus application teams in fields like particle physics, astronomy, earth sciences, biomedicine, and visualization drive the technical design of the PRP over five years. The goal of the PRP is to extend campus "Science DMZ" networks to allow high-speed data movement between research labs, supercomputer centers, and data repositories across campus, regional, and national networks.
Opening Keynote Lecture
15th Annual ON*VECTOR International Photonics Workshop
Calit2’s Qualcomm Institute
University of California, San Diego
February 29, 2016
This document discusses several projects related to connecting research institutions through high-speed networks:
1) The Pacific Research Platform connects campuses in California through a "big data superhighway" funded by NSF from 2015-2020.
2) CHASE-CI adds machine learning capabilities for researchers across 10 campuses in California using NSF-funded GPU resources.
3) A pilot project is using CENIC and Internet2 to connect regional research networks on a national scale, funded by NSF from 2018-2019.
The document discusses the Pacific Research Platform (PRP), a distributed cyberinfrastructure that connects researchers and data across multiple campuses in California and beyond using optical fiber networking. Key points:
- The PRP uses high-speed networking infrastructure like the CENIC network to connect data generators and consumers across 15+ campuses, creating an integrated "big data freeway system".
- It deploys specialized data transfer nodes called FIONAs to enable high-speed transfer of large datasets between sites at near the full network speed.
- Recent additions include using Kubernetes to orchestrate containers across the PRP infrastructure and integrating machine learning resources through the CHASE-CI grant to support data-intensive AI applications.
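The summaries repeatedly mention Kubernetes orchestrating containers across PRP nodes. As a toy illustration of the kind of placement decision such an orchestrator makes (the node names, GPU counts, and selection policy below are invented for illustration; this is not the PRP's actual scheduling logic):

```python
# Hypothetical sketch: place a job on the node that can satisfy its
# GPU request, preferring the node with the most free GPUs.
def pick_node(nodes, gpus_needed):
    """Return the name of the best-fitting node, or None if none fits."""
    candidates = [n for n in nodes if n["gpus_free"] >= gpus_needed]
    if not candidates:
        return None
    return max(candidates, key=lambda n: n["gpus_free"])["name"]

# Invented example inventory of FIONA-style nodes at three campuses.
nodes = [
    {"name": "fiona-ucsd", "gpus_free": 4},
    {"name": "fiona-ucb",  "gpus_free": 1},
    {"name": "fiona-uci",  "gpus_free": 6},
]

print(pick_node(nodes, 2))  # fiona-uci: most free GPUs among nodes that fit
```

Real Kubernetes schedulers weigh many more signals (CPU, memory, affinity, taints), but the core idea of matching container resource requests to distributed node capacity is the same.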
The Pacific Research Platform Enables Distributed Big-Data Machine-Learning - Larry Smarr
The Pacific Research Platform enables distributed big data machine learning by connecting scientific instruments, sensors, and supercomputers across California and the United States with high-speed optical networks. Key components include FIONA data transfer nodes that allow fast disk-to-disk transfers near the theoretical maximum, Kubernetes to orchestrate distributed computing resources, and the Nautilus hypercluster which aggregates thousands of CPU cores and GPUs into a unified platform. This infrastructure has accelerated many scientific workflows and supported cutting-edge research in fields such as astronomy, oceanography, climate science, and particle physics.
The Pacific Research Platform: a Science-Driven Big-Data Freeway System - Larry Smarr
The Pacific Research Platform will create a regional "Big Data Freeway System" along the West Coast to support science. It will connect major research institutions with high-speed optical networks, allowing them to share vast amounts of data and computational resources. This will enable new forms of collaborative, data-intensive research for fields like particle physics, astronomy, biomedicine, and earth sciences. The first phase aims to establish a basic networked infrastructure, with later phases advancing capabilities to 100Gbps and beyond with security and distributed technologies.
Pacific Research Platform Science Drivers - Larry Smarr
The document discusses the vision and progress of the Pacific Research Platform (PRP) in creating a "big data freeway" across the West Coast to enable data-intensive science. It outlines how the PRP builds on previous NSF and DOE networking investments to provide dedicated high-performance computing resources, like GPU clusters and Jupyter hubs, connected by high-speed networks at multiple universities. Several science driver teams are highlighted, including particle physics, astronomy, microbiology, earth sciences, and visualization, that will leverage PRP resources for large-scale collaborative data analysis projects.
Distributed Cyberinfrastructure to Support Big Data Machine Learning - Larry Smarr
Panel on the Future of Machine Learning
California Institute for Telecommunications and Information Technology
University of California, Irvine
May 24, 2018
The document discusses Internet2, an advanced networking consortium that operates a 15,000 mile fiber optic network for research and education. It provides very high speed connectivity and collaboration technologies to facilitate large data sharing and frictionless research. Examples are given of life sciences projects utilizing Internet2's high-speed network for genomic research and agricultural applications involving terabytes of satellite and sensor data. The network is expanding to include cloud computing resources and supercomputing centers to enable global-scale distributed scientific computing and collaboration.
The document summarizes Dr. Larry Smarr's presentation on the Pacific Research Platform (PRP) and its role in working toward a national research platform. It describes how PRP has connected research teams and devices across multiple UC campuses for over 15 years. It also details PRP's innovations like Flash I/O Network Appliances (FIONAs) and use of Kubernetes to manage distributed resources. Finally, it outlines opportunities to further integrate PRP with the Open Science Grid and expand the platform internationally through partnerships.
CHASE-CI: A Distributed Big Data Machine Learning Platform - Larry Smarr
This document summarizes a talk given by Professor Ken Kreutz-Delgado on distributed machine learning platforms and brain-inspired computing. It discusses the Pacific Research Platform (PRP) which connects multiple universities and research institutions. The PRP uses FIONA appliances and Kubernetes to distribute storage and processing. A new NSF grant will add GPUs across 10 campuses for training AI algorithms on big data. The talk envisions connecting the PRP with clouds of GPUs and non-von Neumann processors like IBM's TrueNorth chip. Calit2's Pattern Recognition Lab uses different processors including TrueNorth to explore machine learning algorithms.
Creating a Big Data Machine Learning Platform in California - Larry Smarr
Big Data Tech Forum: Big Data Enabling Technologies and Applications
San Diego Chinese American Science and Engineering Association (SDCASEA)
Sanford Consortium
La Jolla, CA
December 2, 2017
Building the Pacific Research Platform: Supernetworks for Big Data Science - Larry Smarr
The document summarizes Dr. Larry Smarr's presentation on building the Pacific Research Platform (PRP) to enable big data science across research universities on the West Coast. The PRP provides 100-1000 times more bandwidth than today's internet to support research fields from particle physics to climate change. In under 2 years, the prototype PRP has connected researchers and datasets across California through optical networks and is now expanding nationally and globally. The next steps involve adding machine learning capabilities to the PRP through GPU clusters to enable new discoveries from massive datasets.
Machine Learning in Healthcare Diagnostics - Larry Smarr
Machine learning and artificial intelligence are rapidly transforming healthcare and medicine. Advances in genetic sequencing have enabled the mapping of human and microbial genomes at low costs. Researchers are using machine learning to analyze genomic and microbiome data to better understand health and disease. Non-von Neumann brain-inspired computing architectures are being developed for machine learning applications and could accelerate medical research and diagnostics. These technologies may help create personalized health coaching and move medicine from reactive sickcare to proactive healthcare.
Berkeley Cloud Computing Meetup, May 2020 - Larry Smarr
The Pacific Research Platform (PRP) is a high-bandwidth global private "cloud" connected to commercial clouds that provides researchers with distributed computing resources. It links Science DMZs at universities across California and beyond using a high-performance network. The PRP utilizes Data Transfer Nodes called FIONAs to transfer data at near full network speeds. It has adopted Kubernetes to orchestrate software containers across its resources. The PRP provides petabytes of distributed storage and hundreds of GPUs for machine learning. It allows researchers to perform data-intensive science across multiple universities much faster than possible individually.
High Performance Cyberinfrastructure for Data-Intensive Research - Larry Smarr
This document summarizes a lecture given by Dr. Larry Smarr on high performance cyberinfrastructure for data-intensive research. The summary discusses:
1) The need for dedicated high-bandwidth networks separate from the shared internet to enable big data research due to the increasing volume of digital scientific data.
2) Extensions being made to networks like CENIC in California to provide campus "Big Data Freeways" connecting instruments, computing resources, and remote facilities.
3) The use of networks like HPWREN to provide high-performance wireless access for data-intensive applications in rural areas like astronomy, wildfire detection, and more.
Global Research Platforms: Past, Present, Future - Larry Smarr
Looking Back, Looking Forward: NSF CI Funding 1985-2025 - Larry Smarr
This document provides an overview of the development of national research platforms (NRPs) from 1985 to the present, with a focus on the Pacific Research Platform (PRP). It describes the evolution of the PRP from early NSF-funded supercomputing centers to today's distributed cyberinfrastructure utilizing optical networking, containers, Kubernetes, and distributed storage. The PRP now connects over 15 universities across the US and internationally to enable data-intensive science and machine learning applications across multiple domains. Going forward, the document discusses plans to further integrate regional networks and partner with new NSF-funded initiatives to develop the next generation of NRPs through 2025.
A California-Wide Cyberinfrastructure for Data-Intensive Research - Larry Smarr
The document discusses creating a California-wide cyberinfrastructure for data-intensive research. It outlines efforts to connect all UC campuses and other research institutions across California with high-speed optical networks. This would create a "big data plane" to share large datasets. Several campuses have received NSF grants to upgrade their networks and implement Science DMZ architectures with 10-100Gbps connections to CENIC. Connecting these resources would provide researchers access to high-performance computing, large scientific instruments, and datasets. This would support collaborative big data science across disciplines like physics, climate modeling, genomics and microscopy.
The document provides an overview of the Pacific Research Platform (PRP) and discusses its role in connecting researchers across institutions and enabling new applications. It summarizes the PRP's key components like Science DMZs, Data Transfer Nodes (FIONAs), and use of Kubernetes for container management. Several examples are given of how the PRP facilitates high-performance distributed data analysis, access to remote supercomputers, and sensor networks coupled to real-time computing. Upcoming work on machine learning applications and expanding the PRP internationally is also outlined.
The Pacific Research Platform: Building a Distributed Big-Data Machine-Learni... - Larry Smarr
The document summarizes the Pacific Research Platform (PRP) which connects researchers across multiple universities with high-speed networks and computing resources for big data and machine learning applications. Key points:
- PRP connects 15 universities with optical networks, distributed storage devices (FIONAs), and over 350 GPUs for data analysis and AI training.
- It allows researchers to rapidly share and analyze large datasets, with one example reducing a workflow from 19 days to 52 minutes.
- Other projects using PRP resources include climate modeling, astrophysics simulations, and machine learning courses involving thousands of students.
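The 19-days-to-52-minutes example cited above corresponds to a large speedup factor; the arithmetic (my own, based only on the two figures quoted in the summary) is straightforward:

```python
# Speedup implied by the workflow example: 19 days before PRP,
# 52 minutes after.
days_before = 19
minutes_after = 52
minutes_before = days_before * 24 * 60  # 27,360 minutes
speedup = minutes_before / minutes_after
print(round(speedup))  # roughly a 526x speedup
```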
The Pacific Research Platform: A Regional-Scale Big Data Analytics Cyberinfra... - Larry Smarr
The document discusses the Pacific Research Platform (PRP), a regional big data cyberinfrastructure connecting researchers across California universities. PRP provides high-speed networks and data transfer nodes to enable sharing of large datasets for projects like medical imaging, cryo-electron microscopy, and machine learning. Recent grants are expanding PRP to add GPUs and non-von Neumann processors to support these computationally intensive applications.
Similar to Toward a National Research Platform
The Rise of Supernetwork Data Intensive Computing - Larry Smarr
Invited Remote Lecture to SC21
The International Conference for High Performance Computing, Networking, Storage, and Analysis
St. Louis, Missouri
November 18, 2021
My Remembrances of Mike Norman Over The Last 45 Years - Larry Smarr
Mike Norman has been a leader in computational astrophysics for over 45 years. Some of his influential work includes:
- Cosmic jet simulations in the early 1980s which helped explain phenomena from galactic centers.
- Pioneering the use of adaptive mesh refinement in the 1990s to achieve dynamic load balancing on supercomputers.
- Massive cosmology simulations in the late 2000s with over 100 trillion particles using thousands of processors across multiple supercomputing sites, producing petabytes of data.
- Developing end-to-end workflows in the 2000s to couple supercomputers, high-speed networks, and large visualization systems to enable real-time analysis of extremely large astrophysics simulations.
Metagenics: How Do I Quantify My Body and Try to Improve Its Health? June 18, 2019 - Larry Smarr
Larry Smarr discusses quantifying his body and health over time through extensive self-tracking. He measures various biomarkers through regular blood tests and analyzes his gut microbiome by sequencing stool samples. This revealed issues like chronic inflammation and an unhealthy microbiome. Smarr then took steps like a restricted eating window and increasing plant diversity in his diet, which reversed metabolic syndrome issues and correlated with shifts in his microbiome ecology. His goal is to continue precisely measuring factors like toxins, hormones, gut permeability and food/supplement impacts to further optimize his health.
Panel: Reaching More Minority Serving Institutions - Larry Smarr
This document discusses engaging more minority serving institutions (MSIs) in cyberinfrastructure development through regional networks. It provides data showing the importance of MSIs like historically black colleges and universities (HBCUs) in educating underrepresented minority students in STEM fields. Regional networks can help equalize opportunities by assisting MSIs in overcoming barriers to resources through training, networking infrastructure support, and helping institutions obtain necessary staffing and funding. Strategies mentioned include collaborating with MSIs on grants and addressing issues identified in surveys like lack of vision for data use beyond compliance. The goal is to broaden participation in STEAM fields by leveraging the success MSIs have shown in supporting underrepresented students.
Global Network Advancement Group - Next Generation Network-Integrated SystemsLarry Smarr
This document summarizes a presentation on global petascale to exascale workflows for data intensive sciences. It discusses a partnership convened by the GNA-G Data Intensive Sciences Working Group with the mission of meeting challenges faced by data-intensive science programs. Cornerstone concepts that will be demonstrated include integrated network and site resource management, model-driven frameworks for resource orchestration, end-to-end monitoring with machine learning-optimized data transfers, and integrating Qualcomm's GradientGraph with network services to optimize applications and science workflows.
Wireless FasterData and Distributed Open Compute Opportunities and (some) Us...Larry Smarr
This document discusses opportunities for ESnet to support wireless edge computing through developing a strategy around self-guided field laboratories (SGFL). It outlines several potential science use cases that could benefit from wireless and distributed computing capabilities, both in the short term through technologies like 5G, LoRa and Starlink, and longer term through the vision of automated SGFL. The document proposes some initial ideas for deploying and testing wireless edge computing technologies through existing projects to help enable the SGFL vision and further scientific opportunities. It emphasizes that exploring these emerging areas could help drive new science possibilities if done at a reasonable scale.
Prototype Implementation of Non-Volatile Memory Support for RISC-V Keystone E...LenaYu2
Handling confidential information has become an increasingly important concern in many areas of society. However, current computing environments remain vulnerable to various threats and should be treated as untrusted.
Trusted Execution Environments (TEEs) have attracted attention because they can execute a program in a trusted environment constructed on an untrusted platform.
In particular, RISC-V Keystone is an interesting TEE because it is a flexibly customizable and fully open-source platform. On the other hand, like other TEEs, it must delegate I/O processing, such as file accesses, to a host OS, resulting in expensive overhead. To address this problem, we consider byte-addressable non-volatile memory (NVM) modules a useful way to handle persistent data objects for TEEs.
In this paper, we introduce a prototype implementation of NVM support for the Keystone. Additionally, we evaluate it on the Freedom U500 built on a VC707 FPGA dev kit.
https://ken.ieice.org/ken/paper/20210720TC4K/
CYTOCHROME P-450 BASED DRUG INTERACTION.pptxPRAMESHPANWAR1
Cytochrome P450 (CYP) enzymes are a large family of heme-containing enzymes found primarily in the liver. They play a critical role in the metabolism of a wide variety of substances, including drugs, toxins, and endogenous compounds such as hormones and fatty acids. The name "P450" comes from the absorption peak at 450 nm when the enzyme is bound to carbon monoxide. These enzymes facilitate oxidation reactions, which often make substances more water-soluble and easier to excrete from the body.
CYP enzymes are involved in numerous drug interactions due to their ability to metabolize medications. These interactions can lead to altered drug levels, resulting in either reduced efficacy or increased toxicity. Key CYP enzymes include CYP3A4, CYP2D6, CYP2C9, CYP2C19, and CYP1A2, each responsible for the metabolism of different drugs.
In this SlideShare, however, we study only the drug interactions of the cytochrome P450 enzymes.
Understanding the function and interactions of CYP enzymes is essential in pharmacology to ensure safe and effective drug therapy.
It also covers the mechanisms of drug interaction, i.e., enzyme inhibition and enzyme induction, with proper examples, explained in easy language.
I hope you find it useful.
Thank you so much.
Types of Garden (Mughal and Buddhist style)saloniswain225
A garden is a place where flowers bloom and aesthetic elements such as topiary, hedges, and arches are present. A botanical garden, by contrast, is an educational institution for scientific research and for gathering information about different cultures, such as the Hindu, Mughal, and Buddhist styles.
This is a presentation about electrostatic force, a topic from the Class 8 NCERT lesson "Force and Pressure." It has four sections: introduction, types, examples, and demonstration; the demonstration should be done by yourself.
ALTERNATIVE ANIMAL TOXICITY STUDY .pptxSAMIR PANDA
Alternatives to animal testing are the development and implementation of test methods that avoid the use of live animals.
Much of human biochemistry, physiology, pharmacology, endocrinology, and toxicology has been derived from animal models; an estimated 10-100 million animals are used for experimentation each year.
The animals used in experimentation range from zebrafish to primates.
The vast majority of animals are sacrificed at the end of the research programme. The use of animals can be further subdivided according to the degree of suffering:
Minor animal suffering: observing animals in behavioral studies, single blood sampling, immunization without adjuvants, etc.
Moderate animal suffering: repeated blood sampling, recovery from general anesthesia, etc.
ScieNCE grade 08 Lesson 1 and 2 NLC.pptxJoanaBanasen1
Detecting visual-media-borne disinformation: a summary of latest advances at ...VasileiosMezaris
We present very briefly some of the most important and latest (June 2024) advances in detecting visual-media-borne disinformation, based on the research work carried out at the Intelligent Digital Transformation Laboratory (IDT Lab) of CERTH-ITI.
Tackling hard problems: On the evolution of operations researchLaura Albert
As we think about what impact we would like operations research (OR) to have on the world, it can be helpful to look to the past for guidance and inspiration. This talk overviews the early stages of operations research becoming a discipline and academic field of study following World War II. In this talk, I will introduce “fun facts” about OR history, including the piece of OR history that inspired a scene in the film Good Will Hunting. I will also discuss early attempts to define the field of operations research, drawing upon the writings of Philip McCord Morse. The young field of OR experienced some growing pains, when some leaders in the field expressed their concerns about the demise and possible death of OR. Ultimately, OR flourished in the following decades. A theme of the talk is that various efforts taken to tackle hard problems defined the field of OR, opened up fruitful areas for exploration, and guided the evolution of OR.
Tackling hard problems: On the evolution of operations research
Toward a National Research Platform
1. “Toward a National Research Platform”
Invited Presentation
CENIC 2018
Monterey, CA
March 7, 2018
Dr. Larry Smarr
Director, California Institute for Telecommunications and Information Technology
Harry E. Gruber Professor,
Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD
http://lsmarr.calit2.net
1
2. Abstract
Scientific exploration and discovery are enabled by increasingly specialized information technology infrastructure. The
requirements of a wide array of scientific research domains provide challenges that regularly exceed the capabilities of
existing infrastructure, necessitating an orders-of-magnitude capability jump over the commodity Internet and conventional
PCs. The Department of Energy’s (DOE) ESnet DMZ model has been adopted by the National Science Foundation (NSF),
funding five years of Campus Cyberinfrastructure projects at over 100 U.S. campuses to support collaborative, Big Data
science. A large-scale regional prototype of a purpose-built DMZ CI has been created - the Pacific Research Platform
(PRP) - which was funded in 2015 by a five-year $5-million NSF grant. The PRP interconnects campus DMZs and is
driven by direct engagements with sophisticated, cyberinfrastructure-knowledgeable science teams chosen from a wide
range of data-intensive disciplines, including particle physics, astronomy, biomedical sciences, earth sciences, and
scalable data visualization. This partnership of more than 25 institutions, including four National Science Foundation,
Department of Energy, and NASA supercomputer centers, is routinely providing 10-100 Gbps disk-to-disk bandwidth
between researchers located at different campuses, and remote computers, scientific instruments, and data repositories.
The optical network is terminated by PC end-points designed to enable distributed and collaborative Big Data analysis. To
begin a community discussion on how the PRP could be scaled to a National Research Platform (NRP), the First NRP
Workshop was held at Montana State University in August 2017. Its report, generated by the 150 people in attendance, is
on the website pacificresearchplatform.org. In this session, findings and recommendations from the August workshop will
be reviewed, and new recommendations will be elicited from the audience with the intent of accelerating the path forward.
3. Thirty Years After NSF Adopts DOE Supercomputer Center Model
NSF Adopts DOE ESnet’s Science DMZ for High Performance Applications
• A Science DMZ integrates 4 key concepts into a unified whole:
– A network architecture designed for high-performance applications,
with the science network distinct from the general-purpose network
– The use of dedicated systems as data transfer nodes (DTNs)
– Performance measurement and network testing systems that are
regularly used to characterize and troubleshoot the network
– Security policies and enforcement mechanisms that are tailored for
high performance science environments
http://fasterdata.es.net/science-dmz/
Science DMZ
Coined 2010
The DOE ESnet Science DMZ and the NSF “Campus Bridging” Taskforce Report Formed the Basis
for the NSF Campus Cyberinfrastructure Network Infrastructure and Engineering (CC-NIE) Program
“Firewalls in DMZs?”
Monday @ CENIC 2018
“Running perfSONAR
on Routers and Switches”
Monday @ CENIC 2018
4. Based on Community Input and on ESnet’s Science DMZ Concept,
NSF Has Funded Over 100 Campuses to Build DMZs
Red 2012 CC-NIE Awardees
Yellow 2013 CC-NIE Awardees
Green 2014 CC*IIE Awardees
Blue 2015 CC*DNI Awardees
Purple Multiple Time Awardees
Source: NSF
“The Knights of Cyberinfrastructure”
Monday @ CENIC 2018
5. Logical Next Step: The Pacific Research Platform Networks Campus DMZs
to Create a Regional End-to-End Science-Driven “Big Data Superhighway” System
NSF CC*DNI Grant
$5M 10/2015-10/2020
PI: Larry Smarr, UC San Diego Calit2
Co-PIs:
• Camille Crittenden, UC Berkeley CITRIS,
• Tom DeFanti, UC San Diego Calit2/QI,
• Philip Papadopoulos, UCSD SDSC,
• Frank Wuerthwein, UCSD Physics and SDSC
Letters of Commitment from:
• 50 Researchers from 15 Campuses
• 32 IT/Network Organization Leaders
NSF Program Officer: Amy Walton
Source: John Hess, CENIC
6. • FIONA PCs [a.k.a. ESnet DTNs]:
– ~$8,000 Big Data PC with:
– 1 CPU
– 10/40 Gbps Network Interface Cards
– 3 TB SSDs or 100+ TB Disk Drive
– Extensible for Higher Performance to:
– +NVMe SSDs for 100Gbps Disk-to-Disk
– +Up to 8 GPUs [4M GPU Core Hours/Week]
– +Up to 160 TB Disks for Data Posting
– +Up to 38 Intel CPUs
– $700 10Gbps FIONAs Being Tested
• FIONettes are $270 FIONAs
– 1Gbps NIC With USB-3 for Flash Storage or SSD
Big Data Science Data Transfer Nodes (DTNs)-
Flash I/O Network Appliances (FIONAs)
FIONette—1G, $250
Phil Papadopoulos, SDSC &
Tom DeFanti, Joe Keefe & John Graham, Calit2
PRP FIONA Workshop
Sat/Sun @ CENIC 2018
Key Innovation: UCSD Designed FIONAs To Solve the Disk-to-Disk
Data Transfer Problem at Full Speed on 10/40/100G Networks
FIONAS—10/40G, $8,000
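The "4M GPU Core Hours/Week" figure for an 8-GPU FIONA can be sanity-checked with a quick back-of-the-envelope calculation. Note the ~3,000 CUDA cores per GPU below is an assumed figure typical of Pascal-era cards, not a number taken from the slides:

```python
# Rough sanity check of the "4M GPU core-hours per week" claim.
gpus = 8
cuda_cores_per_gpu = 3000   # assumption: typical Pascal-era GPU
hours_per_week = 7 * 24     # 168

core_hours_per_week = gpus * cuda_cores_per_gpu * hours_per_week
print(core_hours_per_week)  # 4032000, i.e. roughly the quoted 4M
```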
7. FIONAs on the PRP and Partners
• ~40 FIONAs are on the PRP as GridFTP (MaDDash) + perfSONAR Systems
– PRP Partners: 10 UCs, Caltech, USC, SDSC, UW, UIC
– Plus U Utah, Montana State, U Chicago, Clemson U, U Hawaii, NCAR, Guam
– Plus Internationals: U Amsterdam, KISTI (Korea)
• Many States and Regionals Building FIONAs and Creating MaDDashes
– FIONA Build Specs on PRP Website
– Weekly Engineering Calls with Notes Going to 60+ Technical Participants
More requests for FIONA Workshops Than We Can Handle:
Indiana U/APAN, FRGP, LEARN (TX), NORDUnet/SURFnet
8. We Measure Disk-to-Disk Throughput with 10GB File Transfer
4 Times Per Day in Both Directions for All PRP Sites
[MaDDash snapshots: from 12 DTNs at the start of monitoring (January 29, 2016)
to 24 DTNs connected at 10-40G (July 21, 2017), 1 ½ years later]
Source: John Graham, Calit2/QI
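A minimal sketch of what such a throughput probe computes. Here a local file copy stands in for the inter-site transfer; the actual PRP measurements run 10GB GridFTP transfers between FIONAs at different campuses and publish results to a MaDDash dashboard:

```python
# Sketch: time a disk-to-disk copy and report bandwidth in Gbps.
import shutil
import time

def measure_gbps(src, dst, size_bytes):
    """Copy src to dst and return the achieved throughput in gigabits/second."""
    t0 = time.monotonic()
    shutil.copyfile(src, dst)
    elapsed = time.monotonic() - t0
    return size_bytes * 8 / elapsed / 1e9

# On the PRP, size_bytes would be the 10 GB test file (10 * 10**9 bytes)
# and src/dst would sit on DTNs at two different sites.
```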
9. We Aggressively Use Kubernetes
to Manage Containers Across the PRP
“Kubernetes is a way of stitching together
a collection of machines into, basically, a big computer.”
--Craig McLuckie, Google,
now CEO and Founder of Heptio
"Everything at Google runs in a container."
--Joe Beda, Google
“Kubernetes has emerged as
the container orchestration engine of choice
for many cloud providers including
Google, AWS, Rackspace, and Microsoft,
and is now being used in HPC and Science DMZs.”
--John Graham, Calit2/QI, UC San Diego
10. Rook is Ceph Cloud-Native Object Storage
‘Inside’ Kubernetes
https://rook.io/
Source: John Graham, Calit2/QI
“The Knights of Cyberinfrastructure”
Monday @ CENIC 2018
11. Nautilus - A Multi-Tenant Containerized PRP HyperCluster for Big Data Applications
Running Kubernetes with Rook/Ceph Cloud Native Storage and GPUs for Machine Learning
[Diagram: FIONA8 nodes and an sdx-controller at Calit2, SDSC, SDSU, Caltech, UCAR, UCI,
UCR, USC, UCLA, Stanford, UCSB, UCSC, and Hawaii,
connected at 40G (SSD) or 100G (NVMe)]
Rook/Ceph - Block/Object/FS
Swift API compatible with SDSC, AWS, and Rackspace
Kubernetes
Centos7
March 2018, John Graham, Calit2/QI
“Nautilus HyperCluster”
Tues @ CENIC 2018
12. Running Kubernetes/Rook/Ceph On PRP
Allows Us to Deploy a Distributed PB+ of Storage for Posting Science Data
[Diagram: FIONA8 nodes at Calit2, SDSC, SDSU, Caltech, UCAR, UCI,
UCR, USC, UCLA, Stanford, UCSB, UCSC, and Hawaii,
with 40G nodes carrying 160TB of disk and 100G nodes carrying 6.4T NVMe]
Rook/Ceph - Block/Object/FS
Swift API compatible with SDSC, AWS and Rackspace
Kubernetes
Centos7
March 2018, John Graham, UCSD
13. Increasing Participation Through
PRP Science Engagement Workshops
Source: Camille Crittenden, UC Berkeley
UC San Diego
UC Merced
UC Davis UC Berkeley
“Scaling Science Engagement”
Wednesday @ CENIC 2018
14. PRP’s First 2 Years:
Connecting Multi-Campus Application Teams and Devices
Earth Sciences
15. Data Transfer Rates From 40 Gbps DTN in UCSD Physics Building,
Across Campus on PRISM DMZ, Then to Chicago’s Fermilab Over CENIC/ESnet
Source: Frank Wuerthwein, UCSD, SDSC
“High-Performance Class Systems
& Large Instruments”
Monday @ CENIC 2018
Based on This Success,
Will Upgrade 40G DTN to 100G
For Bandwidth Tests & Kubernetes
to OSG, Caltech, and UCSC
16. PRP Over CENIC
Couples UC Santa Cruz Astrophysics Cluster to LBNL NERSC Supercomputer
CENIC 2018
Innovations in
Networking
Award for
Research
Applications
17. 100 Gbps FIONA at UCSC Allows for Downloads to the UCSC Hyades Cluster
from the LBNL NERSC Supercomputer for DESI Science Analysis
300 images per night.
100MB per raw image
120GB per night
250 images per night.
530MB per raw image
800GB per night
Source: Peter Nugent, LBNL
Professor of Astronomy, UC Berkeley
Precursors to
LSST and NCSA
NSF-Funded Cyberengineer
Shaw Dong @UCSC
Receiving FIONA
Feb 7, 2017
“The Knights of Cyberinfrastructure”
Monday @ CENIC 2018
18. Distributed Computation on PRP Nautilus HyperCluster
Coupling SDSU Cluster and SDSC Comet Using Kubernetes Containers
Developed and executed MPI-based PRP Kubernetes Cluster execution
[CO2,aq] 100 Year Simulation
[Plot snapshots at 25, 75, and 100 simulated years; run completed in 4 days]
• 0.5 km x 0.5 km x 17.5 m
• Three sandstone layers
separated by two shale
layers
Simulating the Injection of CO2
in Brine-Saturated Reservoirs:
Poroelastic & Pressure-Velocity
Fields Solved In Parallel With MPI
Using Domain Decomposition
Across Containers
Source: Chris Paolini and Jose Castillo, SDSU
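The domain decomposition named above can be sketched as the standard 1-D block distribution: each MPI rank (one per Kubernetes container) owns a contiguous slab of grid layers, with any remainder layers going to the lowest ranks. This is a generic illustration, not the SDSU solver's actual code; `slab_bounds` and the 35-layer example are made-up names and numbers:

```python
# Generic 1-D block domain decomposition across MPI ranks.
def slab_bounds(n_layers, rank, nprocs):
    """Return the half-open [lo, hi) range of layers owned by `rank`."""
    base, extra = divmod(n_layers, nprocs)
    lo = rank * base + min(rank, extra)       # earlier ranks absorb the remainder
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# e.g. 35 layers over 4 ranks -> slabs of 9, 9, 9, and 8 layers:
# slab_bounds(35, 0, 4) == (0, 9); slab_bounds(35, 3, 4) == (27, 35)
```

In the MPI setting, each rank would call this with its own rank id, then exchange the boundary layers ("ghost cells") with its neighbors after every timestep.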
19. 40G FIONAs
20x40G PRP-connected
WAVE@UC San Diego
PRP Now Enables
Distributed Virtual Reality
PRP
WAVE @UC Merced
Transferring 5 CAVEcam Images from UCSD to UC Merced:
2 Gigabytes now takes 2 Seconds (8 Gb/sec)
“Riding the WAVE”
Monday @ CENIC 2018
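The quoted rate is easy to verify: 2 gigabytes moved in 2 seconds works out to 8 gigabits per second.

```python
# 2 GB of CAVEcam images moved in 2 seconds, expressed in gigabits/second.
gigabytes = 2
seconds = 2
gbps = gigabytes * 8 / seconds  # 8 bits per byte
print(gbps)  # 8.0, matching the "8 Gb/sec" on the slide
```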
20. The Prototype PRP Has Attracted
New Application Drivers
Scott Sellars, Marty Ralph
Center for Western Weather
and Water Extremes
Frank Vernon, Graham Kent, & Ilkay Altintas, Wildfires
Jules Jaffe – Undersea Microscope
Tom Levy
At-Risk Cultural Heritage
21. PRP Links At-Risk Cultural Heritage and Archaeology Datasets
at UCB, UCLA, UCM and UCSD with CAVEkiosks
48 Megapixel CAVEkiosk
UCSD Library
48 Megapixel CAVEkiosk
UCB Library
24 Megapixel CAVEkiosk
UCM Library
UC President Napolitano's Research Catalyst Award to UC San Diego (Tom Levy),
UC Berkeley (Benjamin Porter), UC Merced (Nicola Lercari) and UCLA (Willeke Wendrich)
“Visualizing and Networking
for Cultural Heritage”
Monday @ CENIC 2018
22. Church Fire, San Diego CA
Alert SD&ECameras/HPWREN
October 21, 2017
New PRP Application:
Coupling Wireless Wildfire Sensors to Computing
“WIFIRE’s Firemap”
Tuesday @ CENIC 2018
Thomas Fire, Ventura, CA
Firemap Tool, WIFIRE
December 10, 2017
CENIC 2018
Innovations in Networking Award
for Experimental Applications
23. temperature
relative humidity
fuel moisture
fuel temperature
data logger
barometric pressure
Pan-tilt-zoom camera
support
equipment
3D ultrasonic
anemometer
solar
radiation
tipping
rainbucket
anemometer
Mount Laguna Meteorological Sensor Instrumentation Provides
Real-Time Data Flows Over HPWREN to PRP-Connected Servers
Source: Hans-Werner Braun, SDSC
24. HPWREN-Connected SoCal Weather Stations:
Giving High-Resolution Weather Data in San Diego County
All Connected by
HPWREN Wireless Internet
25. PRP/CENIC Backbone Sets Stage for 2018 Expansion
of HPWREN Wireless Connectivity Into Orange and Riverside Counties
• PRP CENIC 100G
Links UCSD, SDSU &
UCI HPWREN
Servers
– FIONAs Endpoints
– Data Redundancy
– Disaster Recovery
– High Availability
– Kubernetes Handles
Software Containers
and Data
• Potential Future UCR
CENIC Anchor
UCR
UCI
UCSD
SDSU
Source: Frank Vernon,
Hans Werner Braun HPWREN
UCI Antenna Dedicated
June 27, 2017
“Wireless Extensions
of R&E Networks”
Monday @ CENIC 2018
26. Once a Wildfire is Spotted, PRP Brings High-Resolution Weather Data
to Fire Modeling Workflows in WIFIRE
Real-Time
Meteorological Sensors
Weather Forecast
Landscape data
WIFIRE Firemap
Fire Perimeter
Work Flow
PRP
Source: Ilkay Altintas, SDSC
27. Some Machine Learning Case Studies
To Improve on WIFIRE
• Smoke and fire perimeter detection based on imagery
• Prediction of Santa Ana and fire conditions specific to location
• Prediction of fuel build up based on fire and weather history
• NLP for understanding local conditions based on radio communications
• Deep learning on multi-spectra imagery for high resolution fuel maps
• Classification project to generate more accurate fuel maps (using Planet Labs satellite data)
All Require Periodic,
Dynamic, and
Programmatic
Access to Data!
Source: Ilkay Altintas, SDSC; Co-PI CHASE-CI
28. Director: F. Martin Ralph Website: cw3e.ucsd.edu
Big Data Collaboration with:
Source: Scott Sellars, CW3E
Collaboration on Atmospheric Water in the West
Between UC San Diego and UC Irvine
Director: Soroosh Sorooshian, UC Irvine; Website: http://chrs.web.uci.edu
29. Major Speedup in Scientific Workflow
Using the PRP
[Diagram: Calit2’s FIONA at UC Irvine linked over the Pacific Research Platform (10-100 Gb/s)
to Calit2’s FIONA and SDSC’s COMET at UC San Diego, with GPUs at both ends]
Complete workflow time: 20 days → 20 hrs → 20 minutes!
Source: Scott Sellars, CW3E
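Expressed in minutes, the three workflow times on the slide imply a 24x gain from the first improvement, a further 60x from the second, and 1,440x end-to-end:

```python
# Workflow times from the slide, converted to minutes.
before = 20 * 24 * 60   # 20 days
middle = 20 * 60        # 20 hours
after = 20              # 20 minutes

print(before // middle)  # 24x from the first improvement
print(middle // after)   # 60x from the second
print(before // after)   # 1440x end-to-end
```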
30. Using Machine Learning to Determine
the Precipitation Object Starting Locations
*Sellars et al., 2017 (in prep)
31. UC San Diego Jaffe Lab (SIO) Scripps Plankton Camera
Off the SIO Pier with Fiber Optic Network
32. Over 300 Million Images So Far!
Requires Machine Learning for Automated Image Analysis and Classification
Phytoplankton: Diatoms
Zooplankton: Copepods
Zooplankton: Larvaceans
Source: Jules Jaffe, SIO
”We are using the FIONAs for image processing...
this includes doing Particle Tracking Velocimetry
that is very computationally intense.”-Jules Jaffe
33. New NSF CHASE-CI Grant Creates a Community Cyberinfrastructure:
Adding a Machine Learning Layer Built on Top of the Pacific Research Platform
Caltech
UCB
UCI UCR
UCSD
UCSC
Stanford
MSU
UCM
SDSU
NSF Grant for High Speed “Cloud” of 256 GPUs
For 30 ML Faculty & Their Students at 10 Campuses
for Training AI Algorithms on Big Data
“Hyperconverged 10-Campus
Machine Learning Cluster”
Tuesday @ CENIC 2018
NSF Program Officer: Mimi McClure
34. PRP Hosted
The First National Research Platform Workshop on August 7-8, 2017
Co-Chairs:
Larry Smarr, Calit2
& Jim Bottum, Internet2
150 Attendees
Announced in I2 Closing Keynote:
Larry Smarr “Toward a National Big Data Superhighway”
on Wednesday, April 26, 2017
35. PRP is Partnering with the Advanced CyberInfrastructure –
Research and Education Facilitators (ACI-REF) NSF Grant to Explore Extension
PRP Connected
ACI-REF has also spawned the 28-member
Campus Research Computing consortium (CaRC),
funded by the NSF as a Research
Coordination Network (RCN).
CaRC is dedicated to sharing best
practices, expertise, and
resources, enabling the
advancement of campus-based
research computing activities
around the nation.
Jim Bottum, Principal Investigator
ACI-REF
CaRC
36. Expanding to the Global Research Platform
Via CENIC/Pacific Wave, Internet2, and International Links
PRP
PRP’s Current
International
Partners
Korea Shows Distance is Not the Barrier
to Above 5Gb/s Disk-to-Disk Performance
Netherlands
Guam
Australia
Korea
Japan
Singapore
37. First NRP Workshop
Recommendations
• Specifically, We Recommend That NSF:
– Continue funding its Campus Cyberinfrastructure program, including more
opportunities for campus cyber-engineers and cyber teams.
– Issue a call for proposals to address the tough technical and sociopolitical issues that
will need to be addressed in an NRP; in fact, this could be an excellent opportunity for
DOE and NSF to work together on a joint program.
– Engage science engagement facilitators on campuses to collaborate with each other
across campuses in support of building the NRP.
– Support more regional Science DMZs to be formed from existing CC* grants to
campuses. Multi-campus networking organizations should be encouraged to take the
initiative to create regional DMZ proposals, including campuses that have not
previously received NSF CC* grants. Parallelism is the key to scaling rather than
central control.
“Scaling Approaches to the NRP”
Wednesday @ CENIC 2018
38. The Second National Research Platform Workshop
Bozeman, MT August 6-7, 2018
A follow-up FIONA workshop
will be held as a lead into
the 2nd NRP workshop in Bozeman,
starting August 2nd.
The program is being developed
by Jerry Sheehan, in coordination
with Richard Alo (JSU) and will focus on
networking engineers and faculty
interested in expanding
the breadth of the NRP network.
While the workshop will be open to
the community, there is a specific focus
on EPSCoR affiliated
and minority serving institutions.
Co-Chairs:
Larry Smarr, Calit2
Inder Monga, ESnet
Ana Hunsinger, Internet2
Local Host: Jerry Sheehan, MSU
“Navajo Broadband”
Tuesday @ CENIC 2018
PRP FIONA Workshop
Sat/Sun @ CENIC 2018
39. Our Support:
• US National Science Foundation (NSF) awards
CNS-0821155, CNS-1338192, CNS-1456638, CNS-1730158,
ACI-1540112, & ACI-1541349
• University of California Office of the President CIO
• UCSD Chancellor’s Integrated Digital Infrastructure Program
• UCSD Next Generation Networking initiative
• Calit2 and Calit2 Qualcomm Institute
• CENIC, PacificWave and StarLight
• DOE ESnet