Panel Presentation - Tom DeFanti with Larry Smarr and Frank Wuerthwein - Naut... - Larry Smarr
The document discusses the Nautilus distributed community cyberinfrastructure cluster which provides over 1,100 GPUs and 5 petabytes of storage distributed across research institutions in the US. It is supported by NSF grants and aims to enhance and sustain its resources while expanding its user community and supporting research in areas like machine learning, computational media, and science. The cluster sees heavy usage from projects in fields such as high energy physics, astronomy, wildfires and more.
Pacific Research Platform Science Drivers - Larry Smarr
The document discusses the vision and progress of the Pacific Research Platform (PRP) in creating a "big data freeway" across the West Coast to enable data-intensive science. It outlines how the PRP builds on previous NSF and DOE networking investments to provide dedicated high-performance computing resources, like GPU clusters and Jupyter hubs, connected by high-speed networks at multiple universities. Several science driver teams are highlighted, including particle physics, astronomy, microbiology, earth sciences, and visualization, that will leverage PRP resources for large-scale collaborative data analysis projects.
The document discusses the Pacific Research Platform (PRP), a distributed cyberinfrastructure that connects researchers and data across multiple campuses in California and beyond using optical fiber networking. Key points:
- The PRP uses high-speed networking infrastructure like the CENIC network to connect data generators and consumers across 15+ campuses, creating an integrated "big data freeway system".
- It deploys specialized data transfer nodes called FIONAs to enable high-speed transfer of large datasets between sites at near the full network speed.
- Recent additions include using Kubernetes to orchestrate containers across the PRP infrastructure and integrating machine learning resources through the CHASE-CI grant to support data-intensive AI applications.
The Pacific Research Platform (PRP) is a multi-institutional cyberinfrastructure project that connects researchers across California and beyond to share large datasets. It spans the 10 University of California campuses, major private research universities, supercomputer centers, and some out-of-state universities. Fifteen multi-campus research teams in fields like physics, astronomy, earth sciences, biomedicine, and multimedia will drive the technical needs of the PRP over five years. The goal is to create a "big data freeway" to allow high-speed sharing of data between research labs, supercomputers, and repositories across multiple networks without performance loss over long distances.
Looking Back, Looking Forward: NSF CI Funding 1985-2025 - Larry Smarr
This document provides an overview of the development of national research platforms (NRPs) from 1985 to the present, with a focus on the Pacific Research Platform (PRP). It describes the evolution of the PRP from early NSF-funded supercomputing centers to today's distributed cyberinfrastructure utilizing optical networking, containers, Kubernetes, and distributed storage. The PRP now connects over 15 universities across the US and internationally to enable data-intensive science and machine learning applications across multiple domains. Going forward, the document discusses plans to further integrate regional networks and partner with new NSF-funded initiatives to develop the next generation of NRPs through 2025.
- The Pacific Research Platform (PRP) interconnects campus DMZs across multiple institutions to provide high-speed connectivity for data-intensive research.
- The PRP utilizes specialized data transfer nodes called FIONAs that provide disk-to-disk transfer speeds of 10-100Gbps.
- Early applications of the PRP include distributing telescope data between UC campuses, connecting particle physics experiments to computing resources, and enabling real-time wildfire sensor data analysis.
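The practical value of those 10-100Gbps FIONA links is easy to see with back-of-the-envelope arithmetic. The sketch below is purely illustrative; the function and its 80% disk-to-disk efficiency figure are assumptions for the example, not PRP measurements:

```python
def transfer_time_hours(dataset_tb: float, throughput_gbps: float,
                        efficiency: float = 0.8) -> float:
    """Estimate wall-clock hours to move a dataset at a given line rate.

    dataset_tb: dataset size in terabytes (decimal, 1 TB = 8e12 bits)
    throughput_gbps: nominal DTN/link rate in gigabits per second
    efficiency: fraction of line rate actually achieved disk-to-disk
    """
    bits = dataset_tb * 8e12
    seconds = bits / (throughput_gbps * 1e9 * efficiency)
    return seconds / 3600

# A 100 TB dataset at the two ends of the FIONA range (80% efficiency assumed):
slow = transfer_time_hours(100, 10)    # ~27.8 hours at 10 Gbps
fast = transfer_time_hours(100, 100)   # ~2.8 hours at 100 Gbps
```

At commodity 1Gbps campus rates the same transfer would take over a week, which is why dedicated data transfer nodes matter for data-intensive science.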
Global Research Platforms: Past, Present, Future - Larry Smarr
The Pacific Research Platform: Building a Distributed Big-Data Machine-Learni... - Larry Smarr
The document summarizes the Pacific Research Platform (PRP) which connects researchers across multiple universities with high-speed networks and computing resources for big data and machine learning applications. Key points:
- PRP connects 15 universities with optical networks, distributed storage devices (FIONAs), and over 350 GPUs for data analysis and AI training.
- It allows researchers to rapidly share and analyze large datasets, with one example reducing a workflow from 19 days to 52 minutes.
- Other projects using PRP resources include climate modeling, astrophysics simulations, and machine learning courses involving thousands of students.
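The 19-days-to-52-minutes figure above corresponds to a speedup of roughly 500x, as a quick calculation shows:

```python
DAY_MIN = 24 * 60

before_min = 19 * DAY_MIN   # the original 19-day workflow, in minutes
after_min = 52              # the same workflow on PRP resources

speedup = before_min / after_min
print(f"{speedup:.0f}x speedup")  # ~526x
```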
The Pacific Research Platform: A Science-Driven Big-Data Freeway System - Larry Smarr
The Pacific Research Platform will create a regional "Big Data Freeway System" along the West Coast to support science. It will connect major research institutions with high-speed optical networks, allowing them to share vast amounts of data and computational resources. This will enable new forms of collaborative, data-intensive research for fields like particle physics, astronomy, biomedicine, and earth sciences. The first phase aims to establish a basic networked infrastructure, with later phases advancing capabilities to 100Gbps and beyond with security and distributed technologies.
Berkeley Cloud Computing Meetup, May 2020 - Larry Smarr
The Pacific Research Platform (PRP) is a high-bandwidth global private "cloud" connected to commercial clouds that provides researchers with distributed computing resources. It links Science DMZs at universities across California and beyond using a high-performance network. The PRP utilizes Data Transfer Nodes called FIONAs to transfer data at near full network speeds. It has adopted Kubernetes to orchestrate software containers across its resources. The PRP provides petabytes of distributed storage and hundreds of GPUs for machine learning. It allows researchers to perform data-intensive science across multiple universities much faster than possible individually.
Peering The Pacific Research Platform With The Great Plains Network - Larry Smarr
The Pacific Research Platform (PRP) connects research institutions across the western United States with high-speed networks to enable data-intensive science collaborations. Key points:
- The PRP connects 15 campuses across California and links to the Great Plains Network, allowing researchers to access remote supercomputers, share large datasets, and collaborate on projects like analyzing data from the Large Hadron Collider.
- The PRP utilizes Science DMZ architectures with dedicated data transfer nodes called FIONAs to achieve high-speed transfer of large files. Kubernetes is used to manage distributed storage and computing resources.
- Early applications include distributed climate modeling, wildfire science, plankton imaging, and cancer genomics.
Creating a Big Data Machine Learning Platform in California - Larry Smarr
Big Data Tech Forum: Big Data Enabling Technologies and Applications
San Diego Chinese American Science and Engineering Association (SDCASEA)
Sanford Consortium
La Jolla, CA
December 2, 2017
Opening Keynote Lecture
15th Annual ON*VECTOR International Photonics Workshop
Calit2’s Qualcomm Institute
University of California, San Diego
February 29, 2016
A California-Wide Cyberinfrastructure for Data-Intensive Research - Larry Smarr
The document discusses creating a California-wide cyberinfrastructure for data-intensive research. It outlines efforts to connect all UC campuses and other research institutions across California with high-speed optical networks. This would create a "big data plane" to share large datasets. Several campuses have received NSF grants to upgrade their networks and implement Science DMZ architectures with 10-100Gbps connections to CENIC. Connecting these resources would provide researchers access to high-performance computing, large scientific instruments, and datasets. This would support collaborative big data science across disciplines like physics, climate modeling, genomics and microscopy.
The Pacific Research Platform: Building a Distributed Big-Data Machine-Learni... - Larry Smarr
The Pacific Research Platform (PRP) is a distributed big data and machine learning cyberinfrastructure connecting researchers across multiple UC campuses. It was established in 2015 with NSF funding and has since expanded to include other California universities and national/international partners. The PRP provides high-speed networks, storage, and computing resources like GPUs. It has enabled new data-intensive collaborations and significantly accelerated research workflows. The PRP also supports educational initiatives, providing computing resources for data science courses impacting thousands of students.
Similar to Getting Started Using the National Research Platform
The Rise of Supernetwork Data Intensive Computing - Larry Smarr
Invited Remote Lecture to SC21
The International Conference for High Performance Computing, Networking, Storage, and Analysis
St. Louis, Missouri
November 18, 2021
My Remembrances of Mike Norman Over The Last 45 Years - Larry Smarr
Mike Norman has been a leader in computational astrophysics for over 45 years. His influential work includes:
- Cosmic jet simulations in the early 1980s which helped explain phenomena from galactic centers.
- Pioneering the use of adaptive mesh refinement in the 1990s to achieve dynamic load balancing on supercomputers.
- Massive cosmology simulations in the late 2000s with over 100 trillion particles using thousands of processors across multiple supercomputing sites, producing petabytes of data.
- Developing end-to-end workflows in the 2000s to couple supercomputers, high-speed networks, and large visualization systems to enable real-time analysis of extremely large astrophysics simulations.
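The adaptive mesh refinement (AMR) idea credited to Norman above can be illustrated with a toy 1D sketch: add resolution only where the solution changes rapidly, rather than refining the whole grid uniformly. This is a hypothetical illustration of the technique, not code from Norman's actual simulations:

```python
def refine_1d(cells, values, threshold):
    """One AMR pass over a 1D grid.

    cells: list of (lo, hi) intervals; values: one value per cell.
    Any cell whose jump to its right neighbour exceeds `threshold`
    is split in two; smooth regions keep their coarse cells.
    """
    out = []
    for i, (lo, hi) in enumerate(cells):
        jump = abs(values[i + 1] - values[i]) if i + 1 < len(values) else 0.0
        if jump > threshold:
            mid = 0.5 * (lo + hi)
            out.append((lo, mid))
            out.append((mid, hi))
        else:
            out.append((lo, hi))
    return out

# A sharp feature between cells 1 and 2 gets refined; the rest stays coarse.
cells = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
values = [0.0, 0.1, 2.0, 2.1]
print(refine_1d(cells, values, threshold=0.5))
```

Production AMR codes apply this recursively over many levels and rebalance the refined patches across processors, which is where the dynamic load balancing mentioned above comes in.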
Metagenics: How Do I Quantify My Body and Try to Improve its Health? June 18, 2019 - Larry Smarr
Larry Smarr discusses quantifying his body and health over time through extensive self-tracking. He measures various biomarkers through regular blood tests and analyzes his gut microbiome by sequencing stool samples. This revealed issues like chronic inflammation and an unhealthy microbiome. Smarr then took steps like a restricted eating window and increasing plant diversity in his diet, which reversed metabolic syndrome issues and correlated with shifts in his microbiome ecology. His goal is to continue precisely measuring factors like toxins, hormones, gut permeability and food/supplement impacts to further optimize his health.
Panel: Reaching More Minority Serving Institutions - Larry Smarr
This document discusses engaging more minority-serving institutions (MSIs) in cyberinfrastructure development through regional networks. It presents data showing the importance of MSIs, such as historically black colleges and universities (HBCUs), in educating underrepresented minority students in STEM fields. Regional networks can help equalize opportunities by assisting MSIs in overcoming barriers to resources through training, networking infrastructure support, and help in obtaining the necessary staffing and funding. Strategies mentioned include collaborating with MSIs on grants and addressing issues identified in surveys, such as the lack of a vision for data use beyond compliance. The goal is to broaden participation in STEAM fields by leveraging the success MSIs have shown in supporting underrepresented students.
Global Network Advancement Group - Next Generation Network-Integrated Systems - Larry Smarr
This document summarizes a presentation on global petascale to exascale workflows for data intensive sciences. It discusses a partnership convened by the GNA-G Data Intensive Sciences Working Group with the mission of meeting challenges faced by data-intensive science programs. Cornerstone concepts that will be demonstrated include integrated network and site resource management, model-driven frameworks for resource orchestration, end-to-end monitoring with machine learning-optimized data transfers, and integrating Qualcomm's GradientGraph with network services to optimize applications and science workflows.
Wireless FasterData and Distributed Open Compute Opportunities and (some) Us... - Larry Smarr
This document discusses opportunities for ESnet to support wireless edge computing through developing a strategy around self-guided field laboratories (SGFL). It outlines several potential science use cases that could benefit from wireless and distributed computing capabilities, both in the short term through technologies like 5G, LoRa and Starlink, and longer term through the vision of automated SGFL. The document proposes some initial ideas for deploying and testing wireless edge computing technologies through existing projects to help enable the SGFL vision and further scientific opportunities. It emphasizes that exploring these emerging areas could help drive new science possibilities if done at a reasonable scale.
The Asia Pacific and Korea Research Platforms: An Overview - Jeonghoon Moon - Larry Smarr
This document provides an overview of Asia Pacific and Korea research platforms. It discusses the Asia Pacific Research Platform working group in APAN, including its objectives to promote HPC ecosystems and engage members. It describes the Asi@Connect project which provides high-capacity internet connectivity for research across Asia-Pacific. It also discusses the Korea Research Platform and efforts to expand it to 25 national research institutes in Korea. New related projects on smart hospitals, agriculture, and environment are mentioned. The conclusion discusses enhancing APAN and the Korea Research Platform and expanding into new areas like disaster and AI education.
This slide deck is a deep dive into Salesforce's latest release, Summer '24, by Stephen Stanley. He examined the release notes carefully and summarised them for the Wellington Salesforce user group's virtual meeting on June 27, 2024.
Dev Dives: Mining your data with AI-powered Continuous Discovery - UiPathCommunity
Want to learn how AI and Continuous Discovery can uncover impactful automation opportunities? Watch this webinar to find out more about UiPath Discovery products!
Watch this session and:
👉 See the power of UiPath Discovery products, including Process Mining, Task Mining, Communications Mining, and Automation Hub
👉 Watch the demo of how to leverage system data, desktop data, or unstructured communications data to gain deeper understanding of existing processes
👉 Learn how you can benefit from each of the discovery products as an Automation Developer
🗣 Speakers:
Jyoti Raghav, Principal Technical Enablement Engineer @UiPath
Anja le Clercq, Principal Technical Enablement Engineer @UiPath
⏩ Register for our upcoming Dev Dives July session: Boosting Tester Productivity with Coded Automation and Autopilot™
👉 Link: https://bit.ly/Dev_Dives_July
This session was streamed live on June 27, 2024.
Check out all our upcoming Dev Dives 2024 sessions at:
🚩 https://bit.ly/Dev_Dives_2024
Test Management, Chapter 5 of the ISTQB Foundation syllabus. Topics covered: Test Organization, Test Planning and Estimation, Test Monitoring and Control, Test Execution Schedule, Test Strategy, Risk Management, and Defect Management.
An Introduction to All Data Enterprise Integration - Safe Software
Are you spending more time wrestling with your data than actually using it? You’re not alone. For many organizations, managing data from various sources can feel like an uphill battle. But what if you could turn that around and make your data work for you effortlessly? That’s where FME comes in.
We’ve designed FME to tackle these exact issues, transforming your data chaos into a streamlined, efficient process. Join us for an introduction to All Data Enterprise Integration and discover how FME can be your game-changer.
During this webinar, you’ll learn:
- Why Data Integration Matters: How FME can streamline your data process.
- The Role of Spatial Data: Why spatial data is crucial for your organization.
- Connecting & Viewing Data: See how FME connects to your data sources, with a flash demo to showcase.
- Transforming Your Data: Find out how FME can transform your data to fit your needs. We’ll bring this process to life with a demo leveraging both geometry and attribute validation.
- Automating Your Workflows: Learn how FME can save you time and money with automation.
Don’t miss this chance to learn how FME can bring your data integration strategy to life, making your workflows more efficient and saving you valuable time and resources. Join us and take the first step toward a more integrated, efficient, data-driven future!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/efficiency-unleashed-the-next-gen-nxp-i-mx-95-applications-processor-for-embedded-vision-a-presentation-from-nxp-semiconductors/
James Prior, Senior Product Manager at NXP Semiconductors, presents the “Efficiency Unleashed: The Next-gen NXP i.MX 95 Applications Processor for Embedded Vision” tutorial at the May 2024 Embedded Vision Summit.
Machine vision is the most obvious way to help humans live better, enabling hundreds of applications spanning security, monitoring, inspection and more. Modern edge processors need private on-device and scalable hybrid machine learning capabilities to offer enough longevity to stay relevant in industrial and commercial IoT markets. In this talk, Prior presents the upcoming i.MX 95 family of applications processors.
The i.MX 95 features a new, self-developed neural processing unit from NXP—the eIQ Neutron NPU. Designed to scale from today’s conventional neural networks to tomorrow’s transformer-based models, the eIQ Neutron NPU scalable architecture delivers edge AI capabilities at high efficiency with award-winning tools, combined with chip-level security and privacy features. The i.MX 95 applications processor family features powerful processing and vision capabilities combined with safety, security and expandable high-speed interfaces.
Cassandra to ScyllaDB: Technical Comparison and the Path to Success - ScyllaDB
What can you expect when migrating from Cassandra to ScyllaDB? This session provides a jumpstart based on what we’ve learned from working with your peers across hundreds of use cases. Discover how ScyllaDB’s architecture, capabilities, and performance compares to Cassandra’s. Then, hear about your Cassandra to ScyllaDB migration options and practical strategies for success, including our top do’s and don’ts.
Day 4 - Excel Automation and Data Manipulation - UiPathCommunity
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program: https://bit.ly/Africa_Automation_Student_Developers
In this fourth session, we shall learn how to automate Excel-related tasks and manipulate data using UiPath Studio.
📕 Detailed agenda:
About Excel Automation and Excel Activities
About Data Manipulation and Data Conversion
About Strings and String Manipulation
💻 Extra training through UiPath Academy:
Excel Automation with the Modern Experience in Studio
Data Manipulation with Strings in Studio
👉 Register here for our upcoming Session 5/ June 25: Making Your RPA Journey Continuous and Beneficial: https://community.uipath.com/events/details/uipath-lagos-presents-session-5-making-your-automation-journey-continuous-and-beneficial/
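The string-manipulation topics in the agenda above (trimming, splitting, and converting data fields) can be sketched in a few lines. This is a conceptual illustration in Python with made-up sample data; the session itself uses UiPath Studio, where these operations are written as VB.NET/C# expressions:

```python
# Sample record with inconsistent whitespace, as often extracted from Excel.
raw = "  Okafor, Amina ;  NG-2024-0017  "

# Trim whitespace, split on the delimiter, and normalise each field.
fields = [f.strip() for f in raw.strip().split(";")]
last, first = (p.strip() for p in fields[0].split(","))
record_id = fields[1]

full_name = f"{first} {last}".title()
print(full_name, record_id)   # Amina Okafor NG-2024-0017
```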
Move Auth, Policy, and Resilience to the Platform - Christian Posta
Developers' time is the most crucial resource in an enterprise IT organization. Too much of it is spent on undifferentiated heavy lifting, and in the world of APIs and microservices much of that goes to non-functional, cross-cutting networking requirements like security, observability, and resilience.
As organizations consolidate their DevOps practices into platform engineering, tools like Istio help alleviate developer pain. In this talk we dig into what that pain looks like, how much it costs, and how Istio has addressed these concerns, examining three real-life use cases. Since this space continues to evolve and innovation has not slowed, we will also discuss the recently announced Istio sidecar-less mode, which significantly reduces the hurdles to adopting Istio within or outside Kubernetes.
What is an RPA CoE? Session 4 – CoE Scaling - DianaGray10
How to scale a COE to meet organizational missions.
Topics covered:
• What is the original focal area?
• How to expand the COE globally.
• Is a centralized or decentralized model better for scaling?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Corporate Open Source Anti-Patterns: A Decade Later - ScyllaDB
A little over a decade ago, I gave a talk on corporate open source anti-patterns, vowing that I would return in ten years to give an update. Much has changed in the last decade: open source is pervasive in infrastructure software, with many companies (like our hosts!) having significant open source components from their inception. But just as open source has changed, the corporate anti-patterns around open source have changed too: where the challenges of the previous decade were all around how to open source existing products (and how to engage with existing communities), the challenges now seem to revolve around how to thrive as a business without betraying the community that made it one in the first place. Open source remains one of humanity's most important collective achievements and one that all companies should seek to engage with at some level; in this talk, we will describe the changes that open source has seen in the last decade, and provide updated guidance for corporations for ways not to do it!
The "Zen" of Python Exemplars - OTel Community Day - Paige Cruz
The Zen of Python states "There should be one-- and preferably only one --obvious way to do it." OpenTelemetry is the obvious choice for traces but bad news for Pythonistas when it comes to metrics because both Prometheus and OpenTelemetry offer compelling choices. Let's look at all of the ways you can tie metrics and traces together with exemplars whether you're working with OTel metrics, Prom metrics, Prom-turned-OTel metrics, or OTel-turned-Prom metrics!
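The core idea behind exemplars (a metric sample that carries a pointer back into tracing data) can be sketched in a few lines. This is a conceptual, stdlib-only illustration, not the prometheus_client or OpenTelemetry SDK API; it also simplifies real Prometheus histograms, whose buckets are cumulative:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Exemplar:
    """A sampled observation annotated with the trace that produced it."""
    value: float
    trace_id: str
    timestamp: float

@dataclass
class ToyHistogram:
    """Toy histogram keeping one exemplar per bucket: the metric point
    links back to a representative trace, so a latency spike on a
    dashboard can jump straight to the trace that caused it."""
    buckets: tuple = (0.1, 0.5, 1.0, float("inf"))
    counts: dict = field(default_factory=dict)
    exemplars: dict = field(default_factory=dict)

    def observe(self, value: float, trace_id: str):
        # Simplified: count only the first matching bucket
        # (real Prometheus buckets are cumulative).
        for le in self.buckets:
            if value <= le:
                self.counts[le] = self.counts.get(le, 0) + 1
                # Keep the most recent exemplar for this bucket.
                self.exemplars[le] = Exemplar(value, trace_id, time.time())
                break

h = ToyHistogram()
h.observe(0.42, trace_id="4bf92f3577b34da6")
print(h.exemplars[0.5].trace_id)  # 4bf92f3577b34da6
```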
The document discusses testing throughout the software development life cycle. It describes different software development models including sequential, incremental, and iterative models. It also covers different test levels from component and integration testing to system and acceptance testing. The document discusses different types of testing including functional and non-functional testing. It also covers topics like maintenance testing and triggers for additional testing when changes are made. Also covers concepts of Agile including DevOps, Shift Left Approach, TDD, BDD, ATDD, Retrospective and Process Improvement
For senior executives, successfully managing a major cyber attack relies on your ability to minimise operational downtime, revenue loss and reputational damage.
Indeed, the approach you take to recovery is the ultimate test for your Resilience, Business Continuity, Cyber Security and IT teams.
Our Cyber Recovery Wargame prepares your organisation to deliver an exceptional crisis response.
Event date: 19th June 2024, Tate Modern
TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...TrustArc
Global data transfers can be tricky due to different regulations and individual protections in each country. Sharing data with vendors has become such a normal part of business operations that some may not even realize they’re conducting a cross-border data transfer!
The Global CBPR Forum launched the new Global Cross-Border Privacy Rules framework in May 2024 to ensure that privacy compliance and regulatory differences across participating jurisdictions do not block a business's ability to deliver its products and services worldwide.
To benefit consumers and businesses, Global CBPRs promote trust and accountability while moving toward a future where consumer privacy is honored and data can be transferred responsibly across borders.
This webinar will review:
- What is a data transfer and its related risks
- How to manage and mitigate your data transfer risks
- How do different data transfer mechanisms like the EU-US DPF and Global CBPR benefit your business globally
- Globally what are the cross-border data transfer regulations and guidelines
Metadata Lakes for Next-Gen AI/ML - DatastratoZilliz
As data catalogs evolve to meet the growing and new demands of high-velocity, unstructured data, we see them taking a new shape as an emergent and flexible way to activate metadata for multiple uses. This talk discusses modern uses of metadata at the infrastructure level for AI-enablement in RAG pipelines in response to the new demands of the ecosystem. We will also discuss Apache (incubating) Gravitino and its open source-first approach to data cataloging across multi-cloud and geo-distributed architectures.
ThousandEyes New Product Features and Release Highlights: June 2024
Getting Started Using the National Research Platform
1. “Getting Started Using
the National Research Platform”
SoX Monthly Workshop
Remote Presentation
September 29, 2023
Dr. Larry Smarr
Founding Director Emeritus, California Institute for Telecommunications and Information Technology;
Distinguished Professor Emeritus, Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD
http://lsmarr.calit2.net
2. The December 2022 Pacific Research Platform Video
Highlights 3 Different Applications Out of 1000 Nautilus Namespace Projects
Pacific Research Platform Video:
https://pacificresearchplatform.org/media/pacific-research-platform-video/
3. NSF CC*DNI Grant
$7.3M 10/2015-10/2020
Extended 2 Years – Ended 10/2022
(GDC)
2015-2022: The Pacific Research Platform NSF Grants
Were Built on the CENIC Regional Optical Network
Source: John Hess, CENIC
Supercomputer Centers
4. 2015-2022: UCSD Designs PRP Data Transfer Nodes (DTNs) --
Flash I/O Network Appliances (FIONAs)
FIONAs Solved the Disk-to-Disk Data Transfer Problem
at Near Full Speed on Best-Effort 10G, 40G and 100G
FIONAs Designed by UCSD’s Phil Papadopoulos, John Graham,
Joe Keefe, and Tom DeFanti
Add Up to 8 Nvidia GPUs Per 2U FIONA
To Add Machine Learning Capability
Up to 240TB Storage
https://nationalresearchplatform.org/fiona/
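The disk-to-disk problem the FIONAs solved is at heart a throughput-measurement problem. As a toy, hedged sketch (illustrative Python only, not the PRP's actual DTN benchmarking tooling; the sizes are arbitrary), one can time a chunked local copy the same way transfer tests report MB/s:

```python
import os
import tempfile
import time

def measure_copy_mbps(src_bytes=8 * 1024 * 1024, chunk=1024 * 1024):
    """Time a chunked disk-to-disk copy and report throughput in MB/s.

    A stand-in for the kind of metric FIONA tuning targets; real DTN
    tests move far larger datasets over best-effort 10G-100G WAN links.
    """
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "src.bin")
        dst = os.path.join(d, "dst.bin")
        with open(src, "wb") as f:
            f.write(os.urandom(src_bytes))
        start = time.perf_counter()
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            # Copy in fixed-size chunks, as a transfer tool would stream data.
            while block := fin.read(chunk):
                fout.write(block)
        elapsed = time.perf_counter() - start
        return (src_bytes / 1e6) / max(elapsed, 1e-9)

print(f"local copy throughput: {measure_copy_mbps():.0f} MB/s")
```

Production DTN testing uses dedicated wide-area transfer tools; this sketch only illustrates the metric the appliances were designed to maximize.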
5. 2018/2019: PRP Game Changer!
Using Google’s Kubernetes to Orchestrate Containers Across the PRP
[Diagram labels: User Applications, Clouds, Containers]
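The move to Kubernetes means PRP/NRP users request resources declaratively and the hypercluster places the work. As an illustrative sketch (the pod name, image, and GPU count below are hypothetical, not taken from the slides), a minimal pod manifest requesting one Nvidia GPU can be built and serialized with nothing but the standard library:

```python
import json

def gpu_pod_manifest(name, image, gpus=1):
    """Build a minimal Kubernetes pod manifest requesting Nvidia GPUs.

    GPU scheduling on Kubernetes works through extended-resource limits
    of the form 'nvidia.com/gpu'; the rest is a generic pod spec.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": {"nvidia.com/gpu": gpus}},
            }],
            "restartPolicy": "Never",
        },
    }

# Example: a hypothetical single-GPU job.
manifest = gpu_pod_manifest("train-demo", "tensorflow/tensorflow:latest-gpu")
print(json.dumps(manifest, indent=2))
```

Submitting such a manifest (e.g. via `kubectl apply -f`) lets the scheduler place the container on any FIONA with a free GPU, which is what turns ~200 machines on 30 campuses into one cluster.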
7. 2023: NRP’s Nautilus is a Multi-Institution Hypercluster
Which Creates a “Cyberinfrastructure Commons”
~200 FIONAs on 30 Partner Campuses
Networked Together at 10-100Gbps
Sept. 29, 2023
8. The PRPNRP Has Emphasized
Expanding Diversity and Inclusion
The Expansion from PRP in 2015 to the NRP in 2023:
– 6 States → 45 States
– 19 Campuses → 135 Campuses
– 9 Minority Serving Institutions → 24 MSIs
– 2 NSF EPSCoR States → 20 EPSCoR States, 2 Territories, and Washington, DC
9. The Key Role of Regional Optical Network Meetings
to Engage More Campuses in Using NRP
www.thequilt.net/quilt-circle/snapshot-scaling-a-national-research-platform/
See https://nationalresearchplatform.org/events/fourth-national-research-platform-4nrp/
for slides and videos of all 4NRP presentations
Jen Leasure
President & CEO,
The Quilt
10. NRP’s Nautilus Hypercluster Is Hosted on Campuses
Across the United States, Interconnected by The Quilt and Internet2
[Map legend: Non-MSI Institutions, Minority Serving Institutions, EPSCoR Institutions]
– 238 GPUs over CENIC: CSUSB + SDSU
– 88 GPUs over CENIC: UCI + UCR + UCM + UCSC + UCSB
– 511 GPUs over CENIC: UCSD + SDSC
– 21 GPUs over MREN: UIC
– 184 GPUs over GPN: U. Nebraska-L
– 8 GPUs over FLR: FAMU
– 12 GPUs over NYSERNet: NYU
– 21 GPUs over SoX: Clemson U
– 9 GPUs over GPN: U. South Dakota + SD State
– 4 GPUs via Albuquerque GigaPoP: U. New Mexico
– 12 GPUs over NYSERNet: U. Delaware
– 2 GPUs over OARnet: CWRU
– 1 GPU over CENIC/PW: U. Hawaii
– 1 GPU over CENIC/PW: U. Guam
– 144 GPUs over NEREN: MGHPCC
– 8 GPUs over GPN: U. Oklahoma
– 16 GPUs over GPN: U. Missouri
11. 12 Campuses Connected by SoX, FLR, or MARIA
Have 1 or More Users Who Have Logged Onto Nautilus
Total: 52 Nautilus Users
12. 6 Campuses Connected by SoX, FLR, or MARIA
Have Created 1 or More Nautilus Namespaces
Total: 17 Nautilus Namespaces
13. In the Last 12 Months, Only 2 Campuses Connected by SoX, FLR, or MARIA
Have Used Nautilus GPUs or CPUs
14. The NRP Web Site Helps Users
Get Started Using NRP’s Nautilus
https://nationalresearchplatform.org/nautilus/
15. California State University San Bernardino is an Excellent Example
of How to Help Your Faculty and Students Use NRP
www.csusb.edu/academic-technologies-innovation/xreal-lab-and-high-performance-computing/high-performance-computing
Their Campus HPC Program
Enabled CSUSB Faculty & Students
to Use More NRP GPU-Hours
In the Last 12 Months
Than 8 of the 10 UC Campuses!
16. The CSUSB HPC Team Is the Major Reason
For CSUSB Having the Largest NRP Utilization of 23 CSUs
• Dr. Sam Sudhakar
– Chief Financial Officer and Vice President, Finance, Technology, & Operations
• Gerard Au
– Chief Information Officer
• Dr. Bradford Owen
– AVP for Faculty Development, Chief Academic Technologies Officer
• Dr. Dung Vu
– HPC Consultant, Analyst/Programmer
• James MacDonell
– HPC Consultant, Information Security Analyst
• Prof. Youngsu Kim
– HPC Faculty Fellow, Asst. Prof. of Mathematics
17. A Key Reason CSUSB Has The Largest CSU Nautilus Usage:
They Installed and Publicized the JupyterHub “Easy Button”
https://csusb-jupyter.nrp-nautilus.io/hub/login
Slide Adapted from
Prof. Youngsu Kim
Over 150 Total Users!
18. CSUSB Provides Human and On-Line Support
For Faculty, Students, and Staff to Easily Use JupyterHub to Access NRP
www.csusb.edu/faculty-center-for-excellence/idat/high-performance-computing/jupyterhub-nrp
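Once logged into a JupyterHub "Easy Button" deployment like CSUSB's, a common first sanity check is confirming that a GPU is actually attached to the session. A minimal helper for that (an illustration assumed here, not part of the CSUSB materials) might look like:

```python
import shutil
import subprocess

def gpus_visible():
    """Return the number of GPUs visible to this Jupyter session.

    On a GPU-backed Nautilus node, nvidia-smi prints one line per GPU;
    on a CPU-only machine (or if nvidia-smi is absent) we report 0.
    """
    if shutil.which("nvidia-smi") is None:
        return 0
    try:
        out = subprocess.run(
            ["nvidia-smi", "--list-gpus"],
            capture_output=True, text=True, check=True,
        )
        return len([line for line in out.stdout.splitlines() if line.strip()])
    except (subprocess.CalledProcessError, OSError):
        return 0

print(f"GPUs visible: {gpus_visible()}")
```

Because it degrades gracefully to 0 on a CPU-only node, the same notebook cell runs anywhere, which suits classroom use across heterogeneous Nautilus hardware.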
19. University of Missouri’s Grant Scott
Has Built a Model Program for Data Intensive Computing and ML/AI Using NRP
https://engineering.missouri.edu/2023/engineers-share-expertise-around-nautilus-with-great-plains-network/
20. University of Missouri is Leveraging Jupyter Hubs on Nautilus
for STEM & AI/ML Experiential Learning
Has 16 GPUs On-Site Now -
Adding 44 with a New CC* Award
21. Customized Jupyter Hubs on Nautilus
Support Multiple Courses and Certificates
– Computer Science Undergraduate Classes
– Computer Science HPC Classes
– Undergraduate Data Science
– HPC Emphasis Graduate Data Science
– State & Federal Government Training Programs
22. UCSD Recently Submitted an NSF Proposal
With FAMU and Mizzou’s Grant Scott Driving Outreach
PI Tom DeFanti, UCSD
“Florida A&M University (FAMU) will host equipment,
train CISE campus researchers and students,
and design custom outreach mechanisms
for the Southeast U.S.”
23. NRP Support and Community:
• US National Science Foundation (NSF) awards and subawards to UCSD
– CNS-1456638, CNS-1730158, CNS-2100237, CNS-2120019
– ACI-1540112, ACI-1541349, OAC-1826967, OAC-2029306, OAC-2112167
• DOD DURIP awards to UCSD
• UC Office of the President, Calit2 and Calit2’s UCSD Qualcomm Institute
• San Diego Supercomputer Center and UCSD’s Research IT and Instructional IT
• CENIC, Pacific Wave/PNWGP, StarLight/MREN, The Quilt, Great Plains Network,
NYSERNet, Open Science Grid, Internet2, DOE ESnet, NCAR/UCAR & Wyoming
Supercomputing Center, AWS, Google, Microsoft, Cisco, Juniper, Arista