Information Technology Infrastructure
          Committee (ITIC)

           Report to the NAC

             March 8, 2012

              Larry Smarr
               Chair ITIC
Committee Members



Membership
 Dr. Larry Smarr, Director, California Institute for Telecommunications
  and Information Technology
 Mr. Alan Paller, Research Director, SANS Institute
 Dr. Robert Grossman, Professor, University of Chicago
 Dr. Charles Holmes, Retired, NASA
 Dr. Alexander Szalay, Professor, Johns Hopkins University
 Ms. Karen Harper (Exec. Sec.), IT Workforce Manager, Office of the Chief
  Information Officer, NASA



                    Committee Met March 6-7, 2012:
                     Fact Finding at Johns Hopkins
                          FACA at NASA HQ
Finding #1


♦ To enable new scientific discoveries, in a fiscally constrained
 environment, NASA must develop more productive IT
 infrastructure through “frugal innovation” and “agile
 development”
  •   As easy to use as “Flickr”
  •   Elastic to demand
  •   Continuous improvement
  •   More capacity for fixed investment
  •   Adaptable to changing requirements of multiple missions
  •   Built-in security that doesn’t hinder deployment
NASA is Falling Behind Federal
            and Non-Federal Institutions


            Big Data CI    10G / 100G       GPU Clusters      Hybrid HPC
Non-Fed     Google/MS/     GLIF/I2/         Japan TSUBAME2    China
            Amazon         CENIC            4224 GPUs         #2 Fastest
                                            2.4 PF            5 PF, MC/GPU
NSF         Gordon         GENI             TACC              Blue Waters*
                           Next Gen         512 GPUs          MC/GPU
                           Internet                           12 PF
DOE         Magellan       ANI              ANL               NG Jaguar*
                           ARRA 100Gb       256 GPUs          MC/GPU
                                                              20 PF
NASA        Nebula,        Goddard to       Ames              Pleiades
            Testbed        Ames 10G         136 GPUs          MC
                                            (2 x 64 at Ames   1 PF
                                            & GSFC)

                       * Later in 2012
SMD is a Growing NASA HPC User Community




[Chart: SMD's share of NASA HPC usage over time, with projected continued growth]
        Source: Tsengdar Lee, Mike Little, NASA
Leading Edge is Moving to Hybrid Processors:
Requiring Major Software Innovations




“With Titan’s arrival, fundamental changes to computer architectures
will challenge researchers from every scientific discipline.”

      (Marketwire press release, SC11, Seattle, WA, 11/14/2011)
Partnering Opportunities with NSF:
      SDSC’s Gordon, Dedicated Dec. 5, 2011
♦ Data-Intensive Supercomputer Based on
  SSD Flash Memory and Virtual Shared Memory SW
  • Emphasizes MEM and IOPS over FLOPS
  • Supernode has Virtual Shared Memory:
     − 2 TB RAM Aggregate
     − 8 TB SSD Aggregate
  • Total Machine = 32 Supernodes
     − 4 PB Disk Parallel File System, >100 GB/s I/O
♦ System Designed to Accelerate Access
  to Massive Datasets Being Generated in
  Many Fields of Science, Engineering, Medicine,
  and Social Science

                Source: Mike Norman, Allan Snavely, SDSC
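As a quick sanity check of the figures above, a back-of-the-envelope sketch (in Python; the per-supernode numbers are taken from this slide, the arithmetic is ours):

```python
# Aggregate capacity of Gordon from the per-supernode figures above.
SUPERNODES = 32
RAM_TB_PER_NODE = 2    # virtual shared memory per supernode
SSD_TB_PER_NODE = 8    # flash per supernode

print(f"Aggregate RAM: {SUPERNODES * RAM_TB_PER_NODE} TB")   # 64 TB
print(f"Aggregate SSD: {SUPERNODES * SSD_TB_PER_NODE} TB")   # 256 TB
```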
Gordon Bests the Previous Record for
Mega I/O Operations per Second by 25x
Rapid Evolution of 10GbE Port Prices
  Makes Campus-Scale 10Gbps CI Affordable
    • Port Pricing is Falling
    • Density is Rising – Dramatically
    • Cost of 10GbE Approaching Cluster HPC Interconnects
     2005: Chiaro     $80K/port   (60 ports max)
     2007: Force 10   $5K/port    (40 ports max)
     2009: Arista     $500/port   (48 ports)
     2010: Arista     $400/port   (48 ports); ~$1000/port (300+ ports max)




            Source: Philip Papadopoulos, SDSC/Calit2
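Expressed as cost per Gbps, the price points above show roughly a 200x decline in five years (a minimal sketch; the prices are the ones plotted, the normalization is ours):

```python
# $/Gbps for the 10GbE price points in the chart above.
price_per_port = {2005: 80_000, 2007: 5_000, 2009: 500, 2010: 400}

for year, dollars in price_per_port.items():
    print(f"{year}: ${dollars / 10:,.0f} per Gbps")
# 2005: $8,000/Gbps -> 2010: $40/Gbps, a 200x drop
```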
10G Optical Switch is the Center of a
                Switched Big Data Analysis Resource
[Diagram: UCSD research cyberinfrastructure (OptIPuter, Triton, Trestles
(100 TF), Dash, Gordon, and 1/3 PB of existing commodity storage) linked
at 10 Gbps through a co-located switch, with uplinks to CENIC/NLR]

  • Radical Change Enabled by Arista 7508 10G Switch: 384 10G-Capable Ports
  • Oasis Storage Procurement (RFP): 2000 TB, > 50 GB/s
     − Phase 0: > 8 GB/s Sustained Today
     − Phase I: > 50 GB/s for Lustre (May 2011)
     − Phase II: > 100 GB/s (Feb 2012)
                         Source: Philip Papadopoulos, SDSC/Calit2
The Next Step for Data-Intensive Science:
Pioneering the HPC Cloud
Maturing Cloud Computing Capabilities:
                              Beyond Nebula

♦ Nebula Testing Conclusions
  • Good first step to evaluate needs for science-driven cloud applications
  • AWS performance is better than Nebula’s
  • Nebula cannot achieve the economies of scale of AWS
  • NASA cannot invest sufficient funding to compete with AWS’s ability to move
    up the learning curve
  • JPL is carrying out tests of AWS for mission data analysis
♦ Next Step: a NASA Cloud Test-bed
  •   Evaluate improvements to cloud software stacks for S&E applications
  •   Provide assistance to NASA cloud software developers
  •   Assist appropriate NASA S&E users in migrating to clouds
  •   Operate cloud instances at ARC and GSFC
♦ Limited Funding under HEC Program as a Technology Development
  • Liaison with other NASA centers and external S&E cloud users
  • Partnership between OCIO and SMD


                       Source: Tsengdar Lee, Mike Little, NASA
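To make the migration step concrete, a hypothetical sketch of launching a single analysis instance on AWS EC2 with boto, the Python AWS SDK of this era; the region, AMI ID, and instance type below are placeholders, not NASA resources:

```python
# Hypothetical: start one EC2 instance for a science workload.
# The AMI ID is a placeholder; credentials come from the environment.
import boto.ec2

conn = boto.ec2.connect_to_region("us-west-2")
reservation = conn.run_instances("ami-00000000",      # placeholder image
                                 instance_type="m1.xlarge",
                                 min_count=1, max_count=1)
instance = reservation.instances[0]
print("Launched", instance.id, "state:", instance.state)
```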
Partnering Opportunities with Universities
            Johns Hopkins University DataScope

♦ Private Science Cloud for Sustained Analysis of PB Data Sets
  •   Built for Under $1M
  •   6.5 PB of Storage, 500 GB/s Sequential BW
  •   Disk I/O + SSDs Streaming Data into an Array of GPUs
  •   Connected to StarLight at 100G (May 2012)
♦ Some Form of a Scalable Cloud Solution Inevitable
  • Who will Operate it, What Business Model, What Scale?
  • How does the On/Off Ramp Work?
♦ Science has Different Tradeoffs than eCommerce:
  •   Astronomy,
  •   Space Science,
  •   Turbulence,
  •   Earth Science,
  •   Genomics,
  •   Large HPC Simulations Analysis

                        Source: Alex Szalay, JHU
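The slide's bandwidth figure implies the entire store can be streamed in an afternoon (our arithmetic, using decimal units):

```python
# Time to stream all of DataScope at full sequential bandwidth.
STORE_PB = 6.5
BW_GB_PER_S = 500

seconds = STORE_PB * 1_000_000 / BW_GB_PER_S   # PB -> GB
print(f"Full scan: {seconds:,.0f} s = {seconds / 3600:.1f} hours")
# ~13,000 s, roughly 3.6 hours to touch every byte once
```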
NASA Corporate Wide Area Network Backbone
(Dedicated 10G between ARC/GSFC Supercomputing Centers)

[Map: NASA corporate WAN backbone linking HQ, ARC, GSFC, JPL, DFRC, GRC,
LaRC, MSFC, SSC, JSC, KSC, MAF, WSC, and WSTF over SONET links (OC-192,
OC-48, OC-12, and OC-3), with a dedicated 10 Gbps HEC-only path between
the ARC and GSFC supercomputing centers]
                         Source: Tsengdar Lee, Mike Little, NASA
Partnering Opportunities with DOE:
ARRA Stimulus Investment for DOE ESnet



    National-Scale 100Gbps Network Backbone




    Source: Presentation to ESnet Policy Board
Aims of DOE ESnet 100G National Backbone


♦ To stimulate the market for 100G

♦ To build a pre-production 100G network linking DOE
  supercomputing facilities with international peering points

♦ To build a national-scale test bed for disruptive research

♦ ESnet is interested in Federal partnerships that accelerate
  scientific discovery and leverage our particular capabilities




              Source: Presentation to ESnet Policy Board
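For scale, a worked example of what the jump from 10G to 100G means for bulk science data movement (idealized line rate, no protocol overhead; the arithmetic is ours):

```python
# Time to move 1 PB across a 10 Gbps vs. a 100 Gbps backbone.
PETABYTE_BITS = 1e15 * 8

for gbps in (10, 100):
    hours = PETABYTE_BITS / (gbps * 1e9) / 3600
    print(f"{gbps:>3} Gbps: {hours:,.0f} hours per PB")
# 10 Gbps: ~222 hours; 100 Gbps: ~22 hours
```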
Global Partnering Opportunities:
 The Global Lambda Integrated Facility
Research Innovation Labs Linked by 10Gbps Dedicated Networks




          www.glif.is/publications/maps/GLIF_5-11_World_2k.jpg
NAC Committee on IT Infrastructure
    Recommendation #1

♦ Recommendation: To enable NASA to gain experience on
  emerging leading-edge IT technologies such as:
      − Data-Intensive Cyberinfrastructure,
      − 100 Gbps Networking,
      − GPU Clusters, and
      − Hybrid HPC Architectures,
   we recommend that NASA aggressively pursue partnerships
   with other Federal agencies, specifically NSF and DOE, as well as
   public/private opportunities.
   We believe joint agency program calls for end users to develop
   innovative applications will help keep NASA at the leading edge of
   capabilities and enable training of NASA staff to support NASA
   researchers as these technologies become mainstream.
NAC Committee on IT Infrastructure
    Recommendation #1 (Continued)

♦ Major Reasons for the Recommendation: NASA has fallen behind
  the leading edge, compared to other Federal agencies and
  international centers, in key emerging information and networking
  technologies. In a budget-constrained fiscal environment, it is
  unlikely that NASA will be able to catch up by internal efforts.
  Partnering, as was historically done in HPCC, seems an attractive
  option.

♦ Consequences of No Action on the Recommendation: Within a
  few more years, the gap between NASA’s internally driven efforts and
  the U.S. and global best-of-breed will become too large to
  bridge. This will severely undercut NASA’s ability to make progress
  in a number of critical application arenas.
Finding #2


♦ SMD Data Resides on Highly Distributed Servers
  • Many Data Storage and Analysis Sites Are Outside NASA Centers
  • Access for the Entire Research Community Is Essential
     − Over Half of Science Publications Come From Using Data Archives
     − Secondary Storage Needed in Cloud with High Bandwidth and User Portal
  • Education and Public Outreach Uses of Data Are Rapidly Expanding
     − Images for Public Relations
     − Apps for Smart Phones
     − Crowdsourcing

[Diagram: data flowing from the PI outward to the research community,
education, and public outreach]
Majority of Hubble Space Telescope
   Scientific Publications Come From Data Archives
In 2011 there were over 1060 papers written using data archived at MAST.

[Chart: papers published annually from 1995 to 2010, categorized as Not
Archival, Archival, Partially Archival, and Unassigned (y-axis 0 to 900
papers per year); archival use accounts for the majority in recent years]
Multi-Mission Data Archives at STScI
           Will Continue to Grow - Doubling by 2018
                Cumulative Petabyte Over 20 Years

[Chart: cumulative archive volume in Terabytes, 1994-2018, stacked by
JWST, JWST S&IT, and Other, rising from near zero to roughly 1000 TB,
with the years beyond 2012 projected]
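Doubling by 2018 corresponds to a modest but sustained compound growth rate (our arithmetic from the slide's projection):

```python
# Implied annual growth if the archive doubles from 2012 to 2018.
years = 2018 - 2012
rate = 2 ** (1 / years) - 1
print(f"Doubling in {years} years = {rate:.1%} per year")   # ~12.2%
```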
Solar Dynamics Observatory
           4096x4096 AIA Camera – 57,600 Images/Day
                JSOC is Archiving ~5TB/day From 6 Cameras
                    Leads to over 1 Petabyte per year!




 March 6, 2012 X5.4 Flare from
 Sunspot AR1429 Captured by
the Solar Dynamics Observatory
             (SDO)
in the 171 Angstrom Wavelength

    Credit: NASA/SDO/AIA
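The slide's figures are mutually consistent, as a quick check shows (our arithmetic):

```python
# SDO/JSOC archive rate, end to end.
TB_PER_DAY = 5
print(f"{TB_PER_DAY} TB/day * 365 = {TB_PER_DAY * 365 / 1000:.2f} PB/year")
# ~1.8 PB/year, consistent with "over 1 Petabyte per year"

AIA_IMAGES_PER_DAY = 57_600
print(f"{86_400 / AIA_IMAGES_PER_DAY:.1f} s between AIA images")
# one 4096x4096 image every 1.5 seconds
```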
Public Outreach Through Multiple Modes
            Simultaneous with Data to Research Scientists




  3D Sun App: Tens of Thousands of Installs
     on Android in the Last 30 Days
NASA Space Images Are Widely Viewed by Public




http://news.nationalgeographic.com/news/2012/03/120306-dark-matter-galaxies-mystery-space-
32 of the 200+ Apps in the Apple App Store that
     Return from a Search on “NASA”
Crowdsourcing Science: Galaxy Zoo and Moon
      Zoo Bring the Public into Scientific Discovery




     More than 250,000 people have taken part in Galaxy Zoo so far.
In the 14 months the site was up, Galaxy Zoo 2 users made over
60,000,000 classifications. Over the past year, volunteers from the original
Galaxy Zoo project created the world's largest database of galaxy shapes.
                             www.galaxyzoo.org
NASA Earth Sciences Images Are Brought
 Together by Earth Observatory Web Site




       http://earthobservatory.nasa.gov/
NASA Earth Observatory Integrates Global Variables:
              Time Evolution Over Last Decade




http://earthobservatory.nasa.gov
EOS-DIS Data Products Distribution
Approaching ½ Billion/Year!
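Expressed as a rate (our arithmetic), half a billion products a year is a continuous stream:

```python
# EOSDIS distribution rate: 0.5 billion products/year as products/second.
per_second = 0.5e9 / (365 * 86_400)
print(f"~{per_second:.0f} data products distributed every second")   # ~16/s
```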
      The Virtual Observatory


♦ The VO is foremost a data discovery, access, and integration
  facility
♦ International collaboration on metadata standards, data models,
  and protocols
  • Image, spectrum, time series data
  • Catalogs, databases
  • Transient event notices
  • Software and services
  • Distributed computing (authentication, authorization,
    process management)
  • Application inter-communication
♦ International Virtual Observatory Alliance established in 2001,
  patterned on the World Wide Web Consortium (W3C)




                              Robert Hanisch, NAC ITIC @ JHU               5 March 2012




      SAMP (Simple Application Messaging Protocol)




Robert Hanisch, NAC ITIC @ JHU   5 March 2012
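For readers unfamiliar with SAMP, a minimal sketch of the application inter-communication it enables, using the SAMPIntegratedClient from astropy.samp (a descendant of the original SAMPy package); the table URL is a placeholder, and a SAMP hub must already be running:

```python
# Broadcast a VOTable to any listening VO tools (e.g. TOPCAT, Aladin).
# The URL below is a placeholder, not a real catalog.
from astropy.samp import SAMPIntegratedClient

client = SAMPIntegratedClient()
client.connect()                      # requires a running SAMP hub
client.notify_all({
    "samp.mtype": "table.load.votable",
    "samp.params": {"url": "http://example.org/catalog.xml",
                    "name": "example catalog"},
})
client.disconnect()
```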
NSF’s Ocean Observatory Initiative Cyberinfrastructure
Supports Science, Education, and Public Outreach




    Source: Matthew Arrott, Calit2 Program Manager for OOI CI
OOI CI is Built on Dedicated Optical Networks
         and Federal Agency & Commercial Clouds
  Source: John Orcutt,
Matthew Arrott, SIO/Calit2
NAC Committee on IT Infrastructure
    DRAFT* Recommendation #2

♦ Recommendation: NASA should formally review the existing
  national data cyberinfrastructure supporting access to data
  repositories for NASA SMD missions. A comparison with best-of-
  breed practices within NASA and at other Federal agencies should
  be made.
♦ We request a briefing on this review at a joint meeting of the NAC IT
  Infrastructure, Science, and Education committees within one year of
  this recommendation. The briefing should contain recommendations
  for a NASA data-intensive cyberinfrastructure that supports science
  discovery by both mission teams and remote researchers, as well as
  education and public outreach, scaled to the growth driven by
  current and future SMD missions.

   * To be completed after a joint meeting of the ITIC, Science,
     and Education Committees in July 2012, with the final
     recommendation submitted to the July 2012 NAC meeting
NAC Committee on IT Infrastructure
    Recommendation #2 (continued)

♦ Major Reasons for the Recommendation: NASA data repository
  and analysis facilities for SMD missions are distributed across NASA
  centers and throughout U.S. universities and research facilities.
   • There is considerable variation in the sophistication of the integrated
     cyberinfrastructure supporting scientific discovery, educational reuse, and
     public outreach across SMD subdivisions.
   • The rapid rise in the last decade of “mining data archives” by groups other
     than those funded by specific missions implies a need for a national-scale
     cyberinfrastructure architecture that can allow for free-flow of data to
     where it is needed.
   • Other agencies, specifically NSF’s Ocean Observatories Initiative
     Cyberinfrastructure program, should be used as a benchmark for NASA’s
     data-intensive architecture.
♦ Consequences of No Action on the Recommendation: The
  science, education, and public outreach potential of NASA’s
  investment in SMD space missions will not be realized.


Editor's Notes

  1. NCCS is only for SMD. NAS shared Pleiades and Columbia is now 52% for SMD and 48% for other MDs; earlier, Pleiades, Columbia, and Schirra were 50% for SMD and 50% for other MDs since 12/09, and 40%/60% prior. NAS Columbia had adjustable allocations for the SCIENCE and SEAS domains; a 512-CPU section was re-allocated from SEAS to SCIENCE early in April 2009. NAS began retiring the Y2004 racks of Columbia from late April through early June 2010; 2048 of the remaining CPUs are ESMD-only, and the remaining 2560 CPUs are open to all users with nominal shares of 52% for SMD and 48% for other MDs. RT Jones was removed at the beginning of April 2010 and Schirra was decommissioned in late July 2010.
  2. Discussed with SMD management in November 2012
  3. Foundation for fifth-generation architecture. Change economies and scaling properties. [no more on this slide]
  4. Note that the citations plot only includes papers published over the past 3 years, which have not reached their ultimate citation impact yet. If we look at all years: HST (36 cites) and IUE (34) both have > 30 citations/paper; FUSE 23, GALEX 20 (but it is a younger mission), Kepler 12, Other 34. Probably GALEX and Kepler will ultimately have >30 cites/paper on average. Note that library staff did some research to try to determine the reason for the dip in the 2008/2009 time frame, and many observatories noticed the dip.
  5. Projected growth in GALEX is from the addition of photon data. The tick marks in the plot above mark the end of the calendar years (which is why the projected line appears in the middle of 2010).
  6. From http://earthdata.nasa.gov/about-eosdis/performance