Common Vulnerability Scoring System



          Christian Heinrich
          ASIA RMSIG
          July 2007
cmlh

Currently Security Researcher
    – Defeating Network Intrusion Detection/Prevention and Forensics
    – Presented at RUXCON 2K5 and RUXCON 2K6



Former Security Manager
    – News Limited
    – DSD Gateway Certified Service Provider
    – Federal Government Endorsed Business




Public Profile on LinkedIn - http://www.linkedin.com/in/ChristianHeinrich
Agenda


1. History from the VDF to CVSS v2
2. CVSS v2 from the End User’s Perspective
3. Caveats, Politics and other Traps :)
Vulnerability Disclosure Framework

National Infrastructure Advisory Council (NIAC)
   Vulnerability Disclosure Working Group (VDWG) – 13 Jan 2004


Findings on Existing Methodologies from Microsoft, CERT, etc
   – Specific to Vendor x Product y not Vendor z Product y
    – No consideration given to
       • Environment of the End User
       • Timeline of the Vulnerability
CVSS to CVSS v2
12 October 2004 - Vulnerability Scoring Working Sub Group of VDWG


February 2005 - Presented at RSA by Mike Schiffman (Cisco)


11 May 2005
- NIAC appointed the Forum of Incident Response and Security Teams (FIRST)
- FIRST formed Special Interest Group (CVSS-SIG)


20 June 2007 – CVSS v2
CVSS v2
Base Metrics
Intrinsic to any given vulnerability and do not change over time or in different environments




1. Access from Local Console or Remote Network via Bluetooth -> Internet
2. “Technical” Likelihood
3. Authentication


“Technical” Impact to 4. Confidentiality, 5. Integrity and 6. Availability
Temporal Metrics
Characteristics of the vulnerability which evolve over the lifetime of the vulnerability




1. Maturity of the Exploit, e.g. Proof of Concept, Worm, etc.?
2. Is a Patch and/or Workaround Available?
3. Confidence in the Report?
Environmental Metrics
Those characteristics of a vulnerability which are tied to a specific implementation of the end user




1. Potential Collateral Damage to Critical Infrastructure?
2. Total number of Targets?


“Business” Impact to 3. Confidentiality, 4. Integrity and 5. Availability
Scoring
Calculators published via the “Scores and Calculators” Page at http://www.first.org/cvss


Presentation of Base Metrics
    AV:[L,A,N]/AC:[H,M,L]/Au:[M,S,N]/C:[N,P,C]/I:[N,P,C]/A:[N,P,C]



Presentation of Temporal Metrics
    E:[U,POC,F,H,ND]/RL:[OF,TF,W,U,ND]/RC:[UC,UR,C,ND]


Presentation of Environmental Metrics
    CDP:[N,L,LM,MH,H,ND]/TD:[N,L,M,H,ND]/CR:[L,M,H,ND]/IR:[L,M,H,ND]/AR:[L,M,H,ND]


Presentation of Base Metrics Example:
    AV:L/AC:M/Au:N/C:N/I:P/A:C
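
The vector notation above maps directly onto the published v2 equations. Below is a minimal, illustrative Python sketch (not part of the original deck) that parses a base vector and computes its Base and Temporal Scores; the weight tables are the lookup values published in the CVSS v2 specification, and the function names are my own for illustration:

    # Illustrative sketch of the CVSS v2 Base and Temporal equations.
    # Numeric weights are the lookup tables published in the CVSS v2 specification.
    AV  = {"L": 0.395, "A": 0.646, "N": 1.0}      # Access Vector
    AC  = {"H": 0.35,  "M": 0.61,  "L": 0.71}     # Access Complexity
    AU  = {"M": 0.45,  "S": 0.56,  "N": 0.704}    # Authentication
    CIA = {"N": 0.0,   "P": 0.275, "C": 0.660}    # Confidentiality/Integrity/Availability Impact
    E   = {"U": 0.85, "POC": 0.9, "F": 0.95, "H": 1.0, "ND": 1.0}    # Exploitability (maturity)
    RL  = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0, "ND": 1.0}   # Remediation Level
    RC  = {"UC": 0.90, "UR": 0.95, "C": 1.0, "ND": 1.0}              # Report Confidence

    def base_score(vector):
        # e.g. vector = "AV:L/AC:M/Au:N/C:N/I:P/A:C"
        m = dict(part.split(":") for part in vector.split("/"))
        impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
        exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
        f = 0.0 if impact == 0 else 1.176
        return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

    def temporal_score(base, vector):
        # e.g. vector = "E:POC/RL:OF/RC:UC"
        m = dict(part.split(":") for part in vector.split("/"))
        return round(base * E[m["E"]] * RL[m["RL"]] * RC[m["RC"]], 1)

    base = base_score("AV:L/AC:M/Au:N/C:N/I:P/A:C")   # the example vector above, roughly 5.4
    print(base)
    print(temporal_score(base, "E:POC/RL:OF/RC:UC"))  # lower, as remediation and doubt reduce the score
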
Caveats, Politics and other Traps :)
Base Metrics

Vendor’s “subjective” interpretation of Base Metrics
     “Independent” scores also published by the NIST National Vulnerability Database (NVD)

Vendor publishes Base Score but withholds Base Metrics
     Derive Possible Base Metrics from Base Score with Fuzzer

Access Vector – Scored as the Metric with the Highest Numerical Value, not the most common
               Some attacks, e.g. XSS, only consider the Web Server, not the Browser
Authentication – Can be “reduced” due to certain implementations, e.g. Token, S/KEY


Considerations for the End User’s Environment
    –    Probability of Deriving the Authentication Credential
    –    Range of the Wireless Network? What if a High Gain Antenna? What if a Faraday Cage?
Caveats, Politics and other Traps :)
Temporal Metrics
“Will this affect my network range?” – No feed, real-time or otherwise, is provided
Doesn’t Consider the reduction in exploit development time due to “Binary Diff” and/or “Fuzzing”


Environmental Metrics
Target Distribution - Map “Connectivity” with Active and Passive Discovery
Doesn’t Consider:
      - Cost to Implement Patch and/or Workaround
      - Technical Knowledge Required for Attack Complexity
Caveats, Politics and other Traps :)
Scoring


Developing “Fuzzer” to Derive All Scores by Calculating All Numerical Values
          Rounding to “Reduce” the Score
          Substitution – Different Metrics Yet the Same Score
          Derive Possible Metrics from a Score
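
By way of illustration (a sketch under my own assumptions, not the presenter’s actual tool), such a “fuzzer” can brute-force the 3^6 = 729 possible base metric combinations, score each with the published v2 equations, and report every vector that rounds to a given published Base Score, which exposes both the rounding and substitution effects noted above; the weight tables and function names are carried over from the earlier scoring sketch:

    # Enumerate every CVSS v2 base vector whose score rounds to a given
    # published value ("derive possible metrics from score").
    from itertools import product

    AV  = {"L": 0.395, "A": 0.646, "N": 1.0}
    AC  = {"H": 0.35,  "M": 0.61,  "L": 0.71}
    AU  = {"M": 0.45,  "S": 0.56,  "N": 0.704}
    CIA = {"N": 0.0,   "P": 0.275, "C": 0.660}

    def score(av, ac, au, c, i, a):
        impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
        exploitability = 20 * AV[av] * AC[ac] * AU[au]
        f = 0.0 if impact == 0 else 1.176
        return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

    def candidates(published_score):
        # 729 combinations in total; several typically collapse onto the same score.
        for av, ac, au, c, i, a in product(AV, AC, AU, CIA, CIA, CIA):
            if score(av, ac, au, c, i, a) == published_score:
                yield "AV:%s/AC:%s/Au:%s/C:%s/I:%s/A:%s" % (av, ac, au, c, i, a)

    for vector in candidates(5.4):
        print(vector)
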


Based on CVSS v1 Fuzzer


Expect an Announcement from Jeff Jones (Microsoft)


Come to the Security Interchange meeting later this year
Caveats, Politics and other Traps :)
Lack of Representation:
    –    No invitation to End Users and little involvement from Security Researchers (e.g. Schiffman)
    –    No lessons learnt by CERT


The Horse has Bolted – First Impressions Last:
    –    Optional Scores
    –    Resistance from Initial Supporters such as Microsoft
    –    CVE still in the process of reclassifying vulnerabilities to the updated schema



Advocate CVSS to your Vendor as it provides YOU with Advantages in removing Subjectivity from:
    –    Prioritising Remediation regardless of Vendor and/or Product and/or Technology
    –    Objective Vulnerability Distribution Studies
Thanks

John Greaves
David Palmer & Westpac
Chris Wood & Patchlink
David Reinhold
John Dale
John Frisken


Editor's Notes

  1. AISO Web Server
  2. Accusations of Subjectivity Due to Lack of a Consistent Score.

     Vulnerability Disclosure Framework, P8, Scoring: To protect the nation’s critical information infrastructure, the Council believes reliable, consistent vulnerability scoring methods are essential. The Study Group evaluated alternative procedures actively employed by several stakeholders to categorize reported vulnerabilities. Existing vulnerability scoring methods vary widely. To protect the nation’s critical information infrastructure, the Working Group concluded that reliable, consistent vulnerability scoring methods are essential. Unfortunately, the existing diversity in the methods used to identify vulnerabilities and assign scoring metrics presents a contradictory risk—disagreements provide malicious actors increased time to exploit the vulnerability or increase the damages resulting from existing exploitative situations. Therefore, the NIAC commissioned a research task to develop a consistent scoring methodology. The results of the Scoring Subgroup’s work will be published separately when complete.

     P37: Support development and use of a universally compatible vulnerability scoring methodology. When complete, such a scoring method should:
       • Employ standardized threat scoring classification schemes structured around accepted criteria by which to assess and evaluate vulnerabilities. The goal of standardized threat scoring is to promote understanding by a range of private and public sector researchers regarding reported vulnerabilities.
       • Allow for local variations, depending on impact, environment, culture, and roles of those developing scores.
       • Permit ongoing adjustment of an assigned score or set of scores in order to reflect research results or the impact of confirmed exploitations or remediation efforts.
       • Incorporate procedures for independent validation of the suitability of any score or set of scores assigned to a vulnerability, along with a means for improper results to be adjusted in a neutral manner.
  3. Recommendations: Support use of CVSS by all Federal Departments and Agencies by calculating Environmental Metrics. Encourage DHS to promote the use of CVSS to the global community, including critical infrastructure owners and outside of the USA. NIAC was asked to identify an organization to function as the permanent home for CVSS; NIAC appointed FIRST on 11 May 2005 – Significant Technical Expertise, Experience in Managing Vulnerabilities, Maintains a Global Focus. The revision was renamed from CVSS v2 to CVSS v1.1 and then again to CVSS v2.
  4. Base Metrics – “Intrinsic to any given vulnerability and do not change over time or in different environments” – Six Metrics Scored by Vendor. Temporal Metrics – “Characteristics of the vulnerability which evolve over the lifetime of the vulnerability” – Three Metrics Scored by Vendor and/or FIRST Member. Environmental Metrics – “Those characteristics of a vulnerability which are tied to a specific implementation of the end user” – Five Metrics Scored by End User.
  5. Six Metrics Scored by Vendor. Changes from CVSS v1: Authentication includes multiple use of the same credentials; Access Vector – Bluetooth, 802.11 Wireless, etc.
  6. Three Metrics Scored by Vendor and/or FIRST Member
  7. Six Metrics Scored by Vendor. Changes from CVSS v1: Authentication includes multiple use of the same credentials; Access Vector – Bluetooth, 802.11 Wireless, etc.
  8. Reduced Privileges of Running Process
  9. Binary Diff and Fuzzing weren’t considered by the Vulnerability Disclosure Framework either!
  10. Jeff Jones Compiles Vulnerability Statistics for Microsoft.
  11. Vulnerability Disclosure Framework – Equal Involvement from All Parties
  12. Vulnerability Disclosure Framework – Equal Involvement from All Parties