Academics warn over "problematic metrics and documentation" in CVSS system
A study out of Germany has highlighted shortcomings in the CVSS system and the way security vulnerabilities are assessed and scored
A recent study out of Germany has confirmed what many security professionals have known for years: the CVSS system needs an overhaul.
Known formally as the Common Vulnerability Scoring System, CVSS is the industry standard for rating the risk level of software bugs and security holes. Vulnerabilities are gauged on a number of factors and then issued a risk level from 0-10.
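The numeric score maps onto qualitative severity bands that practitioners quote in headlines and patch tickets. As a minimal sketch, the following uses the band boundaries defined in the CVSS v3.x specification (the function name is our own, for illustration):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS base score (0.0-10.0) to the qualitative
    severity bands from the CVSS v3.x specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Under this scheme, a score of 9.0 lands in "Critical" while 6.5 is only "Medium", which is why small differences in how assessors judge the underlying factors can push a bug across a band boundary.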
Unfortunately, researchers in Germany have found that the assessment process for those risk factors is highly subjective and does not map cleanly onto real-world risk.
Moreover, the team from Friedrich-Alexander-Universität Erlangen-Nürnberg, Heilbronn University of Applied Sciences, and Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau all concluded that security professionals do not fully believe in the system either.
"These inconsistencies in CVSS scores can lead to inaccurate resource allocation," the researchers wrote.
"This could result in critical vulnerabilities being sidelined while less severe ones receive undue attention."
Polling some 250 security professionals who both assign CVSS scores and base policy on them, the study found a disconnect between how vulnerabilities were scored and the actual risk they posed.
To be fair, and as the team notes, vulnerability assessment is very subjective and can vary from one network to another.
However, many IT professionals will take CVSS scores into account when deciding patch priority. A CVSS score of 9 will almost certainly be patched before one with a score of 6.5.
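That prioritisation logic is often little more than a sort on the score, which is exactly why inconsistent scoring translates directly into misallocated patching effort. A hypothetical backlog (the CVE identifiers and scores below are invented for the example) makes the point:

```python
# Hypothetical vulnerability backlog: (identifier, CVSS base score).
backlog = [
    ("CVE-A", 6.5),
    ("CVE-B", 9.0),
    ("CVE-C", 4.2),
]

# Sort descending by score: the 9.0 issue gets patched first,
# regardless of whether that score reflects its true risk here.
patch_order = sorted(backlog, key=lambda v: v[1], reverse=True)
```

If an assessor had rated "CVE-B" a 6.0 instead, it would drop to the middle of the queue, illustrating how subjective scoring decisions propagate into patch scheduling.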
The researchers found that for a number of popular attack methods, including man-in-the-middle and cross-site scripting attacks, severity ratings varied between assessors.
Additionally, the researchers found that any one evaluator's opinion on the severity of an issue can change over time, which in turn affects the CVSS score a given issue receives.
Ultimately, however, the team concluded that the problem lies not with the people evaluating bugs and issuing CVSS scores, but with ambiguities in the assessment process itself.
"It seems that inconsistency is more closely related to the properties of CVSS, such as problematic metrics and documentation, than to the personal factors that we investigated," the team noted.
The researchers did, however, come away with a positive view of the CVSS process overall, and noted that the system can remain useful in future if some changes are made.
CVSS 4.0 does aim to address some criticisms of the system.
Out for public preview since June as part of the biggest overhaul of the CVSS system in seven years, CVSS 4.0 makes a significant effort to improve how risk is calculated, amid concerns that CVSS Base Scores are being used by many organisations to (inadequately) assess risk. It also adds new guidance for Operational Technology (OT) and a wide range of supplemental metrics including “Vulnerability Response Effort” – ranked as “low, moderate, high.”
New metrics for Operational Technology exposure include whether the "consequences of the vulnerability meet definition of IEC 61508 consequence categories of 'marginal,' 'critical,' or 'catastrophic.'"
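Supplemental metrics such as these are appended to the standard CVSS 4.0 vector string. For illustration only, a vector with the Vulnerability Response Effort metric attached might look like the following (the metric values are invented for the example, and the `RE` abbreviation is our reading of the 4.0 draft, not confirmed by the article):

```
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/RE:L
```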