

© 2005 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

 

 

 

Handling and Reporting Security Advisories:
A Scorecard Approach

 

Dimitrios Lekkas (a) and Diomidis Spinellis (b)

 

(a) Department of Product and Systems Design Engineering,
University of the Aegean, Syros Island 84100, Greece
tel: +30 2281097100, fax: +30 2281097109, e-mail: dlek@aegean.gr

 

(b) Department of Management Science and Technology,
Athens University of Economics & Business
76 Patission St., Athens GR-10434, Greece, e-mail: dds@aueb.gr

Abstract

Security advisories aim to inform the community about new security vulnerabilities, their possible impact and candidate solutions. Various product vendors and independent response centers have completely different views on whether an advisory should be published, on what to publish and on how the included information should be organized. We present a scorecard, an exhaustive categorized list of security-relevant metrics, that helps an interested party assess a security advisory completely and efficiently by following a clearly defined sequence of conditional actions. Scorecard users should thus neither neglect a potential security threat nor overreact to it. Based on our scorecard approach and the goal-question-metric technique, we also provide prescriptive guidelines for collecting and disseminating system-specific information related to a specific vulnerability. This paradigm is suitable not only for end-users but also for security response centers, as an organized, stable and efficient scheme for publishing security advisories.

Categories and Subject Descriptors: D.4.6 Security and Privacy Protection, K.6.5 Security and Protection

Keywords: Vulnerability disclosure, Metrics, Applicability, Impact, Exploitation, Countermeasures

 

Introduction

Bulletins of security advisories typically include a description of a vulnerability, its possible impact on specific targets and candidate solutions. They are published by Security Response Centers, operating as independent organizations or as specialized departments of software and hardware vendors, to help the interested community operate their systems and networks in a secure manner. The life-cycle of a security advisory starts from the ‘vulnerability disclosure’, i.e. the discovery of a security problem through user reports or as a result of research and product evolution. The product vendor decides on the necessary workaround, builds patches and fixes and publishes a detailed ‘security advisory’. At the same time the advisory may also appear in other vendor-independent fora, such as the reports of various Computer Emergency Response Teams [1] and the Common Vulnerabilities and Exposures (CVE) dictionary [2]. Various revisions of the advisory may be published during its life-cycle, while the vendors release relevant patches and workarounds or, at a later stage, incorporate a solution into a major product release. A security advisory remains of interest to the community during the life-cycle of the relevant vulnerability, until the number of systems that can be exploited through it shrinks to insignificance [3].

A security advisory is usually judged by security experts and assigned a severity rating before it is disclosed to the public. However, a basic severity scale, such as Critical, Moderate or Low risk, or even a comparable numeric value, is insufficient for an end-user or a system administrator to assess the risk [4], since the vulnerability’s impact will typically depend on specific product versions, system use profiles, configurations, hardware platforms, functional conditions and local policies.

In fact, it is quite possible that an eventual victim will overlook or have difficulty identifying a vulnerability disclosure that represents a significant risk or, conversely, will invest significant resources to face a vulnerability that does not introduce a real threat. The continuously increasing information overload regarding vulnerabilities and exploits further worsens the situation and is an important threat by itself. According to industry experience, attacks that impact systems rarely result from attackers’ exploitation of previously unknown vulnerabilities. Rather, as in the cases of the Nimda worm and the recent Windows RPC buffer overrun, attacks typically exploit vulnerabilities for which solutions have long been available, but not applied. It is in fact reported that over 90% of security exploits are carried out through vulnerabilities for which there are known patches [5].

The objective of this paper is twofold: 1) to present our vulnerability scorecard as a practical guide that helps end-users and system administrators efficiently manage and quickly assess the impact of a vulnerability disclosure, and 2) to provide prescriptive guidelines, based on a goal-question-metric approach, as a tool for end-users to record useful information and for security response centers to publish advisories in a way that helps their community respond to them efficiently.

 

The (Un)readability of Security Bulletins

A quick survey of various security bulletin boards showed that each has a completely different view on what to publish, on what information to include and on how the data should be organized. As an indication of the volume of published advisories, we recorded comparable yearly averages for specific vendors over the last three years: 45 for Cisco, 72 for Microsoft and 44 for FreeBSD. For the non vendor-specific informational postings we recorded, respectively, 37 advisories for CERT, 734 for AusCERT, 56 for Symantec and 1568 for CVE. The unexpectedly large differences between these numbers indicate that there is no clear rule on what is considered a security advisory and that there is confusion between the terms ‘vulnerability disclosure’ and ‘security advisory’, which we described in the introduction of this article.

As a typical example, a first look into the following recent, apparently clear security advisories gave us a misleading impression of their effect on our systems:

The security advisory SA-03:14.arp published by FreeBSD affects all releases of FreeBSD systems, independently of configuration, services running and software installed. According to the advisory, it is possible for an attacker to cause the system to hang by flooding it with ARP requests, which seemed to be an urgent situation. However, after reading through the three pages of the advisory, it became obvious that there was no real concern for our FreeBSD system at home, since:

a)      It describes a temporary denial-of-service attack and does not involve arbitrary code execution or data modification, which are the real concerns for a personal system.

b)     The attack may only originate from the local network and not from the Internet; our system only communicates with the Internet through a dial-up line and a packet filtering router.

In another case, Microsoft published the MS02-030 security bulletin, applicable to systems running SQL Server 2000. The severity of the vulnerability was rated as ‘moderate’ by the vendor, which would not indicate an urgent threat to system administrators. However, after a careful study of the seven-page advisory we concluded that our web server, which was configured to accept ad hoc URL queries against a database frequently used by the academic community, was in real danger for two reasons:

a)      An option called SQLXML, which is not enabled by default, was in fact enabled and could allow an attacker to run arbitrary code by injecting scripts through XML tags.

b)     The subject system was providing critical web-based applications to hundreds of interested parties.

It is obvious that a clear decision on the applicability and the severity of security advisories can hardly be reached through a superficial study of them. A first conclusion on the impact and on some applicability factors may be drawn by detecting specific keywords within the summary of an advisory. We performed a rough keyword classification of the collection of about 2600 advisories in the Common Vulnerabilities and Exposures dictionary [2], as shown in Figure 1. However, there are other important factors, including but not limited to the impact on the community that uses the system, the exploitation preconditions, the solution requirements and the solution impact, which are not obvious.

 

 

 

Figure 1. CVE keyword classification
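For illustration, the kind of rough keyword classification mentioned above can be sketched in a few lines of Python; the keyword list and the sample summaries below are our own illustrative assumptions, not the actual CVE data set.

    # Minimal sketch of a keyword tally over CVE summary texts.
    # The keyword list and the sample summaries are illustrative only.
    from collections import Counter

    KEYWORDS = [
        "denial of service", "buffer overflow", "execute arbitrary",
        "gain privileges", "bypass", "cross-site scripting",
    ]

    def classify(summaries):
        """Count how many summaries mention each keyword (case-insensitive)."""
        counts = Counter()
        for text in summaries:
            lowered = text.lower()
            for kw in KEYWORDS:
                if kw in lowered:
                    counts[kw] += 1
        return counts

    if __name__ == "__main__":
        samples = [
            "Buffer overflow in example daemon allows remote attackers to execute arbitrary code.",
            "Malformed packet allows remote attackers to cause a denial of service.",
        ]
        for kw, n in classify(samples).most_common():
            print(f"{kw}: {n}")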

 

The Metrics-based Scorecard

What is really missing from the bulletin boards of the vendors, and even from the organizations that focus exclusively on handling emergency incidents, is a practical guide on how one may read, evaluate and handle a security advisory. The advisories are addressed to a heterogeneous community of system and network administrators and end-users, operating public servers, critical infrastructures, personal computers, network devices and other special-purpose systems. Any attempt by the vendor to draw conclusions on the applicability and the severity of the security advisory would mislead an important portion of the community.

A practical solution to this problem is the definition of a series of metrics (the scorecard) that the interested party has to evaluate for a specific security advisory, in order to conclude on the risk faced by a specific system. The exhaustive scorecard presented in Table 1 contains nine major categories of metrics, presented in a logical sequence, giving a complete picture of the vulnerability and the relevant risk. An interested party has to give values (answers) to the metrics in the order presented and will, in most cases, arrive at a conclusion before traversing the whole list.

The identified categories are mutually exclusive and the items presented cover a wide spectrum of security-related attributes, although they may be further refined. Items 1 to 6 refer to the assessment of the applicability and the impact of the advisory, while items 7 to 9 refer to the implementation of the proposed solution. More specifically, the target of a vulnerability is classified into logical and physical items, following S. Gritzalis [6]. The items in the exploitation impact category refine various existing listings, which either distinguish between hardware and software impact, classify according to the effects on availability, integrity and confidentiality, or adopt a generic classification into misuse, exposure and denial-of-service [7, 8]. The metrics of the community impact are based on a risk management analysis by S. Katsikas [9] and, finally, the solution requirements are partially based on relevant taxonomies [10, 11].

 

 

 

 

 

Assessment phase

1. Target

- Logical:
  - Account
  - Process
  - Data
- Physical:
  - System infrastructure
  - Network (local range)
  - Internet (wide range)

Logical targets refer to informational and processing resources. Physical targets may refer to hardware, to local area network infrastructure or, in an extreme case, to the entire Internet infrastructure.

2. Applicability – Scope

- Hardware architecture and platform
- Version of installed firmware
- Operating system and version
- Software installed
- Enabled feature
- Configuration parameter
- Peripherals and hardware-specific software

The applicability of a security advisory depends on hardware type, OS, software installed and various configuration settings. It is usually clearly indicated in the text of the advisory.

3. Exploitation Preconditions

- Internet remotely exploitable
- Intranet remotely exploitable
- LAN shared-medium exploitable
- LAN switched-medium exploitable
- Registered-user exploitable
- Requires physical access

The exploitation of a vulnerability is usually performed remotely, either independently of location or only within specific logical or physical limits, such as an Intranet logical area, a LAN or a switched LAN segment. In other cases the exploitation may succeed only for normally registered users or through physical access.

4. Organizational Factors

- Comprehensiveness and completeness of advisory description
- Existence of an incident response team
- Existence of prior risk analysis

These factors may considerably mitigate the impact of a vulnerability, by providing the means for better information dissemination and response procedures.

5. Exploitation Impact (Damage)

- Availability disruption (denial of service)
- System or data integrity violation and loss of data
- Data disclosure and confidentiality breach
- Privilege elevation
- Stolen credentials
- Code/script execution
- Bypass of intended controls
- Misuse of resources
- Violation of the system’s security policy
- Affecting neighboring systems (spreading)
- Erroneous transmission
- Physical damage

The first five items refer to the basic security properties, i.e. the availability, integrity and confidentiality of the information and the infrastructure. Exploitation may also result in unauthorized action and system misuse, such as code execution and the bypass of authentication and authorization controls. In other cases exploitation may cause spreading to neighboring systems, erroneous transmission (e.g. network disruption, traffic redirection, out-of-sequence transmission) or physical damage.

6. Community Impact

- Financial loss – labor time loss
- Loss of trust
- Personal abuse, defamation and humiliation
- Unauthorized gain of political authority and status
- Blackmail and other criminal action
- Action against the law
- Effects on national security and defense
- Effects on international relations

Financial loss may be direct theft, down-time cost or restoration cost. Loss of trust in the information system is also a severe impact. The remaining items refer to illegal or criminal action and, in more extreme cases, to national and international aspects.

Implementation phase

7. Solution Requirements

- No action
- Workaround
- Locating relevant security advisories from other sources
- Patch availability and installation
- OS or application upgrade
- Software development
- Command execution
- Configuration modification
- Rebuilding of the kernel or other executables
- Configuration or installation of additional protection measures
- Enabling the logging of evidence data

The solution requirements focus on the implementation of the solution, such as patching and configuring the system according to the relevant security advisories. Additional protection measures may be required, such as the use of ACLs, an IDS, firewalls, cryptography, VPNs and antivirus applications. The collection of reliable evidence data, by means of system logging, may also be a requirement of the solution.

8. Solution Impact

- Cost in time and money for establishing the countermeasures
- Cost in time and money for regression testing of the updated system
- Availability of the system while applying the solution
- Consequences for the functionality of the organization
- Time margin to take action

Implementing a proposed solution is not costless: the cost in terms of money, labor time, system availability and organizational functionality is usually significant. The time margin to take action is also an issue; depending on the severity of the impact it may be immediate, short-term or long-term.

9. Conclusions Impact

- Severity of the security advisory
- Efficiency of the countermeasures
- Need for a future plan for the protection of the system
- Need for further communication with the vendor
- Indication that attacks will be repeated or increase

The conclusions that arise after the assessment and implementation phases of a security advisory are either informational or indicate the need for further action.

 

Table 1. The Scorecard metrics

 

The value that will be assigned to a metric heavily depends on the type and the usage of the target system, whether it is a server, a router/switch, a shared terminal, an online personal computer, an offline PC or another networked device. For example, a denial-of-service attack will have a completely different impact value on a mail server with hundreds of users than on a personal computer, although the two may have the same operating system, hardware and configuration.

As a result, the complete examination of certain security advisories under the perspectives of various systems would be drawn from a collection of quadruples of the form:

{advisory, system usage, metric, value}

Since, in practice, one deals with a single advisory or vulnerability and a single system at a time, we may simplify the above approach by fixing the first two dimensions. Consequently, the assignment of values to all or part of the abovementioned metrics will result in a single scorecard for a given vulnerability and a specific system.

The prior existence of a “Risk Analysis” study [12] would add value to the assessment process.

Another practical assumption is that all the metrics may be assigned discrete values. Discrete values, such as a Boolean (yes/no) or a Low/High scale, give a more readable overall result that leads the investigator to a quick conclusion. Examples:

Countermeasures cost:   High, Moderate, Low, None

Exploitation Impact:       High, Moderate, Low, None

Applicability:                 All, Some, One, None

Time of action:              Immediate, Short-term, Long-term, N/A

Loss of data:                 Fully, Partially, None
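As an illustration of how the {advisory, system usage, metric, value} quadruples and the discrete scales above could be represented, the following Python sketch uses hypothetical names; fixing one advisory and one system usage projects the collection onto a single scorecard.

    # Minimal sketch (hypothetical names): representing the
    # {advisory, system usage, metric, value} quadruples and a discrete
    # value scale. Fixing one advisory and one system usage reduces the
    # collection to a single scorecard.
    from dataclasses import dataclass
    from enum import Enum

    class Level(Enum):          # e.g. exploitation impact or countermeasure cost
        NONE = 0
        LOW = 1
        MODERATE = 2
        HIGH = 3

    @dataclass(frozen=True)
    class Entry:
        advisory: str           # e.g. "MS02-030"
        system_usage: str       # e.g. "Internet web server", "personal workstation"
        metric: str             # e.g. "Exploitation impact: code execution"
        value: Level

    def scorecard(entries, advisory, system_usage):
        """Project the quadruples onto a single {metric: value} scorecard."""
        return {e.metric: e.value
                for e in entries
                if e.advisory == advisory and e.system_usage == system_usage}

    entries = [
        Entry("MS02-030", "Internet web server", "Exploitation impact: code execution", Level.HIGH),
        Entry("MS02-030", "personal workstation", "Exploitation impact: code execution", Level.LOW),
    ]
    print(scorecard(entries, "MS02-030", "Internet web server"))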

The sequence in which the presented metrics are evaluated for the assessment of a vulnerability disclosure is also important. As a general guide, the following sequence should be followed, where the […] notation indicates an optional part of the sequence:

[Event Detection → ] Security Advisory retrieval → Target → Applicability → Preconditions → Organizational factors [ → Exploitation Impact → Community Impact [ → Solution Requirements → Solution Impact [ → Solution Implementation → Conclusions ] ] ]

The following list contains the conditions under which the distinct steps of the evaluation process are executed. Figure 2 depicts the whole procedure in a UML sequence diagram.

 

 

Figure 2. Action sequence for handling Security Advisories
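The conditional, early-exit character of the above sequence can be sketched in Python as follows; the concrete stopping conditions are illustrative assumptions, since the actual decisions depend on the scorecard values collected at each step.

    # Minimal sketch of the conditional assessment sequence described above.
    # The stopping conditions are illustrative assumptions, not the exact
    # rules; in practice they depend on the collected scorecard values.

    def assess(advisory):
        """Walk the assessment steps, stopping as soon as a step rules the advisory out."""
        steps = [
            ("Target",                 advisory.get("target_relevant", False)),
            ("Applicability",          advisory.get("applicable", False)),
            ("Preconditions",          advisory.get("preconditions_met", False)),
            ("Organizational factors", True),   # informational, never stops the walk
            ("Exploitation impact",    advisory.get("impact_significant", False)),
            ("Community impact",       advisory.get("community_affected", False)),
        ]
        for name, keep_going in steps:
            print(f"evaluating: {name}")
            if not keep_going:
                return f"stop at '{name}': no further action required"
        return "proceed to solution requirements, solution impact and implementation"

    print(assess({"target_relevant": True, "applicable": True, "preconditions_met": False}))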

 

The Goal-Question-Metric Approach

The management of systems security cannot be seen only from the perspective of some static characteristics, but must also be seen from the perspective of the emerging threats. Each and every new vulnerability or exposure may pose different objectives for the assessment and improvement of the security of a system. Various techniques offer the opportunity to implement a quantitative and qualitative analysis of the system’s security against a specific threat. The Goal-Question-Metric (GQM) [13] technique and the Balanced Scorecard [14] framework are recommended for supporting process improvements. Such quantitative approaches may be used both by response centers, as a practical guide for publishing advisories, and by end-users wishing to efficiently handle a security advisory against a specific system.

The construction of a practical guide for the assessment of vulnerability disclosures can be based on the Goal-Question-Metric technique, a common analysis tool in software engineering and quality management. According to this technique, an involved party sets an objective, which cannot be directly interpreted but is described by a series of questions. Each question in turn is answered by a series of metrics, which may be quantitative (taking absolute values) or qualitative (answered by a subjective judgment or a comparable value).

A goal is composed of four parts: an intention, an issue, an object and a perspective.
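A minimal sketch of this structure, with hypothetical names, could look as follows: a four-part goal is refined into questions, and each question is answered by metrics.

    # Minimal sketch (hypothetical names) of the GQM structure: a four-part
    # goal, refined into questions, each answered by metrics.
    from dataclasses import dataclass, field

    @dataclass
    class Metric:
        name: str
        value: str = "unknown"      # quantitative or qualitative answer

    @dataclass
    class Question:
        text: str
        metrics: list = field(default_factory=list)

    @dataclass
    class Goal:
        intention: str              # e.g. "Assess"
        issue: str                  # e.g. "the impact"
        object: str                 # e.g. "of vulnerability CAN-2002-0187"
        perspective: str            # e.g. "for my personal Windows web server"
        questions: list = field(default_factory=list)

    goal = Goal(
        intention="Assess", issue="the impact",
        object="of vulnerability CAN-2002-0187",
        perspective="for my personal Windows web server",
        questions=[Question("Is my system in danger?",
                            [Metric("SQL Server 2000 installed", "Yes"),
                             Metric("XML queries through HTTP enabled", "Yes")])],
    )
    print(goal.intention, goal.issue, goal.object, goal.perspective)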

A series of questions and relevant metrics has to be constructed according to the characteristics and the requirements of a specific system (object). For example, a question on denial-of-service impact would probably appear in the case of a core mail server, but not in the case of a home personal system. Examples of the examination of two security advisories under the perspectives of two different system types follow:

1st example: Goal: Assess (intention) the impact (issue) of the vulnerability (object) described by CAN-2002-0187 (“Unchecked buffer in SQLXML could lead to code execution”) against my personal Windows web server (perspective).

 

Question A (General factors): Is the vulnerability disclosure well described and documented?
  - Metric A1: Comprehensiveness = Good
  - Metric A2: Completeness = High

Question B (Applicability and Preconditions): Is my system in danger?
  - Metric B1: Microsoft SQL Server 2000 installed = Yes
  - Metric B2: Connectivity to the Internet = Partial
  - Metric B3: XML queries through HTTP enabled = Yes
  - Metric B4: Privileged user only exploitable = Yes

Question C (Target): Which system objects are in danger?
  - Metric C1: Logical targets = Data
  - Metric C2: Physical targets = None

Question D (Exploitation Impact): Is the risk high?
  - Metric D1: Code execution = High
  - Metric D2: Data modification and loss = High
  - Metric D4: Credentials stolen = No

Question E (Solution Impact): Do I have to take immediate action or leave it for the next working day?
  - Metric E1: Severity rating by the vendor = Moderate
  - Metric E2: Number of people using the server = <10

Question F (Conclusions Impact): Is protection against the vulnerability really needed?
  - Metric F1: Subjective reply based on metrics A to E = Not urgently

Table 2. Unchecked buffer in SQLXML could lead to code execution – GQM analysis

 

2nd example: Goal: Protect (intention) the border router (perspective) of the national tax information system from the vulnerability (issue) described in Cisco security advisory (object) with ID 44020 “Cisco IOS interface blocked by IPv4 packets” released on July 16th 2003.

 

Question A (Target): What is the target of the threat?
  - Metric A1: Parts of the system infrastructure affected = All interfaces of the border router

Question B (Applicability): Is the current configuration of the router tolerant against the threat?
  - Metric B1: Current version of IOS = Vulnerable
  - Metric B2: Protocol Independent Multicast (PIM) enabled = Yes, higher risk
  - Metric B3: Existence of an Ethernet interface that possibly affects neighbor systems through ARP = Yes

Question C (Preconditions): Are the additional preconditions for the exploitation of the vulnerability satisfied?
  - Metric C1: IPv4 enabled = Yes
  - Metric C2: No ACL blocks IP protocols 53, 55, 77 and 103 = Yes

Question D (Exploitation Impact): What is the maximum exploitation impact?
  - Metric D1: Denial of service = High
  - Metric D2: Affecting neighbor systems = Yes
  - Metric D3: Code execution = No

Question E (Community Impact): What will be the consequences of a possible attack for the citizens?
  - Metric E1: Financial loss = Likely
  - Metric E2: Loss of trust = High
  - Metric E3: Criminal action = No

Question F (Solution Requirements): What is the immediate action needed?
  - Metric F1: IOS upgrade = Yes
  - Metric F2: ACL update = Yes

Question G (Solution Impact): What are the effects of the solution implementation?
  - Metric G1: Cost in money = Zero
  - Metric G2: Down time = 10 minutes
  - Metric G3: Impact on other systems = None
  - Metric G4: Consequences for the functionality of public services, if applied at low-traffic time = Low
  - Metric G5: Ratio {cost of impact / cost of solution} = High

Table 3. Cisco IOS interface blocked by IPv4 packets - GQM analysis

 

Ideally, a security advisory published by a vendor should be accompanied by a Goal-Question-Metric list, such as the ones presented above, containing the metrics most relevant to the advisory and their estimated values. This will help an interested party identify the required data in a more organized fashion and thus come quickly to a conclusion on the risk faced. In any case, at least two GQM lists should be presented separately for the two extreme situations, where the affected system is either a personal system or a critical, widely used infrastructure.

 

Case study: Testing the repeatability of scoring reports

A key issue that adds value to our proposed solution is the repeatability of the scoring reports. The rating of a security advisory, based on our scorecard, is characterized as repeatable when a different observer can rely on this rating without undertaking a detailed examination of the security advisory. An efficient exploitation of our proposal that produces highly repeatable scoring reports will reduce the complexity of judging risk and the work factor that is necessary to examine a vulnerability disclosure and its related security advisories. High repeatability of scoring reports may also contribute to the automation and the scalability of intrusion detection procedures [15].

As a case study, we used historical data of vulnerability disclosures from the CVE dictionary to apply our metrics-based scorecard. The objective of the case study was to collect results on the effort required to quantify the scorecard metrics, on the accuracy of the drawn conclusion and on the repeatability of the results of the scoring procedure. We examined 200 CVE entries covering a 5-month period within 2002,[1] having also added some extra entries applicable to NetBSD systems. To collect the necessary data, we extracted the vulnerability descriptions from the CVE dictionary and located the related security advisories from product vendors. We studied each advisory up to the point where a clear conclusion regarding its applicability could be drawn, recording the metrics of our scorecard.[2] This procedure was repeated for three different systems: a) an Intel-based system with Microsoft Windows 2000, serving as a database and Internet web server for a medium-size university; b) a Cisco router (7500 series) serving as a university’s border router; and c) an ARM-based NetBSD network appliance periodically connected to the Internet.

In brief, we initially found 35 applicable entries for the Windows system; 18 of these 35 entries were found to be non-applicable at a later step, due to specific configuration parameters. For 7 of the 35 entries the assessment stopped at an intermediate step, mainly because their impact proved to be unimportant (e.g. a browser vulnerability affects workstations but not servers). For 10 entries the complete assessment and the resolution implementation steps were carried out. Similarly, it was necessary to perform all assessment and resolution implementation steps for approximately 25% of the vulnerabilities applicable to the Cisco and NetBSD systems.

Our experiments focused on our basic research question: the repeatability of a scoring report under multiple observers. Toward this objective, we repeated the scoring procedure for the Windows 2000 system (against the 35 applicable vulnerabilities) under four additional perspectives,[2] in order to examine whether a different observer may rely on an existing precompiled scoring report or has to read through the complete security advisory. The perspectives under which the study was performed were:

  1. a database and Internet web server running SQL Server 2000 and IIS 5.0,
  2. an Internet mail server running Microsoft Exchange,
  3. an Intranet Windows NT-based file server,
  4. a personal workstation as observed by the article’s first author, and
  5. a personal workstation as observed by the article’s second author.

Assuming that the examined systems have similar applicability factors, in terms of platforms and basic configuration, a high percentage of the scoring reports proved to be repeatable, as summarized in Table 4. Each value in Table 4 (percentage and absolute number) represents the number of scoring reports that were found to be repeatable between two different observing perspectives. The order of the observations does not affect the result and therefore the resulting matrix is symmetric (aij = aji). The diagonal of the table is left empty since it corresponds to comparisons between the same observations.

 

 

Observation Perspective    DB & Web    Mail        Intranet    WS Lekkas   WS Spinellis
DB & Web Server            -           97% (34)    94% (33)    57% (20)    57% (20)
Mail Server                97% (34)    -           91% (32)    54% (19)    54% (19)
Intranet File Server       94% (33)    91% (32)    -           57% (20)    57% (20)
Workstation Lekkas         57% (20)    54% (19)    57% (20)    -           94% (33)
Workstation Spinellis      57% (20)    54% (19)    57% (20)    94% (33)    -

Table 4. Repeatability of scoring reports under multiple observations
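The pairwise agreement values of Table 4 can be computed along the following lines; the sketch and the sample reports are illustrative, assuming that two scoring reports are considered repeatable when all their recorded metric values coincide.

    # Minimal sketch of how the pairwise repeatability in Table 4 could be
    # computed: two scoring reports agree on a vulnerability when every
    # recorded metric has the same value. The reports below are illustrative.

    def repeatable(report_a, report_b):
        """True if the two scoring reports (metric -> value dicts) coincide."""
        return report_a == report_b

    def repeatability(reports_a, reports_b):
        """Fraction of vulnerabilities scored identically by two observers."""
        shared = reports_a.keys() & reports_b.keys()
        agree = sum(1 for cve in shared if repeatable(reports_a[cve], reports_b[cve]))
        return agree / len(shared) if shared else 0.0

    web_server  = {"CVE-2002-0650": {"applicable": "Yes", "impact": "High"}}
    mail_server = {"CVE-2002-0650": {"applicable": "Yes", "impact": "High"}}
    print(f"{repeatability(web_server, mail_server):.0%}")   # -> 100%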

 

The results of the case study show nearly total repeatability of the scoring reports when observed under the perspective of Internet servers, high repeatability between Internet and Intranet servers, partial repeatability between servers and workstations, and high repeatability between different workstations. By further analyzing the results, we concluded that the different scorings recorded between Internet and Intranet servers are due to different exploitation preconditions of a vulnerability, and specifically to whether the vulnerability is remotely or locally (and usually by authenticated users) exploitable. On the other hand, the majority of the differences between servers and workstations derive from the need for local user intervention for the exploitation of a vulnerability (e.g. opening a document or interacting with a remote site), which is common on a workstation but improbable on a server. The number of non-repeatable reports between the two workstations was very small and was caused by applicability differences, something which in most cases was obvious from just reading the title of the vulnerability.

We can now argue that the targeted pre-processing of our scorecard under three main perspectives results in highly repeatable reports and may substantially increase the efficiency of the relevant security advisories. The perspectives were based on the different usages of each system: an Internet server; an appliance not directly connected to the Internet, serving a local community (Intranet server); and a workstation, partially connected to the Internet. It is also possible to identify different categorizations of system usage, depending on the type of the device. For example, a relevant categorization for a network device would be a) border router, b) access server and c) LAN equipment. A sample simplified scoring report for a specific vulnerability is shown in Table 5.

 

CVE-2002-0650. Description: The keep-alive mechanism for Microsoft SQL Server 2000 allows remote attackers to cause a denial of service (bandwidth consumption) via a "ping" style packet to the Resolution Service (UDP port 1434) with a spoofed IP address of another SQL Server system, which causes the two servers to exchange packets in an infinite loop.

Target
  - Internet server: System, Network, Internet
  - Intranet server: System, Network
  - Workstation: System, Network

Applicability
  - Internet server: SQL Server installed and enabled
  - Intranet server: SQL Server installed and enabled
  - Workstation: SQL Server installed and enabled

Preconditions
  - Internet server: Remotely exploitable
  - Intranet server: Spreading by neighbor systems
  - Workstation: Spreading by neighbor systems

Organizational
  - Internet server: Advisory existed long before the current massive exploits
  - Intranet server: Advisory existed long before the current massive exploits
  - Workstation: Automatic system update does not download the patch

Damage
  - Internet server: DoS; system and network disruption
  - Intranet server: System disruption and low risk of LAN disruption
  - Workstation: Low impact; no data disclosure or code execution

Community Impact
  - Internet server: High; financial loss
  - Intranet server: Important
  - Workstation: None

Solution Requirements
  - Internet server: Patch installation or ACL blocking port 1434
  - Intranet server: Patch installation
  - Workstation: Patch installation

Solution Impact
  - Internet server: Server needs restarting; remote connections disabled if ACL enabled
  - Intranet server: Server needs restarting
  - Workstation: None

Conclusions
  - Internet server: Critical situation; needs further observation
  - Intranet server: Important risk
  - Workstation: Not a critical situation

Table 5. Reducing work factor in the examination of security advisories: A simplified precompiled scorecard for the basic system usage perspectives
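A vendor-published, precompiled scorecard such as Table 5 could be represented and queried along the following lines; the data structure below is an illustrative sketch based on the table, not a published format.

    # Minimal sketch (illustrative data) of a precompiled scorecard such as
    # Table 5: the vendor publishes one entry per basic usage perspective,
    # and a user simply looks up the entry matching the local system.

    PRECOMPILED = {
        "CVE-2002-0650": {
            "Internet server": {"damage": "DoS; system and network disruption",
                                "solution": "patch or ACL blocking UDP port 1434",
                                "conclusion": "critical"},
            "Intranet server": {"damage": "system disruption, low LAN risk",
                                "solution": "patch installation",
                                "conclusion": "important"},
            "Workstation":     {"damage": "low impact",
                                "solution": "patch installation",
                                "conclusion": "not critical"},
        },
    }

    def lookup(cve_id, perspective):
        """Return the precompiled scorecard entry for a given usage perspective."""
        return PRECOMPILED.get(cve_id, {}).get(perspective, {})

    print(lookup("CVE-2002-0650", "Workstation")["conclusion"])   # -> not critical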

 

Conclusions

Security advisories regarding specific product vulnerabilities are published by product vendors and by independent organizations. They usually contain a significant amount of data, but rarely follow specific rules on the organization of the information. The overall assessment of an advisory by an interested party is a difficult task, since it depends on various parameters, such as its applicability, the preconditions and the impact of the vulnerability exploitation, probable community impacts and the requirements of the solution. Furthermore, one must take into account the type and the usage of the subject system in order to draw a clear conclusion.

A list of metrics (a scorecard) grouped into major categories helps the interested party record the information included in a security advisory in a more productive way, giving a clear picture of the risk faced by a system along with other information, such as the solution impact. The collection of the metrics’ values becomes a straightforward procedure, since we describe it as a conditional sequence of clearly defined steps. The Goal-Question-Metric technique gives a practical solution for collecting all the information relevant to a security advisory, by building up a list of questions and relevant metrics against a specific target system. The case study we performed against 200 CVE entries demonstrated that the publication of pre-processed scorecards for a small number of basic system usage perspectives can significantly increase the repeatability of the proposed scoring reports. Consequently, the effort spent by users and administrators in examining a security advisory can be significantly reduced. This paradigm can be used as a guide by vendors and CERTs toward a more stable and valuable security advisory publication scheme.

 

References

  1. CERT/CC, “Vulnerability Notes Database”, Computer Emergency Response Team Coordination Center, Carnegie Mellon University, 2003, available at http://www.kb.cert.org/vuls
  2. MITRE Corporation, “Common Vulnerabilities and Exposures (CVE) dictionary”, 2003, available at http://cve.mitre.org
  3. Arbaugh W., Fithen W., McHugh J., “Windows of Vulnerability: A Case Study Analysis”, IEEE Computer, Vol. 33, No. 12, pp. 52-59, 2000
  4. Mash S., “Risk Assessment for Dummies”, Computer Fraud & Security, Vol.2002, No.12, pp.11-13, 2002
  5. McGhie L., “Software Patch Management – The New Frontier”, Secure Business Quarterly, Vol.3, No.2, 2003
  6. Gritzalis S., “Information Systems Security in Distributed Environments”, Ph.D. Thesis, National and Kapodistrian University of Athens, May 1998
  7. Lindqvist U. and Jonsson E., “How to Systematically Classify Computer Security Intrusions”, In Proceedings of the 1997 IEEE Symposium on Security & Privacy, pp.154-163, May 4-7, 1997.
  8. Howard J., Longstaff T., “A Common Language for Computer Security Incidents”, Sandia National Laboratories, Report No. SAND98-8667, 1998
  9. Katsikas S., “Risk management of Information Systems”, In Kiountouzis E. (Ed.) Information Security: Technical, Legal and Social issues, EPY editions, Athens, 1995
  10. Venter H., Eloff J., “A taxonomy for information security technologies”, Computers & Security, Vol.22, No.4, pp.299-307, May 2003
  11. Laakso M., “Introducing constructive vulnerability disclosures”, In Proceedings of 13th FIRST Conference on Computer Security Incident Handling & Response, Toulouse, France, June 2001
  12. Baskerville R., “Risk Analysis: An interpretative feasibility tool in justifying information systems security”, European Journal of Information Systems, Vol.1, No.2, pp.121-130, 1991
  13. Basili V.R., Caldiera G., Rombach D., “The Goal Question Metric approach”, Encyclopedia of Software Engineering, Volume 2, pp 528-532, John Wiley & Sons Inc, 1994
  14. Buglione L., Abran A., “Balanced Scorecards and GQM: What are the differences? ”, In Proceedings of the 3rd European FESMA-AEMES Software Measurement Conference, October 18-20, 2000
  15. Bishop M., “Trends in academic research: Vulnerabilities analysis and intrusion detection”, Computers & Security, Vol. 21, No.7, 2002

 



[1] More recent vulnerabilities are considered as ‘candidate entries’ since they were not confirmed by the CVE Editorial Board at the time of the case study.

[2] The detailed scoring of the examined advisories may be found at http://www.syros.aegean.gr/users/lekkas/cve200_scoring.htm