A risk assessment is essential in forming a clearer picture of how external and internal threats could affect your organisation, how severe and how likely those threats are, and how well your organisation is already prepared.
There are many possible processes for conducting a risk assessment, but a good starting point for directors is NIST’s guidance in SP 800-30. The Institute identifies nine stages of the information risk assessment process, starting with a review of the existing or proposed system and ending with a commitment to monitor the system on an ongoing basis.
By defining the scope of the risk management process, directors and IT personnel can understand the boundaries of the project to form an accurate picture of all the assets and resources that constitute the system.
Characterising the system should highlight key personnel involved in the project, define their roles and authorisation powers, as well as mapping out the hardware, software and network equipment that make up the system’s landscape.
By the end of the process, directors should have a clear understanding of (at least) the system’s properties, its assets and who uses its resources. They should also know the criticality of the data, why it is important to the company and how sensitive it would be were it lost or stolen.
The boardroom will also need to know about existing controls that have been put in place, such as whether or not there are physical security controls to access key equipment, whether there are contingency plans for data back-up and whether or not there are continuity plans in case of electrical or communications failure.
Also included in the review are management and process controls that might instil and encourage a culture of treating information risk as a business risk. Is there, for example, a well-documented policy of whether staff must encrypt data leaving the office, or whether they can use USB sticks and, if so, have those controls been explained? Are they even enforced?
It might seem like a simple security review – and in many ways it is – but zooming in on the risk element focuses the mind, and getting everyone in the organisation involved in the process has educational benefits, as well as providing parameters for the risk assessment.
Management should employ questionnaires, interviews and document review to gather this information, as well as using automated software such as network mapping tools.
As the NIST puts it: ‘A threat is the potential for a particular threat-source to successfully exercise a particular vulnerability. A vulnerability is a weakness that can be accidentally triggered or intentionally exploited’. By the end of the threat evaluation, directors should have a clear understanding of the potential causes of damage to the systems being studied.
Threat sources fall into three broad categories – natural, human or environmental. The assessment should look at the potential motivation for hackers to deliberately break into the system, as well as consider factors such as what would happen if a water main burst near the server room, or whether the storage facility could continue to function if the air conditioning breaks down on a hot August afternoon.
While it is possible to mitigate some threats, such as power failure, humans are more erratic and can damage system security either intentionally, through a spiteful attack, or accidentally, by being too keen and e-mailing work to a non-secure home e-mail account. According to the NIST: ‘Motivation and the resources for carrying out an attack make humans potentially dangerous threat-sources’.
However, the full threat picture needs a wider remit, including weather data, known vulnerabilities, documentation of previous security incidents and reports from security houses such as NIST, CERT, the Federal Computer Incident Response Center (FedCIRC) and SANS Institute.
Vulnerability is the yin to the yang of threats, and once threats are perceived it is clearer where vulnerabilities might lie. Matching threats to vulnerabilities is like piecing together parts of the security jigsaw. The goal of this step is to develop a list of system vulnerabilities (flaws or weaknesses) that could be exploited by the potential threat-sources.
The task should highlight the confluence between vulnerability and threat, so, for example, former employees’ swipe cards not being handed in are a vulnerability. The threat source is a disgruntled former employee, and the threat action might involve a former employee accessing the site and causing damage to equipment.
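The vulnerability/threat-source/threat-action triple described above lends itself to a simple worksheet structure. The sketch below is purely illustrative (the class and field names are our own, not NIST’s), using the swipe-card example from the text:

```python
from dataclasses import dataclass


@dataclass
class ThreatVulnerabilityPair:
    """One row of an illustrative threat/vulnerability worksheet."""
    vulnerability: str   # the weakness that could be exploited
    threat_source: str   # who or what might exploit it
    threat_action: str   # what exploitation would look like


# The swipe-card example from the text:
pairs = [
    ThreatVulnerabilityPair(
        vulnerability="Former employees' swipe cards not handed in",
        threat_source="Disgruntled former employee",
        threat_action="Accessing the site and damaging equipment",
    ),
]

for p in pairs:
    print(f"{p.threat_source}: {p.vulnerability} -> {p.threat_action}")
```

Keeping the list in a structured form like this makes it easy to extend during the vulnerability-harvesting stage and to feed into later likelihood and impact ratings.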
Vendor websites, staff interviews, audit reports and vulnerability repositories such as the NIST I-CAT vulnerability database are a good place to harvest vulnerability information before applying it to your system’s environment. With a vulnerability list in place for all aspects of the organisation, an automated vulnerability and penetration testing programme should highlight the strengths and weaknesses present in the technical systems.
The results of this testing, much of which should be ongoing anyway, will be key to how much money the IT department will need to boost security. ‘During this step, the risk assessment personnel determine whether the security requirements stipulated for the IT system and collected during system characterisation are being met by existing or planned security controls’, the NIST observes. ‘Typically, the system security requirements can be presented in table form, with each requirement accompanied by an explanation of how the system’s design or implementation does or does not satisfy that security control requirement.’
Controlling the technical side of the system is only half the picture: management must also assess the level of control placed on staff, and what data and systems they can access. Risk can be substantially minimised by good authentication methods preventing access to highly sensitive documents by all but directors, while good physical and environmental security will deter intruders. Strong control should also include software that detects when staff have tried to breach procedures or violated policy. Audit trails, intrusion detection methods and checksums also offer a deterrent to casual miscreants, prevent accidental disclosure and allow post-attack detective work.
With a clear picture of existing security measures, weighed against the level and motivation of threats, it should be possible to work out the likelihood of a vulnerability being exploited. This is normally rated from low (where there is little threat-source motivation and good controls) to high (where the threat source is more determined and poor controls are in place).
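The low-to-high rating described above can be sketched as a simple rule: strong controls pull the rating down, a determined threat-source pulls it up. This is a minimal illustration of the scale, not a prescribed NIST algorithm:

```python
def likelihood(threat_motivation: str, controls: str) -> str:
    """Rate the likelihood of a vulnerability being exploited.

    Simplified sketch of the low/medium/high scale: a determined
    threat-source facing poor controls rates 'high'; little
    motivation against good controls rates 'low'; everything
    in between rates 'medium'.
    """
    if threat_motivation == "high" and controls == "poor":
        return "high"
    if threat_motivation == "low" and controls == "good":
        return "low"
    return "medium"


print(likelihood("low", "good"))   # little motivation, good controls -> low
print(likelihood("high", "poor"))  # determined source, poor controls -> high
```

A real assessment would of course weigh more inputs (capability, opportunity, control effectiveness), but the same three-band output is what feeds the risk-level matrix.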
Using the data collected in earlier stages, management needs to assess the damage that could be caused to systems, should a threat and vulnerability come to fruition. A mission impact analysis prioritises the impact levels of a compromise of an organisation’s information assets based on a qualitative or quantitative assessment of the sensitivity and criticality of those assets. Systems and information owners need to assess the impact of three main security areas:
• Loss of integrity
Integrity is lost if there are unauthorised changes to the system or its data, which could result in continued hacking attacks that could affect availability and confidentiality.
• Loss of availability
Systems going down mean that staff are unable to work effectively, and for many businesses a systems failure means a temporary cessation of trading.
• Loss of confidentiality
Unintentional disclosure could lead to loss of public confidence, embarrassment, or legal action against the organisation, especially if private data has been released in contravention of data protection regulations.
The impacts are difficult to assess tangibly, and directors should demand both qualitative and quantitative analysis to help gauge the financial and reputational costs of given risks; they should classify them on a scale from ‘high’ magnitude, where there is major financial or legal fall-out, through to ‘low’, where there is minimal loss or damage.
Risk determination combines the likelihood of a threat with the impact of that threat, and the results are shown in a risk-level matrix (see Figure 1). Priority must be given to areas where ‘high’ likelihood coincides with ‘high’ impact, creating the ‘perfect storm’. IT staff should be able to put figures into these boxes for a more quantitative approach, but the matrix will in any case highlight trouble hotspots.
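The matrix can be reproduced numerically. The weightings below follow the illustrative example in SP 800-30 (likelihood weighted 1.0/0.5/0.1, impact 100/50/10), though an organisation may well choose its own scales:

```python
# Illustrative weightings from NIST SP 800-30's example risk-level matrix.
LIKELIHOOD = {"high": 1.0, "medium": 0.5, "low": 0.1}
IMPACT = {"high": 100, "medium": 50, "low": 10}


def risk_level(likelihood: str, impact: str) -> str:
    """Combine a likelihood rating and an impact rating into a risk level."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score > 50:
        return "high"
    if score > 10:
        return "medium"
    return "low"


# The 'perfect storm': high likelihood coinciding with high impact.
print(risk_level("high", "high"))   # 1.0 * 100 = 100 -> high
print(risk_level("medium", "low"))  # 0.5 * 10  = 5   -> low
```

Filling every cell of the three-by-three grid this way yields the figures IT staff can put into the matrix boxes for the more quantitative approach mentioned above.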
Having assessed risks, management needs clear recommendations on ways the organisation can limit or mitigate those risks through investing in hardware, software and management controls. The resulting recommendations should consider the costs and effectiveness of potential controls, as well as their effect on the organisation’s mission and legislative position. Whether these controls will be implemented will be decided after a cost-benefit analysis, outlined in Chapter 6.
The end result of the process should be an official organisation risk assessment report that helps senior management, the mission owners, make decisions on policy, procedural, budget, and system operational and management changes. Unlike an audit or investigation report, which looks for wrongdoing, a risk assessment report should not be accusatory, but should establish a systematic and analytical approach to assessing risk.
Documentation is required at every stage of the process of risk assessment and mitigation, building on the documents already in-house and prepared during the categorisation stage. Such documentation also helps meet the requirements of Internal Control: Revised Guidance for Directors on the Combined Code (Oct 2005) (the ‘Turnbull Guidance’).