Incident response planning is vulnerable to legacy thinking

Source is ComputerWeekly.com

Part of the challenge is demonstrated in the question itself. Legacy thinking may lead us to approach incident response in a perimeter-focused way: preparing for an attack does not take into account the 80%+ of security incidents that originate with our own insiders rather than with external attacks. That means we are missing the biggest possible threat to our information assets: documents, files, data, systems and platforms.

Perhaps the best way to start is to stop thinking about a security incident response plan and start thinking about an information incident response plan: one that covers any inappropriate or unauthorised access to, disclosure of, modification of, or destruction of an information asset. With a security mindset we tend to focus on loss, because loss is what makes the headlines every time we see an ICO report, but if we think about information as a whole we can see there is much more to it than loss. Consider a recent incident such as the one affecting the Police Service of Northern Ireland (PSNI): that was not a cyber attack, it was an inappropriate disclosure. Look how serious that was, and what a perfect example it is of where an information incident plan could help.

Input for writing these plans must come from across the business: data protection, physical security, IT, infosec and risk management. Compare this with a disaster recovery plan left on a network share that may not be accessible in the event of a disaster; indeed, the building it was housed in might not be accessible either. We need much wider thinking if our plans are to have any positive impact.

Let’s look now at the assertion of an ‘inevitable attack’. Attacks themselves are not inevitable, but human error is. The only sense in which attack is inevitable is that we are all under attack all of the time; whether those attacks land and gain purchase generally comes down to human error. That means the way we talk about cyber attacks and ransomware attacks is misleading and takes the onus away from where it needs to be: with people. Most of these ‘attacks’ would go nowhere without the (usually non-malicious) human facilitation they receive, yet we focus almost exclusively on the ‘attack’. Now we are ready to write an incident plan…

  • Use risk managers – the way we respond to an incident has to be in proportion to the level of risk it carries. One incident plan cannot rule them all: in some cases you may need a full incident response team, including gold, silver and bronze command; in other cases, the incident may be dealt with as part of business as usual.
  • Do we have detection and reporting mechanisms in place? We cannot deal with an incident we know nothing about.
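The proportionality point above can be sketched as a simple triage function. This is purely an illustrative assumption: the likelihood-times-impact scoring, the thresholds and the tier names are hypothetical, not a prescribed model, though the gold/silver/bronze terminology is the standard UK command structure mentioned in the text.

```python
# Illustrative sketch only: routing an information incident to a response
# tier in proportion to assessed risk. Scoring and thresholds are
# hypothetical assumptions, not a standard.
from dataclasses import dataclass


@dataclass
class Incident:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)


def response_tier(incident: Incident) -> str:
    """Map a simple likelihood x impact score to a response tier."""
    score = incident.likelihood * incident.impact
    if score >= 15:
        return "full response team (gold/silver/bronze command)"
    if score >= 8:
        return "dedicated incident coordinator"
    return "business as usual"


print(response_tier(Incident("misdirected internal email", 3, 2)))
# prints "business as usual"
```

In practice the scoring would come from the organisation's own risk management framework; the point is simply that the plan routes different risk levels to different levels of response rather than treating every incident identically.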

Containment comes next.

  • We need a planned, coordinated, and well-rehearsed set of processes for containing an incident. These will vary depending on why the incident has occurred. Was it insider error for instance? Or perhaps it happened in our supply chain or ecosystem or was it indeed a focused and targeted attack on our organisation?
  • Who gets involved, and at what stage of containment? Senior management, technical staff, physical security staff? When do communications teams need to be alerted and involved, and will they need to communicate internally, externally or both? Don’t just write the people involved into the plan: talk to them, tell them about it and get their buy-in. This group will need to be able to engage and work swiftly and effectively together. Bring them together beforehand, so that the first time they encounter an incident is not the first time they have all met and they know what is expected of them.
  • Make sure that information gathered during the management of an incident is itself properly protected. You do not want to inadvertently cause a second data breach.
  • Make sure your incident response plan works with other policies. Start with an information incident policy, supported by a plan, and integrate that with existing incident management processes and plans. This means an information incident will be managed in the same way as any other incident, keeping the approach consistent. If there has been nefarious or criminal activity, your forensic readiness plan, for instance, needs to work well with this plan. That will maintain the integrity of what you are uncovering without compromising it, and is a good example of why your information incident response plan needs to link effectively with other plans.
  • Ongoing reporting needs to be managed. Do senior stakeholders need to be kept in the loop, or other organisations and regulators etc? This should be included in the plan.
  • Information incidents rarely work to convenient office hours so there needs to be contingency planning wrapped around this plan too.

We now enter the recovery phase.

  • This is where we see how important it is to link incident planning with related disciplines, such as resilience planning and business continuity planning.
  • The plan should include guidance on communicating with external agencies, such as the NCSC and police digital services, that may be able to help with key activities. For example, are we able to check the dark web to see whether our information has ended up there? This should be a regular activity, not a one-off.

Now starts the hard work of root cause analysis, to find out what actually went wrong. If your investigation gets you as far as ‘human error’, you need to keep digging, because that isn’t the answer; the answer is whatever caused the human error. If you find it was a lack of education or training, for instance, then that would be a genuine root cause.

What was the total cost of ownership of the incident? This isn’t just about what you had to pay your PR team; it is about the work that wasn’t done in your organisation while you went through the process of discovering, managing and recovering from an information incident.

One of the most important things not to forget is a good near-miss reporting mechanism: when something could have gone wrong but didn’t, on this occasion. What can it teach you about vulnerabilities and opportunities that could be exploited, or triggered accidentally? A no-blame culture is vital here, so that people feel empowered to report things that either don’t seem right or that could have gone horribly wrong. That is a potent defence and activates people in just the right way.
