Security Think Tank: Why “secure coding” is neither

There’s a trap that can arise in the way humans understand and process language: sometimes we take the meaning of a word or phrase for granted. By this, I mean we use a term to mean one thing, only for those hearing us to understand it in a completely different way.

This is counterproductive when it happens in day-to-day communication, but it can be dangerous in risk-impacting disciplines such as cyber security, assurance, and governance, where misunderstanding can itself create risk.

I bring this up because we often hear about ways to ensure “secure coding” in organisations that author and maintain software as part of their business, either for internal or external use. It’s important because, frankly, most businesses fall into this category nowadays. While it’s natural to discuss the challenges of software risk this way, I believe the term “secure coding” itself presupposes a context that makes the intended end state harder to achieve – at least when taken literally.

And I don’t mean this just in a semantic sense: understanding why that statement is true has tangible, practical value. It speaks to the root cause of why many organisations struggle with application and software risk, and it highlights practical steps they can take to improve. With that in mind, let’s unpack what actual software risk reduction goals are, and how best to achieve them as we fulfil our requirements to develop and publish software safely and resiliently.

Software development security vs. risk reduction

The first thing to unpack is the intended end state of what we mean by “secure coding.” In my opinion, there are a few different, related goals usually intended by this term. By “security” in this context, folks typically mean two things:

  1. Employing application architecture and design patterns that foster risk reduction principles (e.g., confidentiality, integrity and availability)
  2. Creating software that is resilient to attack (e.g., via avenues like vulnerabilities and misconfigurations) – a brief sketch of what this looks like in code follows below

Both of these things are, of course, incredibly important. Spend some time talking to application security practitioners and they’ll, rightly, highlight Barry Boehm’s famous work on the economics of finding and fixing vulnerabilities early in the development process. They’ll, just as rightly, explain the value of tools like application threat modelling that can be brought to bear to understand what and where security design issues exist in software. The trap is that these things, important as they are, are not the entirety of what we might be interested in when it comes to reducing risk in software. For example, other things we might be interested in at least equally could include:
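
To make the second of those goals concrete, here is the kind of defect this work aims to catch early. It is a minimal, hypothetical sketch in Python using the standard library’s sqlite3 module; the users table and helper names are invented for illustration, not drawn from any particular codebase.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is spliced directly into the SQL,
    # so an input like "x' OR '1'='1" rewrites the meaning of the query.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Resilient: the ? placeholder passes the input as data, never as SQL,
    # so the structure of the query cannot be altered by the caller.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()
```

Caught at design or code review, the flaw in the first function costs minutes to fix; caught in production, Boehm’s cost curve applies in full.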

  • Maturity – ensuring processes are mature, so that they are resilient to employee attrition and their outcomes are consistent
  • Transparency – ensuring transparency in the supply chain of the components and libraries that our products in turn rely upon, and being able to provide that transparency to customers (sketched below)
  • Compliance – ensuring that we comply with the various (commercial and open source) licences we use in developing our software
  • Design simplicity – ensuring the design lends itself to being easily understood and evaluated

And so on. In fact, these things are only the tip of the iceberg of considerations that can and do impact software risk as a practical matter. You could just as easily include fitness for purpose, design rigour, supportability, testing coverage, code quality, time to market, and numerous other factors that impact the risks associated with how we design, develop, test, deploy, maintain, support, and ultimately decommission our software.
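
To give a flavour of what the transparency and compliance items can mean in practice, here is a minimal sketch that records a component inventory as a software bill of materials (SBOM) in the CycloneDX JSON format. The component list is hypothetical; in a real programme this inventory would be generated from the build system or lock files by tooling (Syft and the CycloneDX project’s own generators are common choices) rather than written by hand.

```python
import json

# Hypothetical third-party inventory; real programmes derive this from the
# build system or a lock file rather than maintaining it by hand.
components = [
    {"name": "requests", "version": "2.31.0", "license": "Apache-2.0"},
    {"name": "cryptography", "version": "42.0.5", "license": "Apache-2.0 OR BSD-3-Clause"},
]

# A minimal CycloneDX 1.5 document: enough to state what we ship, at what
# version, and under which licence terms; the raw material for both
# supply chain transparency and licence compliance checks.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": c["name"],
            "version": c["version"],
            "licenses": [{"expression": c["license"]}],
        }
        for c in components
    ],
}

print(json.dumps(sbom, indent=2))
```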

Through this lens, the question we should be asking isn’t about “security” at all – but instead risk reduction (of which security is a subset, albeit a large and important one). Tying software considerations just to security narrows the set of stakeholders, narrows responsibility, and changes the discussion. By contrast, keeping the focus on risk means everyone – even those far from the software development universe – has a vital role to play.

The software lifecycle

The second thing I’d call your attention to is the “coding” element of the phrase. Yes, coding is important. But just like “security,” it’s only a piece – though a large one – of the lifecycle involved in developing software. Consider how software is normally produced and how many different steps are involved. While individual software development lifecycles (SDLCs) might describe them differently, at a high level you might have steps – in the abstract anyway – similar to the following:

  • Identification of need
  • Ideation/Inception
  • Requirements gathering
  • Design
  • Development
  • Testing
  • Deployment
  • Support
  • Maintenance
  • Decommissioning

This is a lot of steps. And you’ll notice that each of them could itself be further broken down into myriad individual sub-steps. For example, a step like “testing” can encompass (depending on the SDLC in use, context, etc.): unit testing, functional testing, regression testing, performance testing, security testing, and any number of other individual items. How many of these involve just “coding” vs. how many don’t, but are nevertheless critical to ensuring a robust product at the end?
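
To illustrate how a stage other than coding carries real risk weight, the sketch below shows a security regression test at the unit level: a check that an input-validation helper keeps rejecting dangerous values as the code evolves. The validate_filename helper and its rules are invented for the example; the point is that it is the testing stage, not the coding stage, that catches the regression.

```python
import re
import unittest

def validate_filename(name: str) -> bool:
    """Hypothetical helper: accept only simple filenames, rejecting path
    separators and traversal sequences outright."""
    return bool(re.fullmatch(r"[A-Za-z0-9._-]+", name)) and ".." not in name

class FilenameValidationRegressionTest(unittest.TestCase):
    # A security regression test: if a later change loosens the validator,
    # these cases fail and the risk is caught before deployment.
    def test_rejects_path_traversal(self):
        for bad in ("../etc/passwd", "..\\boot.ini", "a/../../b", ""):
            self.assertFalse(validate_filename(bad), bad)

    def test_accepts_plain_names(self):
        for good in ("report.pdf", "data_2024-01.csv"):
            self.assertTrue(validate_filename(good), good)

if __name__ == "__main__":
    unittest.main()
```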

In other words, there are countless ways for stakeholders involved at any stage of this process to either introduce or mitigate risks, depending on the processes they follow, their training, their awareness, and numerous other factors. This means that a risk-aware programme designed to reduce, manage, and mitigate software risk needs to account for all of them and, wherever possible, bolster those actions that favour risk reduction outcomes. The point being: while coding is arguably the most “visible” step along the software development and release process (at least internally), it’s not the only place where we should focus.

As you’d expect, then, your application (or software, depending on preferred parlance) risk management efforts should include the whole lifecycle. This by extension means two things: 1) that you understand and account for the whole lifecycle holistically, and 2) that you extend your planning to include areas outside development that nevertheless hold a stake. Include and deputise testing personnel, business analysts, project and product managers, support teams, sales, marketing, HR, and legal – bring them under the umbrella of caring about the security of what you build.

As I said at the outset, this isn’t just about semantics (though, granted, I’ve framed it that way to illustrate the point). Instead, it’s about understanding that risk is impacted by the entirety of the processes surrounding software development, and that risk extends well beyond what we traditionally tend to focus on when looking at things like vulnerabilities.

Why is it not just semantics? Because it speaks to something bigger that’s incredibly important. In Ends and Means, Aldous Huxley famously wrote: “The end cannot justify the means for the simple and obvious reason that the means employed determine the nature of the ends produced.” His point was that how we do something determines, in large degree, what the end state will be. Extending that to software development, I’d argue that disorganised, immature, and “slapdash” development processes will inexorably produce software that is more shoddily designed and more poorly implemented than if a more robust and disciplined process had been followed. This in turn means we must target goals beyond security and embrace processes beyond writing code.
