The world as we know it comprises layers upon layers of carefully connected technology, found in everything from international banks and local community-owned shops to wireless doorbells and smart kitchen appliances. Every piece of technology between you and these core aspects of our lives has one thing in common: the code it runs. It may seem like a small detail, but when something goes wrong, it has the potential to leave billions of personal, and sometimes sensitive, records vulnerable to malicious actors.
This raises several questions: how do we know the services we use most are protected? What do we mean by 'secure coding practices'? And what happens when those practices are not followed?
What are secure coding practices?
Secure coding practices are guidelines set out for developers (programmers) in corporate entities, intended to govern and enforce a methodology to be followed when implementing features. These guidelines range from simple suggestions, such as ensuring documentation is created when expanding the existing code base, to detailed requirements on the structure and layout of the code itself.
Developers will often conform their code bases to a specific design paradigm for the purposes of future-proofing, increasing modularity and reducing the likelihood of mistakes occurring due to overall code complexity.
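As a purely illustrative sketch of the kind of guideline described above, consider a rule that untrusted input must be validated in one small, documented helper rather than through ad-hoc checks scattered across the code base. The helper name and validation rules here are hypothetical, not drawn from any specific standard:

```rust
// Hypothetical secure-coding guideline in practice: centralise input
// validation so reviewers can audit one function instead of many call sites.
fn is_valid_username(name: &str) -> bool {
    // Accept only 3-16 ASCII alphanumeric characters; reject everything else,
    // including whitespace, punctuation and control characters.
    (3..=16).contains(&name.len()) && name.chars().all(|c| c.is_ascii_alphanumeric())
}

fn main() {
    assert!(is_valid_username("alice42"));
    assert!(!is_valid_username("ab"));            // too short
    assert!(!is_valid_username("bad name!"));     // disallowed characters
    println!("all validation checks passed");
}
```

Concentrating checks like this reduces the code complexity the article mentions: a mistake in validation logic need only be found and fixed in one place.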
How do we know our most trusted services are secure?
While companies within the public sector are regulated by government authorities, a different approach is taken for private and limited companies. In order to remain compliant with the latest standards, they must provide proof that their key infrastructure has undergone a form of in-depth security assurance.
If these companies are not compliant, they risk fines and penalties. Moreover, insurance providers may no longer be willing to renew contracts. In short, reducing risk and potential impact to the business, both financially and reputationally, will be at the forefront of many businesses’ minds.
What happens when something goes wrong?
Some of the vulnerabilities that have caused the biggest impact can be traced back to oversights in secure coding practices. Even the most robust guidelines can still allow bugs and mistakes into the final code, although the frequency of issues typically lessens as the guidelines mature.
Some of the most problematic weaknesses in our most popular software could have been caught with strict quality control and secure coding guidelines. Take EternalBlue, which targeted a vulnerability in Microsoft's Windows operating system and its core components to allow execution of malicious code. This was ultimately a coding issue, and it was exploited in the WannaCry ransomware attack, which reportedly infected over 230,000 Windows PCs worldwide in a single day.
The past decade has seen growing recognition of the importance of secure coding practices, and governments and corporate entities around the world have taken steps to promote and incentivise secure software development (e.g. bug bounties). In the United States, for example, the Department of Homeland Security's Software Assurance Marketplace (SWAMP) programme provides a suite of tools and resources to help developers identify and address security vulnerabilities in their software. Meanwhile, the European Union's (EU's) General Data Protection Regulation (GDPR) mandates that software developers implement appropriate security measures to protect personal data.
Despite these efforts, data breaches and cyber attacks continue to occur at an alarming rate. In 2020 alone, over 37 billion records were exposed in data breaches worldwide, according to Risk Based Security's 2020 Year End Data Breach QuickView Report. Furthermore, many of the reported breaches affected some of our most important public services, notably healthcare. This highlights the need for continued vigilance and improvement across many areas of security, including secure coding practices.
Combating the core of the problem
Companies with large development teams are gradually making the transition to safer standards and safer programming languages, such as Rust. This partially combats the problem by enforcing a secure-by-design paradigm in which any operation deemed unsafe must be explicitly declared, decreasing the likelihood of insecure operations slipping through by oversight. Secure-by-design paradigms are certainly a leap forward in development practice, but for a solution to truly be considered safe and trustworthy, detail-oriented security assessments will always be necessary.
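The explicit declaration the paragraph above refers to is Rust's `unsafe` keyword. A minimal sketch (the function names are illustrative, but `get` and `get_unchecked` are real standard-library slice methods):

```rust
// Safe Rust bounds-checks array access by default; bypassing the check
// requires an explicit `unsafe` block, which flags the site for review.
fn safe_lookup(data: &[u8], index: usize) -> Option<u8> {
    // `get` returns None instead of reading out of bounds.
    data.get(index).copied()
}

fn unchecked_lookup(data: &[u8], index: usize) -> u8 {
    // The caller must guarantee `index < data.len()`; the compiler will not
    // accept this dereference outside an `unsafe` block.
    unsafe { *data.get_unchecked(index) }
}

fn main() {
    let data = [10u8, 20, 30];
    assert_eq!(safe_lookup(&data, 1), Some(20));
    assert_eq!(safe_lookup(&data, 9), None); // out of range: no crash, no undefined behaviour
    assert_eq!(unchecked_lookup(&data, 2), 30);
}
```

Because every potentially memory-unsafe operation must sit inside such a block, auditors can search a code base for `unsafe` and concentrate review effort there, rather than treating every line as equally risky.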
As secure coding practices mature, we are seeing a reduction in both the number and the severity of vulnerabilities in modern software. However, this is counterbalanced by the growing number of digitally connected devices, which continually expands the footprint of code exposed to attack.
Modern security will always be a race between developers and malicious actors. Secure development practices and well-thought-out designs can help build a solid base on which to add new features, and ultimately these practices must continue to grow and improve, just as the skills of potential adversaries certainly will.
Joseph Foote is a cyber security expert at PA Consulting