Security Think Tank: A brief history of (secure) coding

With technology progressing at an ever-increasing pace, developers are challenged more than ever to keep code secure and mitigate constantly evolving cyber security threats. But examples gathered over more than 20 years of working in the field show there have always been hurdles to overcome.

The IBM mainframe

IBM mainframe coding meant writing COBOL/PL/I programs scheduled to run as overnight batch processes reading and updating large, complex hierarchical databases. Resource Access Control Facility (RACF) managed user access to critical resources, with a security model built on the principle of least privilege. Using an account with the right level of access was essential to prevent job runs stopping or failing because the user account could not read or write to a branch of the database.

Client server

Next up was front-end development using the object-oriented language Smalltalk-80 to raise purchase orders stored in an IBM DB2 database. With no built-in security, Smalltalk promoted encapsulation – objects encapsulated their internal state, and data was protected by controlling its flow between objects. A protocol assigned objects to security levels; information could be passed up to an object at a more secure level, but not down to one at a less secure level.
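
That ‘no flow down’ rule can be modelled in a few lines. The sketch below is illustrative Python rather than Smalltalk, and the class, object names and levels are invented; it simply shows state kept behind a message-style interface, with attempts to pass information to a less secure level rejected.

```python
class SecureObject:
    """Toy model of level-based information flow between objects."""

    def __init__(self, name, level):
        self.name = name
        self.level = level      # higher number = more secure
        self._state = {}        # encapsulated internal state

    def send(self, receiver, key, value):
        # Information may flow to an equal or more secure level only.
        if receiver.level < self.level:
            raise PermissionError(
                f"flow from {self.name} (level {self.level}) down to "
                f"{receiver.name} (level {receiver.level}) is not allowed"
            )
        receiver._state[key] = value


report = SecureObject("report", level=1)
ledger = SecureObject("ledger", level=2)
report.send(ledger, "total", 42)       # allowed: flows up to a more secure level
try:
    ledger.send(report, "total", 42)   # flows down: rejected
except PermissionError as err:
    print(err)
```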

SAP Dynpro

This was followed by SAP Dynpro development using ABAP. Transaction codes and authorisation profiles were the order of the day, with developers expected to add the appropriate checks in the code to test whether a user account had the correct authorisation profile to access the application, read/write to the database, and so on. Getting it wrong saw the end user confronted with ‘Not Authorised’ prompts – or, conversely, given too much access, so that their activity was never challenged.
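
The shape of those checks is easy to model. The Python below is a hypothetical stand-in for ABAP’s AUTHORITY-CHECK pattern – the user, profile contents and outcome are invented, although S_TCODE is the real SAP authorisation object that governs starting a transaction.

```python
# Hypothetical model of a transaction-start authorisation check,
# loosely following the shape of ABAP's AUTHORITY-CHECK.
USER_PROFILES = {
    "jsmith": {"S_TCODE": {"ME21N"}},   # may start the purchase order transaction
}

def authority_check(user, auth_object, value):
    """Return True if the user's profile grants 'value' for 'auth_object'."""
    return value in USER_PROFILES.get(user, {}).get(auth_object, set())

def run_transaction(user, tcode):
    if not authority_check(user, "S_TCODE", tcode):
        # A failed check is what the end user sees as 'Not Authorised'
        raise PermissionError("Not Authorised")
    print(f"{user} is running transaction {tcode}")

run_transaction("jsmith", "ME21N")   # succeeds: profile grants the tcode
```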

2017 was a watershed moment for companies using SAP, following a high-profile court case that raised the concept of indirect access to SAP systems. A single service account (often with the all-powerful SAP_ALL profile) was frequently used for all remote access/remote function calls (RFCs), and developers knew the passwords to these accounts. Enterprises swiftly moved to individual accounts for each application.

Web development

The SAP Enterprise Portal (Web Dynpro Java/JavaServer Pages) and SAP Web Application Server (Web Dynpro ABAP) opened the doors to browser-based development in the SAP environment. Developing and deploying Java code required a locally installed development platform (Eclipse), and developers needed to keep the codebase secure, with code repositories stored on network drives with restricted access.

What happens behind the scenes of a web application is complex, and many users will have experienced HTTP error messages. Effective troubleshooting required awareness of the architecture at the network level – load balancers, DNS, port mapping, reverse proxy servers, domain navigation, certificates, and so on. HTTP was often used as a default because it made an application easier to develop, but security was compromised.

Developer best practice should ensure that an HTTPS port is always used for web development, with externally signed certificates and an industry-standard level of encryption.
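
As a minimal sketch, Python’s standard library can serve content over HTTPS rather than HTTP. The certificate and key file names below are placeholders – in production they should come from an externally signed certificate, not a self-signed one.

```python
import http.server
import ssl

# Placeholders: in production, use an externally signed certificate.
CERT_FILE, KEY_FILE = "cert.pem", "key.pem"

httpd = http.server.HTTPServer(
    ("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler
)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)   # negotiates modern TLS
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()   # all traffic is now encrypted in transit
```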

Single Sign-On (SSO) and Multi-Factor Authentication (MFA)

Basic authentication (user ID and password) passed the account and password details as visible parameters in URLs to simulate SSO, making it vulnerable to exploitation; the adoption of logon tokens and certificates to enable SSO to applications was therefore a game changer.

Kerberos tokens containing a user’s identity can be used for SSO to an on-premises SAP system, which then issues an SAP Logon Ticket – passed around as a cookie – for logon to multiple other SAP systems. However, because cookies are vulnerable to exploitation, SAP Assertion Tickets are preferred: they are restricted to the target system only, and are passed as an HTTP header rather than a cookie.
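
The difference in transport is small but significant. A hedged sketch using the Python requests library follows – the host and ticket value are placeholders, and MYSAPSSO2 (the field name SAP conventionally uses for tickets) is shown purely for illustration.

```python
import requests

ticket = "<ticket-value>"   # placeholder: issued by the SSO infrastructure

# Logon Ticket style: the credential travels as a cookie, which any
# system in the same domain may end up receiving.
requests.get("https://sap.example.com/app", cookies={"MYSAPSSO2": ticket})

# Assertion Ticket style: the credential travels as an HTTP header on
# this request only, and is valid for the target system alone.
requests.get("https://sap.example.com/app", headers={"MYSAPSSO2": ticket})
```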

SAML 2.0 has emerged as an open standard for web-based authentication and authorisation. The identity provider only issues a logon token once the user’s identity has been confirmed, and this SAML 2.0 token is forwarded to the service provider hosting the application. Use of SSL/TLS, encryption, restricted token validity and similar measures mitigates against exploitation.
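
The trust model can be sketched with a signed, short-lived token. Real SAML 2.0 assertions are signed XML exchanged between identity provider and service provider; the Python below is only a toy analogue with invented names, using an HMAC where SAML uses XML signatures.

```python
import base64, hashlib, hmac, json, time

SHARED_KEY = b"idp-sp-shared-secret"   # placeholder trust anchor

def issue_token(user, audience, validity=60):
    """Identity provider: sign claims once the user is authenticated."""
    claims = {"sub": user, "aud": audience, "exp": time.time() + validity}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token, audience):
    """Service provider: check signature, audience and validity window."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["aud"] != audience or claims["exp"] < time.time():
        raise PermissionError("wrong audience or token expired")
    return claims["sub"]

token = issue_token("jsmith", "purchasing-app")
print(verify_token(token, "purchasing-app"))   # -> jsmith
```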

Mobile and API development

API development means securely transferring packages of data between systems, either via system-to-system RFCs or web services, and often through additional middleware or service mediation layers.

It requires an understanding of the full journey of an API call, often with data packets transformed from the source system’s format into one the destination system can receive, as well as of identity token exchanges, such as a SAML 2.0 token being swapped for an OAuth 2.0 one.

For REST development (REST being one of the most common web service styles), OAuth 2.0 uses scopes to allow an application to access resources on other systems via web APIs. A scope limits the access an application has on a user’s behalf, so good design of scopes as part of the authorisation model is essential to ensure the right level of access.
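
Scope enforcement at the API boundary might look like the sketch below; the scope names, token and resource are invented for illustration, not taken from any real API.

```python
# Invented example: access tokens mapped to their granted OAuth 2.0 scopes.
TOKEN_SCOPES = {"token-abc123": {"po.read"}}

def require_scope(token, scope):
    if scope not in TOKEN_SCOPES.get(token, set()):
        raise PermissionError(f"token lacks scope '{scope}'")

def get_purchase_order(token, po_id):
    require_scope(token, "po.read")    # a read-only scope is sufficient
    return {"id": po_id, "status": "open"}

def update_purchase_order(token, po_id, changes):
    require_scope(token, "po.write")   # narrower tokens are rejected here
    # ... apply changes ...

print(get_purchase_order("token-abc123", "4500001234"))        # allowed
try:
    update_purchase_order("token-abc123", "4500001234", {})    # rejected
except PermissionError as err:
    print(err)
```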

Browser compliance and cross-device

The latest browsers have ever-increasing security safeguards to mitigate against cyber security threats. Companies relying on browser emulation modes (emulating legacy releases such as IE 5) find that web applications that have run for years stop working in modern browsers such as Chrome or Chromium-based Edge, with unplanned development work required to secure the code.

Mobile development adds another element: offline data stored on-device, which enables an application to continue where it left off if a network connection is lost. ‘Offline OData’ and similar techniques require developers to ensure only the minimum amount of data is stored on-device (to keep the process as secure as possible), and to manage the ‘sync point’ so that once a connection is restored, data can be uploaded and synced securely back to its source.
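
In the same spirit, the essentials are a minimal on-device change queue and a sync point that only advances once uploads succeed. This is a generic sketch with invented names, not the Offline OData API itself.

```python
import time

class OfflineStore:
    """Toy on-device store: queue minimal deltas and track a sync point."""

    def __init__(self):
        self.pending = []        # minimal change records, not whole datasets
        self.sync_point = 0.0    # time of the last successful upload

    def record_change(self, entity, delta):
        self.pending.append({"entity": entity, "delta": delta,
                             "ts": time.time()})

    def sync(self, upload):
        # 'upload' is a callable that pushes one change to the source system.
        for change in list(self.pending):
            upload(change)                 # may raise if the network drops again
            self.pending.remove(change)    # drop only after a confirmed upload
        self.sync_point = time.time()      # advance the sync point on success

store = OfflineStore()
store.record_change("PurchaseOrder", {"id": "4500001234", "qty": 10})
store.sync(upload=print)   # stand-in for the real upload call
```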

Continuous integration/delivery/deployment

Enterprises strive for ‘agile’ project delivery, which enables faster development lifecycles without compromising on quality or security. Automation of the DevOps lifecycle – continuous integration, delivery and deployment pipelines covering code peer reviews, builds, deployment, testing, approvals and development-to-production migration – is triggered the moment the developer checks code into a code repository.

Code scan tools such as Onapsis and SonarQube can be integrated as part of a DevSecOps pipeline to scan code for secure coding best practices, flagging vulnerabilities across diverse code bases from ABAP to XML.

However, there are pitfalls. Code scans are often optimised for the latest code version, and a line of legacy code can be flagged as a risk. To avoid large numbers of false positives, thresholds need to be configured to ignore, or merely warn on, code that is secure but written using an obsolete technique. The alerts will help to evolve better coding standards across development teams to minimise DevSecOps breaches.
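
A hedged sketch of that threshold logic follows. The rule names and severities are invented, and real tools such as SonarQube expose this through quality-gate configuration rather than hand-written code; the point is only that obsolete-but-secure patterns are downgraded to warnings instead of failing the pipeline.

```python
# Invented findings from a code scan; real tools emit richer records.
FINDINGS = [
    {"rule": "hardcoded-password", "severity": "critical", "line": 42},
    {"rule": "obsolete-statement", "severity": "major", "line": 7},
]

# Rules that are secure but obsolete: warn rather than block the pipeline.
WARN_ONLY_RULES = {"obsolete-statement"}

def quality_gate(findings):
    blocking = [f for f in findings
                if f["severity"] == "critical"
                and f["rule"] not in WARN_ONLY_RULES]
    warnings = [f for f in findings if f["rule"] in WARN_ONLY_RULES]
    return blocking, warnings

blocking, warnings = quality_gate(FINDINGS)
print(len(blocking), "blocking issue(s),", len(warnings), "warning(s)")
```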

More of the same

Keeping coding secure has always presented challenges. Most of these have been overcome with a combination of technology and human expertise – a model that should be continued.

Source is ComputerWeekly.com
