Guidelines for Secure AI Systems Issued by CISA and NCSC


This post originally appeared at FOSSlife.

New guidelines from the Cybersecurity and Infrastructure Security Agency (CISA) and the National Cyber Security Centre (NCSC) provide recommendations to help stakeholders make informed decisions and help providers “build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties.”

The “Guidelines for Secure AI System Development” document covers four main areas:

  • Secure design — This section covers understanding risks and threat modeling, as well as trade-offs to consider in system and model design.
  • Secure development — This applies to the development stage of the AI system development life cycle, including supply chain security (see the sketch after this list), documentation, and asset and technical debt management.
  • Secure deployment — This section includes protecting infrastructure and models from compromise, threat, or loss, developing incident management processes, and responsible release.
  • Secure operation and maintenance — This section offers guidelines on logging and monitoring, update management, and information sharing.
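One concrete practice that falls under supply chain security is verifying that a model artifact has not been tampered with before it is loaded. The sketch below is an illustrative example only, not code from the guidelines; the file name and the pinned digest are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest; in practice this would come from a signed
# manifest or lock file recorded when the model artifact was produced.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to proceed if the model file's SHA-256 digest does not match the pinned value."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Hash the file in 1 MB chunks so large model files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    actual = digest.hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(
            f"Model artifact {path} failed integrity check: "
            f"expected {expected_sha256}, got {actual}"
        )

# Example usage (hypothetical file name):
# verify_model_artifact(Path("model.safetensors"), EXPECTED_SHA256)
```

Checking artifacts against a known-good digest before deployment is one simple way to act on the guidelines' supply chain recommendations; signed manifests or attestation tooling would extend the same idea.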

In general, the guidelines follow a “secure by default” approach and align closely with existing recommendations from CISA, the NCSC, and other agencies.

“The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority,” CISA says.

Read more at NCSC.

