Cloud Security Alliance Expands Work on Agent AI Governance
The Cloud Security Alliance (CSA) has made significant strides in advancing the governance and security of agentic AI systems. At the CSA Agentic AI Security Summit on April 29, the organization unveiled a series of CSAI Foundation milestones focused on securing the critical agentic control layer, including a new catastrophic risk initiative, CVE Numbering Authority authorization, and the acquisition of two agentic AI specifications.
Enhancing the Agentic Control Plane
As the CSA works towards its mission of “Securing the Agentic Control Plane” by 2026, these milestones mark a pivotal expansion of the CSAI Foundation’s efforts. The key announcements include the launch of the STAR for AI Catastrophic Risk Annex, which is supported by Coefficient Giving—an organization dedicated to long-term AI security work.
According to Jim Reavis, CEO and co-founder of CSA, “The global economy is grappling with two exponential developments at once: frontier models that leapfrog one another from month to month and the viral bottom-up adoption of agents within the enterprise.” Reavis emphasized that these announcements arm companies, auditors, and regulators with the technical specifications and security frameworks needed to embrace agentic AI without losing control.
Catastrophic Risk Annex Planned
The STAR for AI Catastrophic Risk Annex is designed to expand the AI Controls Matrix (AICM) and the STAR for AI assurance program. It aims to address scenarios involving loss of human control, uncontrolled system behavior, and other extensive, irreversible societal impacts. The annex will focus on controls testable in production environments: identifying existing AICM controls relevant to catastrophic risk, introducing new controls where gaps exist, and defining evidence requirements and testing criteria suitable for independent assessment.
The rollout of this initiative is planned in four phases, spanning June 2026 to December 2027. Phase 1 will translate catastrophic risk scenarios into verifiable control language, while Phase 2 will develop validation protocols. Phase 3 will introduce the annex into real-world environments through pilot evaluations, assessor training, and reference implementations. Finally, Phase 4 will produce public STAR for AI registry entries, benchmarking, and a report on the state of catastrophic AI risk controls. The annex will be guided by the NIST AI RMF, the EU AI Act, and ISO/IEC 42001.
AICM and STAR for AI Context
The annex builds upon CSA’s AI Controls Matrix, a vendor-independent framework for cloud-based AI systems. The AICM includes 243 control objectives across 18 security domains and aligns with standards such as ISO/IEC 42001, ISO/IEC 27001, NIST AI RMF 1.0, and BSI AIC4. The AICM package comprises the matrix itself; mappings to NIST AI 600-1, ISO/IEC 42001, and the EU AI Act; implementation and audit guidelines; an AI-CAIQ questionnaire; and a STAR for AI Level 1 submission guide.