Securing the Future: Building Trust in AI with SBOMs and Model Signing 🛡️✨

The AI revolution is here, and with it comes an exciting wave of innovation. But as we harness the power of artificial intelligence, a critical question arises: how do we ensure it’s secure and trustworthy? That’s precisely the challenge the tech world is tackling head-on, and at a recent conference, the conversation around AI governance and security took center stage. Get ready, because we’re diving deep into how we can build a more secure AI future, one component at a time! 🚀

The Evolving Landscape of AI Governance 🌐

The focus on AI security is no longer a niche concern; it’s a top priority. Initiatives like the Coalition for Secure AI (CoSAI), a powerhouse collaboration featuring giants like Google, IBM, and Microsoft, alongside leading model providers such as Cohere and OpenAI, are actively shaping the future of AI governance. Their work is divided into four crucial areas:

  • AI Security Risk Governance: Understanding and managing the inherent risks of AI.
  • Defending Against AI Security Threats: Building robust defenses against malicious attacks.
  • Supply Chain Security: Ensuring the integrity of every component that goes into our AI.
  • Securing Artificial Intelligence Systems: Implementing comprehensive security measures across the entire AI lifecycle.

A Three-Layered Approach to Trust 🏗️

To tackle this complex challenge, a practical, three-layered model for AI governance has been proposed, drawing inspiration from the proven shared responsibility model in public cloud environments. This framework distributes accountability across different stakeholders:

  • User Layer: This is where the “why” of AI governance is defined. Business drivers, use cases, and overarching objectives are set by leadership, including board members and management.
  • Core Layer: This is the operational heart. Technical measures and AI infrastructure security are implemented by measurement and governance teams to achieve the defined objectives.
  • Foundational Layer: This layer acknowledges the external building blocks of AI, including crucial technologies like Kubernetes and AI models that aren’t developed in-house. Their security is paramount for the entire ecosystem.

This collaborative model emphasizes that securing AI is a shared responsibility, fostering a proactive approach to trust.

The Pillars of AI Supply Chain Security: AI SBOMs and Model Signing 🔑

A significant challenge in AI security is the complex supply chain. Just like with traditional software, understanding what goes into our AI models is vital. Two game-changing solutions are emerging to address this: AI Bill of Materials (AI SBOM) and Model Signing.

AI SBOMs: Knowing What’s Inside 📦

Think of an AI SBOM as a detailed ingredient list for your AI model. It provides a comprehensive record of all the components, data, and dependencies used throughout the AI model’s lifecycle – from training to deployment. This end-to-end tracking is essential for security and transparency.
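To make the “ingredient list” idea concrete, here’s a minimal sketch, in Python, of the kind of record an AI SBOM captures: every component pinned by a hash, from training data to serving dependencies. The field names are loosely inspired by the SPDX 3 AI and Dataset profiles but are illustrative assumptions, not the exact specification, and the file paths are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, used to pin each component."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Illustrative AI "bill of materials": every ingredient of the model,
# from training data to runtime dependencies, recorded with a hash.
# Field names are inspired by (not identical to) the SPDX 3 AI profile.
ai_sbom = {
    "name": "pet-behavior-detective",          # hypothetical model name
    "version": "1.0.0",
    "model": {
        "file": "model/weights.safetensors",   # hypothetical path
        "sha256": sha256_of("model/weights.safetensors"),
        "framework": "pytorch",
    },
    "training_data": [
        {
            "file": "data/pet_behaviors.csv",  # hypothetical path
            "sha256": sha256_of("data/pet_behaviors.csv"),
            "license": "CC-BY-4.0",
        }
    ],
    "dependencies": ["torch==2.3.0", "numpy==1.26.4"],
    "supplier": "example-ml-team",
}

Path("ai_sbom.json").write_text(json.dumps(ai_sbom, indent=2))
```

Because each entry carries a hash, any downstream consumer can recompute the digests and detect whether a dataset or weight file has silently changed since the SBOM was produced.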

Enter SPDX (System Package Data Exchange), an ISO standard that’s proving indispensable. It acts as a universal language for capturing and sharing data about software components. The latest advancements in SPDX 3.1 are particularly exciting:

  • Supply Chain Profile: This profile offers a multi-dimensional view of interconnected supply chains. It recognizes that AI models, software, and data are deeply intertwined and emphasizes the power of linked data for comprehensive analysis.
  • Threat Control Profile: This innovative profile integrates threat models and knowledge graphs. This allows for automated threat modeling and provides unprecedented visibility into the entire AI lifecycle.

SPDX’s ability to dynamically generate SBOMs, including crucial hardware SBOMs, and integrate them with other types of SBOMs (like those for data and software), enables universal data analysis. This paves the way for creating common knowledge graphs between organizations, facilitating dynamic data capture and sharing across manufacturers, suppliers, and service providers.

Model Signing: Proving Authenticity and Integrity ✍️

While AI SBOMs tell us what is in our AI, Model Signing answers the critical questions about who created it and whether it’s been tampered with:

  • Who actually trained this model?
  • Has it undergone any unauthorized fine-tuning?
  • Can we be sure it hasn’t been compromised?
  • Can these claims be proven?

Model signing provides a verifiable way to confirm a model’s origin and ensure its integrity. When combined with AI SBOMs, it guarantees transparency, security, and authenticity throughout the entire AI model lifecycle. The ultimate goal? A future where all claims about metadata, ML models, and datasets are tamper-proof, tied to a verifiable identity, and automatically generated by common ML developer tools.
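As a rough illustration of the mechanics only (the actual Sigstore model-signing flow uses keyless signing tied to a verifiable identity rather than a locally held key pair), the sketch below hashes every file in a model directory into a canonical manifest and signs that manifest; any later change to the weights or config invalidates the signature. The directory path is hypothetical.

```python
import hashlib
import json
from pathlib import Path

from cryptography.hazmat.primitives.asymmetric import ed25519

def model_manifest(model_dir: str) -> bytes:
    """Hash every file in the model directory into a canonical manifest."""
    digests = {
        str(p.relative_to(model_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(model_dir).rglob("*"))
        if p.is_file()
    }
    return json.dumps(digests, sort_keys=True).encode()

# Sign the manifest. In the Sigstore flow this step is bound to an identity
# (e.g. an OIDC account) instead of a raw key pair held by the signer.
private_key = ed25519.Ed25519PrivateKey.generate()
manifest = model_manifest("model")          # hypothetical model directory
signature = private_key.sign(manifest)

# Verification: recompute the manifest from the files on disk and check the
# signature. Any tampering with the weights changes the manifest and fails here.
public_key = private_key.public_key()
public_key.verify(signature, model_manifest("model"))  # raises if tampered
print("model integrity verified")
```

The key idea is that the signature covers a digest of every model artifact, so verification answers both questions at once: who signed it, and whether anything has changed since.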

A Real-World Defense: The “Pet Behavior Detective” Demo 🐶🐱

To illustrate the power of these concepts, a compelling demonstration showcased a practical defense against AI attacks. The scenario involved a “pet behavior detective” AI trained on pet behaviors. An “evil mastermind” cat, Mingo, attempted to sabotage the system by poisoning the training data, aiming to falsely blame the dogs for mischief! 😈

Here’s how the demonstration unfolded and how AI SBOMs and model signing saved the day:

  1. The Attack: Mingo successfully altered the training dataset, causing the AI to incorrectly attribute actions to the dogs.
  2. Defense with Data SBOM: The system leveraged an SPDX 3 SBOM for the dataset. This SBOM, tied to a hash of the dataset, acted as an attestation of its integrity. A policy engine then checked for this SBOM during the build process, blocking the poisoned data before it could be used.
  3. The Next Attack: Undeterred, Mingo then targeted the model training process itself.
  4. Defense with Model Signing: The system employed Open Source Model Signing (part of the Sigstore project) to sign the model. This process generates a verifiable signature based on a hash of the model’s components. A policy then checked for this signature during the inference image build, blocking the tampered model.

This powerful two-pronged defense – securing the data with AI SBOMs and the model with signing – effectively restored “justice” and prevented the AI from distributing false blame. The demo highlighted that while the initial policies were simple, the framework allows for much more sophisticated checks, such as locking weights and hyperparameters based on AI SBOM data and verifying signer identities.
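A hypothetical, minimal version of those two policy gates might look like the sketch below: the build proceeds only if the dataset’s hash still matches the attestation recorded in its SBOM and the model’s signature verifies. The file layout, SBOM fields, and verify_fn hook are assumptions for illustration; the demo’s real policy engine and SPDX 3 tooling are the open-source projects mentioned below.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def data_sbom_gate(dataset_path: str, sbom_path: str) -> bool:
    """Gate 1: block the build if the dataset no longer matches its SBOM attestation."""
    sbom = json.loads(Path(sbom_path).read_text())
    expected = sbom["training_data"][0]["sha256"]  # illustrative SBOM layout
    return sha256_of(dataset_path) == expected

def model_signature_gate(model_dir: str, signature_path: str, verify_fn) -> bool:
    """Gate 2: block the inference-image build if the model signature fails to verify.

    verify_fn stands in for real verification, e.g. Sigstore-based model signing.
    """
    try:
        verify_fn(model_dir, Path(signature_path).read_bytes())
        return True
    except Exception:
        return False

def enforce(verify_fn) -> None:
    """Run both gates before allowing the inference image to be built."""
    if not data_sbom_gate("data/pet_behaviors.csv", "ai_sbom.json"):
        raise SystemExit("policy violation: training data does not match its SBOM")
    if not model_signature_gate("model", "model.sig", verify_fn):
        raise SystemExit("policy violation: model signature did not verify")
    print("policy checks passed: building inference image")
```

In the demo, richer policies could be layered on the same hooks, for example pinning specific weights and hyperparameters from the AI SBOM or requiring a particular signer identity.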

The Path Forward: Open Source and Continuous Improvement 🛠️

The exciting news is that the demonstration is fully open source: the policy engine, the model transparency (model signing) implementation, and the SPDX 3 tooling are all readily available. This fosters community collaboration and will undoubtedly accelerate the development of even more robust AI security solutions.

As SPDX 3.1 gets closer to its release, we can anticipate even more advanced capabilities for universal data analysis and supply chain security. The future of AI is being built on a foundation of trust, and with tools like AI SBOMs and model signing, we’re well on our way to a more secure and reliable AI-powered world! ✨
