When your organization encourages its users to leverage artificial intelligence, the security and compliance teams find their scope of responsibility expanding. 📈
In addition to traditional security concerns, the organization suddenly needs to track the security issues, potential controversies, and compliance risks tied to each AI tool and model its teams adopt.
One emerging approach that helps organizations get their arms around the challenge is an AI Bill of Materials, or AI BOM. 📄 It’s similar in concept to the Software Bill of Materials (SBOM) that organizations use to track software components in the supply chain. 🚚
AI BOM is a nod to the reality that for many enterprises, the prompt box where users enter their AI queries actually lives on someone else’s server, or passes at least part of the request via API to a third party. ☁️
The AI BOM serves two purposes:
First, it gives security and compliance teams a quick roadmap to what external tools are in use around the enterprise. As a simple example, maybe the marketing team is using ChatGPT, the dev team is working with Anthropic Claude, and sales is using Microsoft Copilot. 🧑‍💻 Meanwhile, the IT department might be using a specialized backup tool from a third party that leverages ChatGPT as a fourth party on the back end.
Armed with that knowledge, if one of those major platforms hits the news in a bad way, it’s easier and faster to pinpoint which parts of your enterprise might be exposed. A side benefit of this part of an AI BOM is that it provides a structured process to address the related problem of “Shadow AI,” 👻 in which groups in the organization adopt AI tools without the IT department’s knowledge.
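The inventory half of an AI BOM can be as simple as a structured list mapping tools to teams and upstream providers. Here is a minimal sketch in Python; the tool names, team names, and the `AcmeBackup`/`AcmeCorp` fourth-party example are all illustrative, not a standard schema.

```python
# Illustrative AI BOM inventory: each entry records a tool, the team
# using it, and every upstream model provider it depends on.
AI_BOM = [
    {"tool": "ChatGPT", "team": "marketing", "providers": ["OpenAI"]},
    {"tool": "Claude", "team": "dev", "providers": ["Anthropic"]},
    {"tool": "Microsoft Copilot", "team": "sales", "providers": ["Microsoft", "OpenAI"]},
    # Fourth-party dependency: a hypothetical backup tool that calls
    # ChatGPT on the back end, so it inherits OpenAI as a provider.
    {"tool": "AcmeBackup", "team": "IT", "providers": ["AcmeCorp", "OpenAI"]},
]

def teams_exposed_to(provider: str) -> list[str]:
    """Return the teams whose tools depend on the given provider."""
    return sorted({entry["team"] for entry in AI_BOM if provider in entry["providers"]})

print(teams_exposed_to("OpenAI"))  # → ['IT', 'marketing', 'sales']
```

When a provider hits the news, a single query like this answers the “who is exposed?” question in seconds instead of a scramble of emails.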
The second purpose is to document what your enterprise has done to fine-tune, augment, or integrate a third-party generative AI tool. That part of the AI BOM should include information about the proprietary datasets your team used for fine-tuning, RAG pipelines, and the organizational cadence of things like bias audits. Should a problem arise, this part of the AI BOM process will help you more quickly isolate whether the issue is with your supplier or with the modifications and integrations that your organization has performed. 🛠️
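The customization half of the AI BOM might look like the sketch below: a record of the fine-tuning datasets, RAG data sources, and audit cadence for an in-house layer built on a vendor model. All field names and values here are hypothetical, shown only to suggest what such an entry could capture.

```python
# Hypothetical AI BOM entry for an in-house customization of a
# third-party model: what we fine-tuned with, what the RAG pipeline
# reads from, and how often we audit for bias.
customization = {
    "base_model": "vendor-managed LLM",
    "fine_tuning_datasets": ["support-tickets-2023", "product-docs-v4"],
    "rag_pipeline": {
        "vector_store": "internal",
        "sources": ["wiki-export", "contracts-redacted"],
    },
    "bias_audit_cadence_days": 90,  # organizational cadence for audits
    "owner": "ml-platform-team",
}

def audit_overdue(days_since_last_audit: int, entry: dict) -> bool:
    """Flag when the documented audit cadence has lapsed."""
    return days_since_last_audit > entry["bias_audit_cadence_days"]

print(audit_overdue(120, customization))  # → True
```

Keeping this record current is what lets you quickly tell a supplier-side problem apart from one introduced by your own modifications.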
One factor currently limiting the value of an AI BOM is that many of the large language models with the most market share are proprietary. 🚫 So if ChatGPT, Gemini, and Claude don’t list the datasets, dependencies, and algorithms they use, there’s only so much you can know about the underlying biases, potential personally identifiable information, or copyright status of the output. 🔐
Implementations of generative artificial intelligence are sprawling, complicated, and fraught with risk. ⚠️ An AI BOM won’t solve many of those problems on its own, but it’s a good start toward a more comprehensive understanding of your organization’s AI usage and exposure. ✅