Parth Patel

Solutions Architect with 15+ years of cybersecurity, DevOps, software development, and automation experience. He is an active member of the open source community, contributing to and path-finding on various projects. He is a maintainer of the OpenSSF project GUAC (Graph for Understanding Artifact Composition), in-toto-golang, and the in-toto attestations. Outside of work, he loves to travel and find new restaurants to explore!


Sessions

11-11, 11:30 (30 min)
Papers, Please - Scrutinizing AI model creation
Parth Patel, Mihai Maruseac

When an AI model misbehaves (e.g., it tells you to put glue on pizza), you must investigate how this happened. Sometimes these are accidents caused by the training data, but these incidents can also be due to nefarious activities – we’ve seen ML malware deployed in 2024. At the end of the day, AI is still software, so security needs to be established around its creation. The same transparency and accountability must be enforced as with other parts of the software supply chain. Utilizing SLSA (Supply Chain Levels for Software Artifacts) and GUAC (Graph for Understanding Artifact Composition), we can determine the provenance of each dataset and the composition of each model. In this talk, we dive into the anatomy of AI model attacks: identifying bad models, determining the root cause of badness, and finding the blast radius of affected models. Once the data is collected, we can create an SBOM and distribute it with the AI model provenance to meet compliance and transparency requirements.
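To make the idea concrete, here is a minimal sketch (not the speakers' actual tooling) of an in-toto Statement carrying a SLSA v1 provenance predicate for a trained model. All names, digests, and URIs are hypothetical placeholders; recording each training dataset as a resolved dependency is what lets a tool like GUAC later trace which models consumed a compromised dataset.

```python
import json

# Hypothetical in-toto Statement with SLSA provenance for a model artifact.
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {
            # The model file being attested; digest is a placeholder.
            "name": "model.safetensors",
            "digest": {"sha256": "0" * 64},
        }
    ],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            # Hypothetical buildType identifying an ML training pipeline.
            "buildType": "https://example.com/ml-training/v1",
            "resolvedDependencies": [
                # Each training dataset is recorded here so the blast
                # radius of a bad dataset can be computed later.
                {
                    "uri": "https://example.com/datasets/corpus-v3",
                    "digest": {"sha256": "1" * 64},
                }
            ],
        },
        "runDetails": {
            # Hypothetical identity of the training infrastructure.
            "builder": {"id": "https://example.com/trainer"},
        },
    },
}

print(json.dumps(statement, indent=2))
```

In practice such a statement would be wrapped in a signed envelope before distribution, so consumers can verify who produced the model and from which datasets.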

Theater