EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far
Sep 19, 2022 • 26 min • Season 1, Ep. 84
Episode description
Guest:
- Alex Polyakov, CEO of Adversa.ai
Topics:
- You did research analyzing 2,000 papers on AI attacks published over the previous decade. What are the main insights?
- How do you approach discovering the relevant threat models for various AI systems and scenarios?
- Which threats are real today vs in a few years?
- What are the common attack vectors? What do you see in the field of supply chain attacks on AI, covering both software and data supply chains?
- How real are the reported cyber-physical attacks on computer vision, and what are possible examples of exploitation? Do they pose a real danger to people?
- What are the main differences between protecting AI vs protecting traditional enterprise applications?
- Who should be responsible for securing AI? What about for building trustworthy AI?
- Given that the machinery of AI is often opaque, how does one go about discovering vulnerabilities? Is there responsible disclosure for AI vulnerabilities, such as those in open-source models and public APIs?
- What should companies do first when embarking on an AI security program? Who should have such a program?
Resources:
- “EP52 Securing AI with DeepMind CISO” (ep52)
- “EP68 How We Attack AI? Learn More at Our RSA Panel!” (ep68)
- Adversarial AI attacks work on humans (!)
- “Maverick* Research: Your Smart Machine Has Been Conned! Now What?” (2015)
- “The Road to Secure and Trusted AI” by Adversa AI
- “Towards Trusted AI Week 37 – What are the security principles of AI and ML?”
- Adversa AI blog
- AIAAIC Repository
- Machine Learning Security Evasion Competition at MLSec