Protecting AI models and the AI development environment may soon be the CSO's greatest challenge, given the complexity of the underlying big data platforms and the mathematics required to understand many of the esoteric attacks against modern AI algorithms. In this session, we describe how to build robust defenses throughout the AI model development lifecycle: hardening your environment against the new family of attacks aimed at AI, and strengthening existing application security measures. Improving overall security hygiene across the organization is a key first step, but ultimately protecting AI means building and maintaining a dedicated test environment, using custom software built on open source frameworks and tightly integrated into the AI DevOps flow.
- Understand the emerging cyber threat against AI models
- Identify typical points of vulnerability throughout the AI model development lifecycle
- Understand the role of infrastructure security for protecting AI
- Learn why applying application security principles is important to AI model defense
- Demonstrate the need for a dedicated environment for testing and hardening AI models
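To make the threat concrete, the kind of attack such a test environment would probe for can be sketched in a few lines. The example below is illustrative only and not taken from the session: it applies an FGSM-style evasion perturbation to a toy logistic-regression "model" with hypothetical weights, showing how a small, gradient-guided change to the input flips the model's prediction.

```python
import numpy as np

# Illustrative sketch (hypothetical model and data, not the session's own code):
# an FGSM-style evasion attack on a toy logistic-regression classifier.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)          # model's probability for class 1
    grad_x = (p - y_true) * w       # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy model and an input it classifies correctly as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.8)
print(sigmoid(w @ x + b) > 0.5)       # original prediction is class 1: True
print(sigmoid(w @ x_adv + b) > 0.5)   # perturbed prediction flips: False
```

A hardening pipeline would run attacks like this (and far stronger ones) against candidate models automatically, failing the build when robustness drops below a threshold.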