2024 Workshop
Fiesta 7/8
9 am - 5 pm, Saturday, September 21
(Not Included in World Pass) AI SecureOps: Navigating Attacks, Defenses & Development in GenAI & LLM Security | 2-Day Workshop
About

CTF-style GenAI security training, focusing on real-world LLM attacks/use-cases, securing public & private AI services. Learn to build custom models for specific security challenges, Pen-testing GenAI apps and implementing guardrails for security & monitoring of enterprise AI services.
Learn the intricacies of GenAI and LLM security through this training program that blends CTF styled practical Pen-test exercises, designed for security professionals. This course provides hands-on experience in addressing real-world LLM threats and constructing defense mechanisms, encompassing threat identification, neutralization, and the deployment of LLM agents to tackle enterprise security challenges. By the end of this training, you will be able to:


- Identify and mitigate GenAI vulnerabilities using adversary simulation, OWASP and MITRE Atlas frameworks, and apply AI security and ethical principles in real-world scenarios.
- Execute and defend against advanced adversarial attacks, including prompt injection and data poisoning, utilizing CTF-style exercises and real-world LLM use-cases.
- Build an LLM firewall, leveraging custom models to protect against adversarial attacks and secure enterprise AI services.
- Develop and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection and benchmarking models for security.
- Implement RAG to train custom LLM agents and solve specific security challenges, such as compliance automation, cloud policy generation & Security Operations Copilot.
- Establish a comprehensive LLM SecOps process, to secure the GenAI supply chain against adversarial attacks.

Learning Objectives:

  • Proficiency in identifying and mitigating GenAI vulnerabilities, applying security and ethical principles to real-world scenarios, and combating advanced adversarial attacks including prompt injection and data poisoning.
  • Skills to build and deploy enterprise-grade LLM defenses, including custom guardrails and models for input/output protection, alongside practical experience in securing AI services against adversarial threats.
  • The ability to develop custom LLM agents for specific security challenges, such as compliance automation and cloud policy generation, and establish a comprehensive SecOps process to enhance GenAI supply chain security.

* Please note: This is not included in the Main Conference registration and requires a separate registration.

Get in touch
Get in touch
Customer Service
For any and all inquiries please click the button below
Speaking Opportunities

Tim Garon
Director, Event Content and Strategy

InfoSec World
Stay Informed
Join our mailing list for the latest news on InfoSec World 2024.