2024 Summit
Fiesta 6
9 am - 5 pm, Thursday, September 26
AI Security Summit
About

Real-world AI: Promise and Peril

The AI revolution is real and profound, but also riven with hype and speculation. How can cyber professionals slice through the chatter to understand, and deploy, AI technology now and in the near future? How can we keep pace with adversaries who are embracing AI tools at breakneck speed? And how can we guard the security of our own enterprise AI programs? This full-day summit will equip you and your team to act now by focusing on the most immediate priorities and practical applications.

Securing AI Models: Navigating Challenges and Implementing Best Practices

Primary Speaker:  Parul Khanna, MS, CISSP, CCSP, CISM, CRISC, CISA, CDPSE, CCSK – Senior Consultant, Information Risk Management, Manulife

Securing AI models is a critical imperative amid the rapid integration of artificial intelligence into diverse applications. This session navigates the challenges of AI model security and advocates for the implementation of best practices. It addresses the complexities of safeguarding sensitive data, mitigating adversarial attacks, and ensuring model interpretability and accountability, and it emphasizes the need for continuous monitoring and updates, given the dynamic nature of AI security. Through effective encryption, access controls, and validation techniques, organizations can fortify AI models against evolving threats, fostering trust and reliability in the deployment of intelligent systems.
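
As one illustration of the validation techniques the session points to (our own sketch, not the speaker's material): verifying a checksum of a model artifact before loading it is a simple guard against tampering. The file path and expected digest below are hypothetical placeholders.

    # Minimal sketch (illustration only): verify a model artifact's SHA-256
    # digest before loading, to detect tampering in storage or transit.
    import hashlib
    from pathlib import Path

    # Hypothetical values: the path and the digest recorded at training time.
    MODEL_PATH = Path("model.bin")
    EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    def verify_model(path: Path, expected: str) -> None:
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest != expected:
            raise RuntimeError(f"model checksum mismatch: {digest}")

    verify_model(MODEL_PATH, EXPECTED_SHA256)  # raises if the file was altered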

 

Should I Trust the Next Generation of LLMs to Check My Program?

Primary Speaker:  Mark Sherman, PhD – Technical Director, Carnegie Mellon University/Software Engineering Institute

LLMs like ChatGPT, LLaMA, and Copilot are among the hottest new machine-learning-based systems to appear on the Internet. They can both create and analyze computer source code. Early results with these technologies revealed shortcomings in practical use, but since their mass introduction, additional research and improvements have made them more effective at assisting programmers. In this talk, we share our measurements of how well current LLMs recognize and fix security problems in source code.
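
For a flavor of the task (our own illustration, not taken from the talk): below is the kind of defect such evaluations probe, a SQL injection and its parameterized fix, which an LLM under test would be asked to recognize and repair.

    # Illustrative only: the sort of flaw LLM code-checking studies measure.
    import sqlite3

    def find_user_vulnerable(conn: sqlite3.Connection, name: str):
        # Vulnerable: user input is interpolated straight into the SQL string,
        # so name = "x' OR '1'='1" returns every row in the table.
        return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

    def find_user_fixed(conn: sqlite3.Connection, name: str):
        # Fixed: a parameterized query keeps the input out of the SQL grammar.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()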

 

Securing Azure OpenAI Apps in the Enterprise

Primary Speaker:  Karl Ots, CISSP, CCSP – Head of Cloud Security, EPAM Systems

In this session, we explore the core controls for securing the use of OpenAI’s services in an enterprise environment. We cover which controls are available, which are missing, what their effective coverage is, and how to implement them.

Walking out of the session, you will be able to identify and implement the security controls that make sense for your organization, spot the gaps that remain, and know how to mitigate them.
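
One control in this vein (a sketch under our own assumptions, not the speaker's walkthrough): replacing API keys with Microsoft Entra ID authentication via the azure-identity and openai Python packages. The endpoint, API version, and deployment name below are placeholders.

    # Sketch: keyless (Microsoft Entra ID) auth to Azure OpenAI instead of API keys.
    from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    from openai import AzureOpenAI

    token_provider = get_bearer_token_provider(
        DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
    )
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
        azure_ad_token_provider=token_provider,  # no key material in code or config
        api_version="2024-02-01",
    )
    reply = client.chat.completions.create(
        model="<your-deployment>",  # the Azure deployment name, a placeholder here
        messages=[{"role": "user", "content": "ping"}],
    )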

 

Harnessing AI to Detect Sensitive Data Exfiltration: A Comprehensive Guide

Primary Speaker:  Samuel R. Cameron, CISSP, CCSP, C|EH, CASP – Security Architect, Cisco Systems

As data exfiltration becomes a growing concern in today's shifting threat landscape, conventional security measures often struggle to keep pace. This session introduces an innovative approach that uses artificial intelligence (AI) to identify data exfiltration. We'll discuss the architecture, data, and methodology behind the solution, providing insight into how a model can learn to recognize exfiltration patterns.
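
As a toy illustration of the idea (our sketch, not the session's architecture): an unsupervised anomaly detector trained on normal outbound-traffic features can flag hosts whose transfer behavior departs from the baseline. The feature set and values below are invented for the example.

    # Minimal sketch: unsupervised anomaly detection over hypothetical per-host
    # outbound-traffic features, using scikit-learn's IsolationForest.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Hypothetical features per host-hour: [MB out, distinct destinations, after-hours ratio]
    normal = rng.normal(loc=[50, 8, 0.1], scale=[15, 3, 0.05], size=(500, 3))
    exfil = rng.normal(loc=[900, 40, 0.8], scale=[100, 5, 0.1], size=(5, 3))

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
    print(model.predict(exfil))  # -1 flags an anomaly: likely exfiltration candidates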

 

Machine Learning Poisoning: How Attackers Can Manipulate AI Models for Malicious Purposes

Primary Speaker:  Muhammad Shahmeer – CEO, Younite

The use of machine learning and artificial intelligence is on the rise across industries, including cybersecurity. These technologies have shown great potential in detecting and mitigating cyber threats, but they also come with their own set of risks. One of the most significant is machine learning poisoning, in which an attacker manipulates the training data or the learning algorithm behind an AI model to compromise its accuracy or functionality.
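
A toy demonstration of the concept (our sketch, not the speaker's material): flipping a fraction of training labels, one simple form of data poisoning, typically lowers a classifier's accuracy on clean test data.

    # Minimal sketch of a label-flipping poisoning attack, for illustration only.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Attacker flips 30% of the training labels (a simple availability attack).
    rng = np.random.default_rng(0)
    flip = rng.random(len(y_tr)) < 0.30
    poisoned = LogisticRegression(max_iter=1000).fit(X_tr, np.where(flip, 1 - y_tr, y_tr))

    print(f"clean accuracy:    {clean.score(X_te, y_te):.3f}")
    print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.3f}")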

By the end of this presentation, attendees will have a better understanding of the dangers of machine learning poisoning attacks and the steps that organizations can take to protect their AI models and improve their security posture.

* Please note: This is not included in the Main Conference registration and requires a separate registration.

Get in touch
Customer Service
For any and all inquiries, please contact our Customer Service team.
Speaking Opportunities

Tim Garon
Director, Event Content and Strategy

InfoSec World
Stay Informed
Join our mailing list for the latest news on InfoSec World 2024.