Discover how over 100 large language models perform on real-world secure coding tasks and what their limitations mean for developers, security teams, and businesses.
As generative AI becomes a mainstream tool for software development, one question grows increasingly urgent: can we trust AI to write secure code?
In this session, you will hear the key findings from the 2025 GenAI Code Security Report, one of the most comprehensive evaluations of code security to date, spanning more than 100 large language models. Covering Java, Python, C#, and JavaScript, our research reveals troubling trends, including high failure rates on critical security tasks and no measurable improvement in security performance over time, even as models grow more powerful.
Join us to learn:
How often AI-generated code introduces vulnerabilities, and in which languages
What types of security issues are most common
Why newer, bigger models aren’t necessarily safer
The hidden risks facing your software supply chain
What developers and security teams must do to stay ahead
Whether you’re a developer, security lead, or business decision-maker, this session will help you navigate the real-world security implications of GenAI in your development workflow.