Large language models (LLMs) such as ChatGPT are transforming how software is developed and evaluated, though claims that AI will replace programmers are often overstated. This session examines the core technologies behind LLMs and their role in generating and assessing software, emphasizing how training data that contains insecure coding practices shapes their output. We share insights from analyzing over 100 million lines of C, C++, and Java code using ChatGPT 3.5, ChatGPT 4, and GitHub Copilot. Participants will gain a clear understanding of the benefits and pitfalls of LLMs, strategies for mitigating the associated risks, and a perspective on the future of secure AI-driven software development.
In this session you will: