2024 Session
3:45 pm - 4:30 pm, Tuesday, September 24
Finetuning Large Language Models (LLMs) for Developing Security Log Detections

Traditional security log detections rely on static rules, but more mature detections, such as those that must catch adaptive or evasive behavior, cannot be expressed in rule-based logic. Recognizing this limitation, we can turn to machine learning to develop dynamic security detections.
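To make the limitation concrete, consider a toy static rule for one well-known obfuscation vector: PowerShell's `-EncodedCommand` flag. The rule below is hypothetical and purely illustrative; real detection content uses many such rules, but all share the same weakness that trivial variation slips past them.

```python
import re

# Toy static rule: flag PowerShell invocations that pass a
# Base64-encoded payload via the "-EncodedCommand" flag.
ENCODED_CMD = re.compile(r"-EncodedCommand", re.IGNORECASE)

def rule_based_detect(command: str) -> bool:
    """Return True if the command line matches the static rule."""
    return bool(ENCODED_CMD.search(command))

# The rule catches the literal flag...
print(rule_based_detect("powershell.exe -EncodedCommand SQBFAFgA..."))
# ...but PowerShell accepts abbreviated flags, so "-enc" evades it.
print(rule_based_detect("powershell.exe -enc SQBFAFgA..."))
```

Every such evasion requires another hand-written rule, which is exactly the maintenance treadmill that motivates a learned detector.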

While most people have been using LLMs for chatbot-like assistants, LLMs can also be leveraged for classification tasks, including security log detections. Our talk will take an in-depth look at one particular detection use case: command obfuscation. We will demonstrate how to detect command obfuscation by finetuning a popular open-source LLM, and how you can generalize this training framework to other security detections.
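The talk finetunes an open-source LLM for this, but the underlying framing is simply binary sequence classification over raw command strings. As a dependency-free sketch of that framing (a toy stand-in, not the LLM approach itself), the code below trains a character-bigram logistic classifier on a handful of hypothetical labeled commands; in the real setup, the LLM's learned representation replaces these hand-rolled features.

```python
import math
from collections import Counter

def featurize(cmd: str) -> Counter:
    """Character-bigram counts: a crude stand-in for LLM token features."""
    return Counter(cmd[i:i + 2] for i in range(len(cmd) - 1))

def sigmoid(z: float) -> float:
    # Clamp to avoid overflow in math.exp for extreme scores.
    return 1.0 / (1.0 + math.exp(-max(min(z, 50.0), -50.0)))

def train(samples, epochs=300, lr=0.5):
    """Fit logistic regression with plain stochastic gradient descent."""
    weights, bias = {}, 0.0
    for _ in range(epochs):
        for cmd, label in samples:
            feats = featurize(cmd)
            z = bias + sum(weights.get(f, 0.0) * c for f, c in feats.items())
            err = label - sigmoid(z)  # gradient of the log-loss
            bias += lr * err
            for f, c in feats.items():
                weights[f] = weights.get(f, 0.0) + lr * err * c
    return weights, bias

def predict(weights, bias, cmd):
    """Probability that the command is obfuscated (label 1)."""
    feats = featurize(cmd)
    z = bias + sum(weights.get(f, 0.0) * c for f, c in feats.items())
    return sigmoid(z)

# Hypothetical toy labels: 1 = obfuscated, 0 = benign.
train_set = [
    ("ls -la /home", 0),
    ("cat /etc/passwd", 0),
    ("echo 'hello world'", 0),
    ('e"c"h"o" $IFS$9hello', 1),
    ("b'a's'h -c $'\\x6c\\x73'", 1),
    ("${!#}a${!#}t /e??/p*wd", 1),
]
w, b = train(train_set)
```

The structure carries over to the LLM setting: swap the bigram featurizer for a pretrained model with a classification head and finetune on labeled commands, and the same train/predict loop applies at a higher level of abstraction.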

Learning Objectives:

  • Participants will learn how to finetune Large Language Models (LLMs) for security log detections in their own environment
  • Participants will understand different command obfuscation detection methods and the advantages of finetuning LLMs for this task
  • Participants will walk away with a better understanding of the theory behind how LLMs work