Finetuning Large Language Models (LLMs) for Developing Security Log Detections
About
Traditional security log detections rely on static rules, but more mature detections are often impossible to express as rule-based logic. Recognizing this challenge, we can turn to machine learning to build dynamic security detections.
While most people have been using LLMs as chatbot-like assistants, LLMs can also be applied to classification tasks, including security log detections. Our talk goes in-depth on one particular detection use case: command obfuscation detection. We will demonstrate how to detect command obfuscation by finetuning a popular open-source LLM, and how this training framework generalizes to other security detections.
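As a rough illustration of the approach described above, the sketch below fine-tunes a pretrained model with a sequence-classification head on labeled commands. Everything here is an assumption for illustration: `distilbert-base-uncased` stands in for the open-source LLM used in the talk, and the tiny `TRAIN_DATA` set and hyperparameters are placeholders for a real labeled corpus and tuned settings.

```python
# Illustrative sketch only: model name, dataset, and hyperparameters are
# placeholders, not the talk's actual training setup.

# Tiny illustrative dataset: (command, label), 1 = obfuscated, 0 = benign.
TRAIN_DATA = [
    ("ls -la /var/log", 0),
    ("cat /etc/passwd", 0),
    ('echo "ZWNobyBwd25lZA==" | base64 -d | sh', 1),
    ("p''o''w''e''r''s''h''e''l''l -w hidden", 1),
]


def finetune(model_name="distilbert-base-uncased", epochs=3, lr=2e-5):
    """Fine-tune a pretrained model as a binary obfuscation classifier."""
    # Heavy imports are kept inside the function so the dataset above can be
    # inspected without pulling in torch/transformers.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2
    )

    texts = [cmd for cmd, _ in TRAIN_DATA]
    labels = torch.tensor([y for _, y in TRAIN_DATA])
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        out = model(**batch, labels=labels)  # loss computed by the model head
        out.loss.backward()
        optimizer.step()
    return tokenizer, model


if __name__ == "__main__":
    finetune()  # requires network access to download pretrained weights
```

In practice the same pattern applies to any log-line classification task: swap in a different labeled dataset and label set, and the training loop stays the same.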
Learning Objectives:
Participants will learn how to finetune Large Language Models (LLMs) for security log detections in their own environment
Participants will understand different command obfuscation detection methods and the advantages of finetuning LLMs for this task
Participants will walk away with a better understanding of the theory behind how LLMs work
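For context on the "different command obfuscation detection methods" mentioned above, a common non-ML baseline scores simple indicators such as string entropy and encoding markers. The sketch below is a guess at such a heuristic, not a method from the talk, and its thresholds are illustrative rather than tuned values.

```python
# Illustrative rule-based baseline for command obfuscation detection.
# Indicators and thresholds are assumptions for demonstration only.
import math
from collections import Counter


def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical distribution."""
    if not s:
        return 0.0
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())


def looks_obfuscated(cmd: str, entropy_threshold: float = 4.0) -> bool:
    """Flag a command when at least two simple indicators fire."""
    indicators = 0
    if shannon_entropy(cmd) > entropy_threshold:
        indicators += 1  # unusually high character diversity
    lowered = cmd.lower()
    if "base64" in lowered or "-encodedcommand" in lowered:
        indicators += 1  # explicit encoding machinery on the command line
    specials = sum(ch in "^`\\\"'{}$+" for ch in cmd)
    if specials / max(len(cmd), 1) > 0.15:
        indicators += 1  # heavy use of quoting/escape characters
    return indicators >= 2
```

Heuristics like this are brittle: attackers can keep entropy low or avoid the flagged substrings, which is exactly the gap a learned classifier aims to close.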