Large language models (LLMs) are used in an increasing number of applications that handle more critical tasks and are granted greater degrees of autonomy. While such agentic applications are powerful and versatile, LLM-based applications remain brittle and vulnerable to LLM-specific security threats.
In this webinar, Vladislav Tushkanov from Kaspersky AI Research Center will introduce one of these key vulnerabilities, prompt injection. He will examine the most recent and notable cases, such as EchoLeak, where prompt injections in production LLM applications led to significant privacy breaches. Finally, we will implement a demonstration agent to explore how these attacks function under the hood.
Join the webinar for:
Additionally, Vladislav will present a newly released Kaspersky online training program on Large Language Model Security. This course covers both the theoretical foundations of how LLMs are trained to resist attacks and why this training may fail, while also providing extensive practical guidance for testing LLM application security and implementing effective safeguards against real-world attacks.
Register for the webinar
Please provide additional information below.