This intensive, 4-day hands-on workshop provides a deep dive into the evolving threat landscape of Large Language Models (LLMs), focusing on the updated 2025 OWASP Top 10 for LLM Applications. The course bridges the gap between traditional application security and AI-native security, covering risks from input manipulation to architectural vulnerabilities. Participants will learn to identify, exploit, and mitigate critical flaws in LLM-powered applications, including RAG systems and autonomous agents. The training emphasizes a practical “offensive-defensive” approach, featuring labs for prompt injection, data poisoning, excessive agency, and securing output handlers.
By the end of this course, participants will be able to:
Learning Objectives
Topics
Threat Modeling for LLM Systems
Learning Objectives
Topics
Lab: Break a tool‑enabled agent
Learning Objectives
Topics
Learning Objectives
Topics
Learning Objectives
Topics
| Course Name | Date | Time | |
|---|---|---|---|
| Course NameTop 10 LLM Security Risks with Solutions | Date05/04/2026 - 05/08/2026 | Time09:00 AM-05:00 PM |