About Show #1031
AI Agents can be powerful tools for an organization - but are they a security risk? Richard talks to Niall Merrigan about his experiences dealing with the various ways that LLMs can be attacked, starting with prompt injection. While some attacks are humorous, others can be very serious, especially in the context of agents, where the right prompt can cause an agent to use its capabilities to access or affect data outside its expected behavior. This has already led to several well-publicized CVEs, including the ServiceNow Privilege Escalation advisory. New tools have emerged to help restrict prompts and keep agents on task - but as with all things security, this is another set of tools you need to get familiar with!
Links
- AI Recommendation Poisoning
- Detecting Prompt Injection Attacks
- Mark Russinovich Crescendo Multi-Turn LLM Jailbreak Attack
- Cross-Site Scripting (XSS)
- Cameron Mattis LinkedIn
- Privilege Escalation in ServiceNow AI Platform
- Azure AI Content Safety Prompt Shields
- Task Adherence
- Simon Willison's Lethal Trifecta
- Microsoft Agent 365
- PyRIT
- OWASP Securing Agentic Applications Guide
Recorded February 16, 2026