Loading…
Loading…
Written by Max Zeshut
Founder at Agentmelt
The practice of deliberately attacking an AI agent—through adversarial prompts, prompt injection, jailbreaks, and edge-case inputs—to discover failure modes before attackers or customers do. Red teaming is a required step before launching agents in regulated domains (healthcare, finance, legal) and is increasingly standard for any customer-facing agent with write access to systems.
Back to glossary