A predefined alternative action an AI agent takes when its primary approach fails, such as escalating to a human when confidence is low, switching to a simpler model when latency spikes, or returning a canned response when the knowledge base has no match. Well-designed fallbacks prevent agents from failing silently or producing low-quality outputs.
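The pattern described above can be sketched as an ordered fallback chain: try the primary approach, check its quality, and step down to progressively simpler alternatives. A minimal sketch (all function names, the confidence threshold, and the canned response are illustrative assumptions, not a real agent framework):

```python
# A fallback chain: each stage either returns a usable answer or the
# agent steps down to the next, simpler stage. Names and values here
# are hypothetical placeholders for real model/retrieval calls.

CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff below which we fall back

def primary_model(query):
    # Stand-in for the primary model; returns (answer, confidence).
    if "refund" in query:
        return ("Refunds are processed within 5 business days.", 0.9)
    return ("", 0.2)  # low confidence: no good answer

def knowledge_base_lookup(query):
    # Stand-in for a simpler retrieval step; None means no match.
    kb = {"hours": "We are open 9am-5pm, Monday through Friday."}
    for key, answer in kb.items():
        if key in query:
            return answer
    return None

def answer_with_fallback(query):
    # 1. Primary approach: accept the model's answer only if confident.
    answer, confidence = primary_model(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    # 2. Fallback: a simpler, deterministic knowledge-base lookup.
    kb_answer = knowledge_base_lookup(query)
    if kb_answer is not None:
        return kb_answer
    # 3. Final fallback: a canned response that escalates to a human,
    #    so the agent never fails silently.
    return "I'm not sure about that. Connecting you to a support agent."

print(answer_with_fallback("How do I get a refund?"))
print(answer_with_fallback("What are your hours?"))
print(answer_with_fallback("Can I change my shipping address?"))
```

The key design choice is that every stage has an explicit failure signal (low confidence, `None`), so the decision to fall back is deliberate rather than an unhandled error.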