Introduction
When an AI generates text or answers, the output can sometimes be confusing, incorrect, or inappropriate. Output guardrails are checks applied to the AI's responses before they reach the user, keeping those responses safe, clear, and useful.
Imagine a helpful robot assistant in a library that answers questions. The robot follows a set of rules: never share private information, avoid rude words, and always give correct facts. These rules keep the robot helpful and safe for everyone.
┌───────────────────────────┐
│        User Input         │
└────────────┬──────────────┘
             │
             ▼
┌───────────────────────────┐
│    AI Generates Output    │
└────────────┬──────────────┘
             │
             ▼
┌───────────────────────────┐
│     Output Guardrails     │
│   (Filters and Checks)    │
└────────────┬──────────────┘
             │  output safe and clear
             ▼
┌───────────────────────────┐
│       User Receives       │
│      Guarded Output       │
└───────────────────────────┘
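
The flow above can be sketched in code. This is a minimal, illustrative example, not a production guardrail system: the word list, the pattern list, and the replacement messages are all hypothetical, and real systems typically use trained classifiers rather than simple string matching.

```python
# Minimal sketch of an output guardrail pipeline (illustrative only).
# BLOCKED_WORDS and PRIVATE_PATTERNS are hypothetical placeholders.

BLOCKED_WORDS = {"stupid", "idiot"}           # hypothetical rude-word list
PRIVATE_PATTERNS = ["ssn:", "password:"]      # hypothetical private-info markers


def contains_rude_words(text: str) -> bool:
    """Check: does the output contain any blocked word?"""
    return any(word in text.lower() for word in BLOCKED_WORDS)


def leaks_private_info(text: str) -> bool:
    """Check: does the output look like it leaks private data?"""
    return any(pattern in text.lower() for pattern in PRIVATE_PATTERNS)


def guard_output(raw_output: str) -> str:
    """Run the AI's raw output through each check before the user sees it."""
    if leaks_private_info(raw_output):
        return "Sorry, I can't share that information."
    if contains_rude_words(raw_output):
        return "Sorry, let me rephrase that more politely."
    return raw_output  # passed all checks: safe and clear


print(guard_output("The library opens at 9 AM."))  # passes through unchanged
print(guard_output("password: hunter2"))           # caught by the private-info check
```

Each check is a small function, so new rules can be added without touching the others; the guard simply runs the output through every filter and only releases it if all checks pass.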