Key Safety Categories Covered by WSO2 AI Guardrails
WSO2 AI Guardrails comprehensively address critical safety aspects across four foundational categories, ensuring secure, compliant, and reliable AI interactions:
| Category | Description |
|----------|-------------|
| Content Safety | Detects and filters toxic, harmful, or offensive content to ensure safe and appropriate AI outputs. |
| Content Usage Control | Enforces organizational policies by applying word, sentence, and content usage guidelines consistently (see the sketch after the table). |
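
To make the Content Usage Control idea concrete, the following is a minimal, hypothetical sketch of a rule-based usage check. The `UsagePolicy` structure and `violates_policy` function are illustrative assumptions for this example only and do not reflect WSO2 AI Guardrails' actual configuration or API.

```python
# Hypothetical sketch: a rule-based content usage check.
# Names and structures here are illustrative, not WSO2 APIs.
import re
from dataclasses import dataclass, field

@dataclass
class UsagePolicy:
    """Organizational word/phrase rules applied to AI inputs or outputs."""
    banned_phrases: list[str] = field(default_factory=list)
    max_sentences: int | None = None

def violates_policy(text: str, policy: UsagePolicy) -> list[str]:
    """Return human-readable reasons the text breaks the policy."""
    reasons = []
    lowered = text.lower()
    for phrase in policy.banned_phrases:
        if phrase.lower() in lowered:
            reasons.append(f"contains banned phrase: {phrase!r}")
    if policy.max_sentences is not None:
        # Rough sentence count based on terminal punctuation.
        sentence_count = len(re.findall(r"[.!?]+", text)) or 1
        if sentence_count > policy.max_sentences:
            reasons.append(
                f"exceeds sentence limit ({sentence_count} > {policy.max_sentences})"
            )
    return reasons

if __name__ == "__main__":
    policy = UsagePolicy(banned_phrases=["internal use only"], max_sentences=3)
    reply = "This document is for internal use only. Do not share."
    print(violates_policy(reply, policy))
```

In practice, checks like these run as policies attached to AI API requests and responses, so violations can be blocked or flagged before content reaches the user.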
Explore the comprehensive guardrail capabilities offered within each safety category.