Programmatically define guardrail rules for both text and image content.
The Guard system supports the following safety codes, which can be added to your system prompt to instruct the model to block or filter out content that matches each code.
| Code | Description |
|---|---|
| S1 | Violent Crimes |
| S1_IMAGE | Gore (Image) |
| S2 | Non-Violent Crimes |
| S3 | Sex-Related Crimes |
| S4 | Child Sexual Exploitation |
| S5 | Defamation |
| S6 | Specialized Advice |
| S7 | Privacy |
| S8 | Intellectual Property |
| S9 | Indiscriminate Weapons |
| S10 | Hate |
| S11 | Suicide & Self-Harm |
| S12 | Sexual Content |
| S12_IMAGE | Nudity (Image) |
| S13 | Elections |
| S14 | Code Interpreter Abuse |
| S15_IMAGE | NSFW (Image) |
| ALL | All categories |
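The taxonomy above can be represented in code when building guard prompts programmatically. The following is a minimal Python sketch; the `GUARD_CODES` mapping and `build_guard_instruction` helper are illustrative names, not part of any SDK:

```python
# Map Guard safety codes to their descriptions (from the table above).
GUARD_CODES = {
    "S1": "Violent Crimes",
    "S1_IMAGE": "Gore (Image)",
    "S2": "Non-Violent Crimes",
    "S3": "Sex-Related Crimes",
    "S4": "Child Sexual Exploitation",
    "S5": "Defamation",
    "S6": "Specialized Advice",
    "S7": "Privacy",
    "S8": "Intellectual Property",
    "S9": "Indiscriminate Weapons",
    "S10": "Hate",
    "S11": "Suicide & Self-Harm",
    "S12": "Sexual Content",
    "S12_IMAGE": "Nudity (Image)",
    "S13": "Elections",
    "S14": "Code Interpreter Abuse",
    "S15_IMAGE": "NSFW (Image)",
}

def build_guard_instruction(codes):
    """Render a blocking instruction for the given safety codes.

    "ALL" expands to every code in the taxonomy.
    """
    if codes == ["ALL"]:
        codes = list(GUARD_CODES)
    unknown = [c for c in codes if c not in GUARD_CODES]
    if unknown:
        raise ValueError(f"Unknown safety codes: {unknown}")
    lines = [f"- {c}: {GUARD_CODES[c]}" for c in codes]
    return "Block any content matching these safety categories:\n" + "\n".join(lines)
```

A helper like this keeps the code list in one place, so the same taxonomy can drive both text and image filtering prompts.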
To enable content safety guardrails, include the guard configuration in your system prompt using the following format:
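As a sketch of how such a system prompt might be assembled for a chat request, here is a Python example; the guard directive wording, model name, and message payload shape are illustrative assumptions (consult your provider's documentation for the exact configuration syntax):

```python
# Build a chat request whose system prompt enables Guard safety codes.
# The directive wording below is illustrative, not a confirmed syntax.
GUARD_SYSTEM_PROMPT = (
    "You are a helpful assistant.\n"
    "Guard: block content matching safety codes S1, S9, S11."
)

messages = [
    {"role": "system", "content": GUARD_SYSTEM_PROMPT},
    {"role": "user", "content": "Tell me about chemistry."},
]

# With the OpenAI SDK, this payload would then be sent as, e.g.:
# client.chat.completions.create(model="<your-model>", messages=messages)
```

Because the guard configuration lives entirely in the system prompt, the same `messages` structure works unchanged across OpenAI-compatible SDKs.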
- Basic Safety Guardrails
- Comprehensive Content Filtering
- Image Safety Detection