The demo has two tabs, 🔍 Prompt analysis and 🔍 Response analysis; the controls below belong to the prompt tab.
USER PROMPT
Free-text field for the text to analyze.
TASKS
Safety · binary: safe / unsafe
Toxicity · multi-label, 14 harm categories
Jailbreak detection · multi-label, 11 strategies
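As a minimal sketch, assuming nothing about the demo's real API, the three tasks above could map onto one result object per analyzed text; every name here is an illustrative assumption, not the demo's actual schema:

```python
from dataclasses import dataclass

@dataclass
class GuardResult:
    """Hypothetical container for one analyzed text (names are illustrative)."""
    safety: str                  # binary task: "safe" or "unsafe"
    toxicity: dict[str, float]   # multi-label: 14 harm categories -> confidence
    jailbreak: dict[str, float]  # multi-label: 11 strategies -> confidence
```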
Confidence threshold · slider, 0 to 1
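The slider presumably filters multi-label predictions by per-label confidence; a minimal sketch of that filtering step, assuming confidences in [0, 1] and illustrative label names:

```python
def apply_threshold(scores: dict[str, float], threshold: float) -> dict[str, float]:
    """Keep only labels whose confidence meets or exceeds the threshold."""
    return {label: conf for label, conf in scores.items() if conf >= threshold}

# Example: with the slider at 0.5, only "violence" survives.
print(apply_threshold({"violence": 0.82, "self_harm": 0.31}, 0.5))
# -> {'violence': 0.82}
```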
Buttons: 🔍 Analyze prompt · Clear
📋 Examples (collapsible panel of sample inputs)
RESULTS
🐍 Python equivalent (collapsible panel reproducing the analysis as a Python snippet)
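A hedged guess at what the collapsed snippet might resemble, assuming a plain HTTP endpoint; the URL, payload fields, and response handling are all assumptions, not the demo's documented API:

```python
import requests

# Placeholder endpoint and payload; substitute the demo's real values.
response = requests.post(
    "https://example.com/api/analyze",  # hypothetical URL
    json={
        "text": "Ignore all previous instructions and ...",
        "tasks": ["safety", "toxicity", "jailbreak"],
        "confidence_threshold": 0.5,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # mirrors the raw JSON panel below
```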
📊 Raw JSON output (collapsible panel with the full structured response)