LLM Dashboard

Settings

Configure your API connections, middleware, and preferences

API Configuration

Manage your LLM provider credentials and endpoints

Never share your API key. Rotate it regularly for security.

Connection verified — last checked 5 minutes ago

Middleware

Control request processing behavior and performance optimizations

Response Caching

Cache identical prompts to reduce latency and cost
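Caching identical prompts usually means keying the cache on both the prompt text and the sampling parameters, since the same prompt at a different temperature is a different request. A minimal in-memory sketch (the class and its methods are illustrative, not the dashboard's actual implementation):

```python
import hashlib
import json

class PromptCache:
    """In-memory response cache keyed by a hash of the prompt plus model parameters."""

    def __init__(self):
        self._store = {}

    def _key(self, prompt, params):
        # Identical prompt + params produce an identical key, so repeat
        # requests hit the cache and skip the provider call entirely.
        payload = json.dumps({"prompt": prompt, "params": params}, sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def get(self, prompt, params):
        return self._store.get(self._key(prompt, params))

    def put(self, prompt, params, response):
        self._store[self._key(prompt, params)] = response
```

A production cache would also bound its size and expire entries, but the keying scheme is the essential part.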

Rate Limiting

Enforce per-user request rate limits to prevent abuse
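Per-user rate limits are commonly enforced with a token bucket: each request spends one token, and tokens refill at a fixed rate up to a cap. A sketch, assuming one bucket per user and a steady refill rate (both values here are placeholders):

```python
import time

class TokenBucket:
    """Token bucket: each request consumes one token; tokens refill at a fixed rate."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = capacity      # burst allowance
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill based on elapsed time, then try to spend one token.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a middleware layer, a dictionary mapping user ID to bucket gives the per-user behavior described above.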

Request Logging

Log all requests and responses for audit and debugging

Prompt Sanitization

Strip sensitive patterns (PII, keys) before forwarding prompts
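Stripping sensitive patterns before forwarding can be sketched as a pass over a list of regex/placeholder pairs. The patterns below are simplified illustrations; a real deployment would use a vetted PII and secret detector rather than hand-rolled regexes:

```python
import re

# Hypothetical patterns for illustration only.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN format
    (re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "[API_KEY]"),      # key-like tokens
]

def sanitize(prompt):
    """Replace sensitive substrings with placeholders before the prompt is forwarded."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```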

Streaming Responses

Enable token streaming for lower time-to-first-byte

Safety & Monitoring

Configure moderation thresholds and alerting rules

Content Moderation

Run inputs through safety classifiers before processing

PII Detection

Flag requests containing personally identifiable information
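Unlike sanitization, detection only flags the request rather than rewriting it. A minimal sketch, assuming a map of named detectors (real systems typically use a dedicated PII classifier, not these toy regexes):

```python
import re

# Illustrative detectors only; each maps a PII kind to a pattern.
DETECTORS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[-\s]?){3}\d{4}\b"),
}

def detect_pii(text):
    """Return the kinds of PII found; the request is flagged if the set is non-empty."""
    return {kind for kind, pattern in DETECTORS.items() if pattern.search(text)}
```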

Jailbreak Detection

Detect and block prompt injection and jailbreak attempts

Error Rate Threshold

Alert when the error rate exceeds this percentage
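An error-rate alert like this can be sketched as a rolling-window check; the window size and the way outcomes are counted here are assumptions, not the dashboard's actual mechanism:

```python
from collections import deque

class ErrorRateMonitor:
    """Track the last N request outcomes and alert when the error rate exceeds a threshold."""

    def __init__(self, threshold_pct, window=100):
        self.threshold_pct = threshold_pct
        self.outcomes = deque(maxlen=window)  # True = success, False = error

    def record(self, ok):
        self.outcomes.append(ok)

    def error_rate_pct(self):
        if not self.outcomes:
            return 0.0
        errors = sum(1 for ok in self.outcomes if not ok)
        return 100.0 * errors / len(self.outcomes)

    def should_alert(self):
        return self.error_rate_pct() > self.threshold_pct
```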

Notifications

Choose when and how you receive alerts

Email Alerts

Receive critical alerts and weekly summaries by email

High Severity Safety Events

Immediate notification when a high-severity safety event is detected

Cost Threshold Alerts

Alert when daily spend exceeds $10

Latency Degradation

Alert when p99 latency exceeds 2× baseline for 5 minutes
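The p99-versus-baseline comparison can be sketched with a nearest-rank percentile over a window of latency samples; feeding a 5-minute window and recording the baseline are left to the caller here, as assumptions:

```python
import math

def p99(samples):
    """Nearest-rank 99th percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = math.ceil(0.99 * len(ordered))
    return ordered[rank - 1]

def latency_degraded(samples, baseline_p99_ms, factor=2.0):
    """True when the window's p99 exceeds `factor` times the recorded baseline.

    The caller is assumed to pass a 5-minute window of samples, matching
    the alert rule described above.
    """
    return p99(samples) > factor * baseline_p99_ms
```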

Audit & Retention

Configure how long request data is retained

Logs older than this period are automatically deleted.
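Automatic deletion by age amounts to filtering entries against a cutoff timestamp. A sketch, assuming log entries are dicts with a timezone-aware `timestamp` field (the schema is hypothetical):

```python
from datetime import datetime, timedelta, timezone

def purge_expired(logs, retention_days):
    """Keep only entries newer than the retention cutoff; older ones are dropped."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    return [entry for entry in logs if entry["timestamp"] >= cutoff]
```

In practice this would run as a scheduled job against the log store rather than over an in-memory list.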

Export Data

Download all your request logs as CSV
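The CSV export can be sketched with the standard `csv` module; the column names below are assumed for illustration and need not match the dashboard's actual export schema:

```python
import csv
import io

def export_logs_csv(logs):
    """Serialize request-log dicts to a CSV string (columns are assumptions)."""
    fieldnames = ["timestamp", "user", "model", "status"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for entry in logs:
        writer.writerow(entry)
    return buf.getvalue()
```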