API Reference
Demeterics provides OpenAI-compatible reverse proxy endpoints for Groq, OpenAI, Anthropic, and Gemini. Use your Demeterics API key as the Bearer token, and we'll automatically track usage, bill credits, and store interactions in BigQuery.
All endpoints require an Authorization: Bearer <DEMETERICS_API_KEY> header. Replace https://api.demeterics.com with your deployment URL if self-hosting.
Important
- Use your Demeterics API key (from /api-keys), NOT your vendor API key
- Demeterics handles vendor authentication, credit billing, and usage tracking automatically
- For BYOK (Bring Your Own Key), store your vendor keys in Settings → API Keys
- Model identifiers vary by provider; use the model name from your provider's documentation
LLM Reverse Proxy Endpoints
Groq Proxy
Base URL: https://api.demeterics.com/groq
Supported endpoints:
- POST /v1/chat/completions - Chat completions (streaming supported)
- POST /v1/responses - Groq responses endpoint
- GET /v1/models - List available models
- GET /groq/health - Health check
Example: Chat completion with Groq
curl -X POST https://api.demeterics.com/groq/v1/chat/completions \
  -H "Authorization: Bearer dmt_your_demeterics_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.3-70b-versatile",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
Response
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "llama-3.3-70b-versatile",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 8,
    "total_tokens": 23
  }
}
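The chat completions endpoint also supports streaming. A minimal Python sketch using the OpenAI SDK (see the SDKs section below), assuming the proxy passes stream=True through to Groq unchanged:

from openai import OpenAI

# Point the OpenAI SDK at the Demeterics Groq proxy.
client = OpenAI(
    base_url="https://api.demeterics.com/groq/v1",
    api_key="dmt_your_demeterics_api_key",
)

# stream=True returns an iterator of chunks instead of a single response.
stream = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)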
OpenAI Proxy
Base URL: https://api.demeterics.com/openai
Supported endpoints:
- POST /v1/chat/completions - Chat completions
- POST /v1/completions - Text completions
- POST /v1/embeddings - Embeddings
- GET /v1/models - List models
- POST /v1/audio/transcriptions - Whisper transcription
- POST /v1/audio/translations - Whisper translation
- POST /v1/audio/speech - Text-to-speech
- POST /v1/images/generations - DALL-E image generation
- POST /v1/images/edits - Image editing
- POST /v1/images/variations - Image variations
- POST /v1/moderations - Content moderation
- GET /openai/health - Health check
Example: Chat completion with OpenAI
curl -X POST https://api.demeterics.com/openai/v1/chat/completions \
  -H "Authorization: Bearer dmt_your_demeterics_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
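The embeddings route follows the same pattern. A sketch with the OpenAI Python SDK; the model name text-embedding-3-small is illustrative, so substitute whichever embedding model your account exposes:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.demeterics.com/openai/v1",
    api_key="dmt_your_demeterics_api_key",
)

# Model name is illustrative; use any embedding model available to your account.
result = client.embeddings.create(
    model="text-embedding-3-small",
    input="Explain quantum computing in simple terms",
)

print(len(result.data[0].embedding))  # dimensionality of the returned vector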
Anthropic Proxy
Base URL: https://api.demeterics.com/anthropic
Supported endpoints:
- POST /v1/messages - Claude messages API
- POST /v1/complete - Claude completions (legacy)
- GET /v1/models - List models
- POST /v1/count_tokens - Token counting
- GET /anthropic/health - Health check
Example: Messages API with Claude
curl -X POST https://api.demeterics.com/anthropic/v1/messages \
  -H "Authorization: Bearer dmt_your_demeterics_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4.5",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Write a haiku about coding"}
    ]
  }'
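Token counting goes through the same proxy. A sketch using Python's requests library; the body shape (model plus messages) is an assumption modeled on the messages payload above, so confirm the exact schema against Anthropic's documentation:

import requests

resp = requests.post(
    "https://api.demeterics.com/anthropic/v1/count_tokens",
    headers={
        "Authorization": "Bearer dmt_your_demeterics_api_key",
        "Content-Type": "application/json",
    },
    # Body shape assumed to mirror the messages payload; verify against provider docs.
    json={
        "model": "anthropic/claude-sonnet-4.5",
        "messages": [{"role": "user", "content": "Write a haiku about coding"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())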
Gemini Proxy
Base URL: https://api.demeterics.com/gemini
Supported endpoints:
- POST /v1/models/{model}:generateContent - Generate content
- POST /v1/models/{model}:streamGenerateContent - Streaming generation
- POST /v1/models/{model}:embedContent - Embeddings
- POST /v1/models/{model}:countTokens - Token counting
- GET /v1/models - List models
- POST /v1beta/cachedContents - Cached content (beta)
- GET /gemini/health - Health check
Example: Generate content with Gemini
curl -X POST "https://api.demeterics.com/gemini/v1/models/gemini-2.0-flash-exp:generateContent" \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{
"parts": [
{"text": "Explain the theory of relativity"}
]
}
]
}'
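Embeddings use the same model-scoped URL pattern. A sketch with Python's requests library; the model name is illustrative and the body shape is assumed to follow Gemini's native embedContent request:

import requests

# Model name is illustrative; any embedding-capable Gemini model should work.
url = "https://api.demeterics.com/gemini/v1/models/text-embedding-004:embedContent"

resp = requests.post(
    url,
    headers={
        "Authorization": "Bearer dmt_your_demeterics_api_key",
        "Content-Type": "application/json",
    },
    # Body shape assumed to follow Gemini's native embedContent request.
    json={"content": {"parts": [{"text": "Explain the theory of relativity"}]}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())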
Data Management Endpoints
GET /api/v1/status
Verifies API key validity and returns service information.
Request
curl https://api.demeterics.com/api/v1/status \
-H "Authorization: Bearer dmt_your_demeterics_api_key"
Response
{
  "status": "ok",
  "project": "demeterics-api"
}
POST /api/v1/exports
Exports interaction data to JSON or CSV format. Supports streaming or GCS bucket delivery.
Request fields
format:"json"or"csv"range:{"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}filters: Optional filtering (e.g., by model, user_id)
Example
curl -X POST https://api.demeterics.com/api/v1/exports \
  -H "Authorization: Bearer dmt_your_demeterics_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "format": "json",
    "range": {
      "start": "2025-01-01",
      "end": "2025-01-31"
    }
  }'
Response
- Streams JSON/CSV data directly, OR
- Returns {"export_url": "gs://..."} if GCS export is configured
POST /api/v1/data
Requests data deletion for GDPR/privacy compliance. Supports deletion by user_id or transaction_ids.
Request fields
- user_id: Delete all data for this user (optional)
- transaction_ids: Array of transaction IDs to delete (optional)
- reason: Explanation for deletion (required)
Example
curl -X POST https://api.demeterics.com/api/v1/data \
  -H "Authorization: Bearer dmt_your_demeterics_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user@example.com",
    "reason": "GDPR deletion request"
  }'
Response
{
  "status": "ok",
  "message": "Deletion request queued"
}
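Deletion by transaction IDs works the same way. A Python sketch with requests; the transaction IDs are placeholders:

import requests

resp = requests.post(
    "https://api.demeterics.com/api/v1/data",
    headers={
        "Authorization": "Bearer dmt_your_demeterics_api_key",
        "Content-Type": "application/json",
    },
    json={
        # Placeholder IDs; supply the transaction IDs you want removed.
        "transaction_ids": ["txn_123", "txn_456"],
        "reason": "User requested removal of specific interactions",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # {"status": "ok", "message": "Deletion request queued"}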
Authentication Modes
Demeterics supports three authentication modes:
1. Demeter-Managed Keys (Default)
   - Use only your Demeterics API key
   - Demeterics provides vendor API keys automatically
   - Billed per-token via Stripe credits
2. BYOK (Bring Your Own Key)
   - Store your vendor API keys in Settings → API Keys
   - Demeterics uses your keys for API calls
   - Still tracks usage and analytics (no billing)
3. Dual-Key Mode
   - Format: Authorization: Bearer dmt_YOUR_KEY;vendor_VENDOR_KEY
   - Combines Demeterics tracking with your vendor key
   - Useful for migration or hybrid deployments
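In client code, the dual-key format is simply a composite Bearer value. A sketch using the OpenAI Python SDK against the Groq proxy; both key values are placeholders:

from openai import OpenAI

# Dual-key mode: Demeterics key and vendor key joined by a semicolon.
demeterics_key = "dmt_your_demeterics_api_key"
vendor_key = "your_vendor_api_key"  # placeholder

client = OpenAI(
    base_url="https://api.demeterics.com/groq/v1",
    api_key=f"{demeterics_key};{vendor_key}",
)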
Error Responses
All errors follow this format:
{
  "error": {
    "message": "Human-readable error description",
    "type": "error_type",
    "code": "error_code"
  }
}
Common error codes:
- 400 - Invalid request (malformed JSON, missing fields)
- 401 - Missing or invalid Authorization header
- 402 - Insufficient credits (Stripe billing required)
- 403 - API key valid but lacks permissions
- 404 - Endpoint not found or model unavailable
- 429 - Rate limited (too many requests)
- 5xx - Server error (retry with exponential backoff; see the sketch below)
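A minimal retry sketch in Python with requests; the retry count and delays are illustrative defaults, not values prescribed by the API:

import time
import requests

def post_with_backoff(url, headers, payload, max_retries=5):
    """POST with retries on 429 and 5xx, backing off exponentially."""
    resp = None
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=payload, timeout=60)
        if resp.status_code == 429 or resp.status_code >= 500:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
            continue
        break
    return resp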
Best Practices
- Idempotency: Include an X-Request-ID header for retries (see the sketch after this list)
- Streaming: Use streaming endpoints for real-time responses
- Error Handling: Implement exponential backoff for 5xx errors
- Model Validation: Don't hardcode model names; use the names from your provider's documentation
- Credit Monitoring: Check your credit balance at /credits to avoid service interruption
- BYOK Setup: Store vendor keys in Settings → API Keys for zero-billing mode
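A sketch of attaching the idempotency header with the OpenAI Python SDK's extra_headers option; generating the ID with uuid4 is just one reasonable choice:

import uuid
from openai import OpenAI

client = OpenAI(
    base_url="https://api.demeterics.com/groq/v1",
    api_key="dmt_your_demeterics_api_key",
)

# Reuse the same X-Request-ID when retrying the same logical request.
request_id = str(uuid.uuid4())

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={"X-Request-ID": request_id},
)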
SDKs
Demeterics is compatible with the OpenAI SDKs; just change the base URL and use your Demeterics API key:
Python (OpenAI SDK)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.demeterics.com/groq/v1",
    api_key="dmt_your_demeterics_api_key"
)

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Hello!"}]
)
Node.js (OpenAI SDK)
import OpenAI from 'openai';
const client = new OpenAI({
baseURL: 'https://api.demeterics.com/groq/v1',
apiKey: 'dmt_your_demeterics_api_key'
});
const response = await client.chat.completions.create({
model: 'llama-3.3-70b-versatile',
messages: [{ role: 'user', content: 'Hello!' }]
});
cURL
curl -X POST https://api.demeterics.com/groq/v1/chat/completions \
  -H "Authorization: Bearer dmt_your_demeterics_api_key" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.3-70b-versatile", "messages": [{"role": "user", "content": "Hello!"}]}'
For more examples and integration guides, visit the Quick Start page.