# OauthRouter — Complete API Documentation

> OpenAI-compatible LLM router. Users bring their own API keys or OAuth tokens for any of 11+ providers and get a single unified endpoint. No token markup.

## What is OauthRouter

OauthRouter lets developers route LLM requests to multiple AI providers (Anthropic, OpenAI, Google, Mistral, Groq, xAI, DeepSeek, Together, Cohere, Cloudflare AI, OpenRouter) through one OpenAI-compatible API. Unlike OpenRouter, users connect their **own** provider credentials (API keys or OAuth tokens such as Claude Code `sk-ant-oat` tokens), so you pay providers directly with zero middleman markup.

Key features:

- OpenAI-compatible: drop-in replacement for any `openai.chat.completions.create` call
- OAuth support for Anthropic Claude subscriptions (Claude Code CLI tokens)
- Routing strategies: latest, random, round-robin, failover, least-used
- Model filtering per token based on configured providers
- Message limits by plan (100/mo free, 1k/mo starter, 5k/mo pro, unlimited enterprise)

## Base URLs

- API Base: `https://api.oauthrouter.com/v1` (use this as `base_url` for SDKs)
- Full chat endpoint: `https://api.oauthrouter.com/v1/chat/completions`
- Models endpoint: `https://api.oauthrouter.com/v1/models`
- Dashboard: `https://oauthrouter.com/dashboard`
- OpenAPI spec: `https://oauthrouter.com/openapi.json`

## Authentication

All API requests require a Bearer token in the `Authorization` header. Tokens start with `lr_live_` and are unique per account.

```
Authorization: Bearer lr_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

To get a token:

1. Sign up at https://oauthrouter.com (free tier: 100 messages/month)
2. Add at least one provider in the Providers tab (paste your Anthropic/OpenAI/Google API key)
3. Copy the token from the "My API" tab

## Endpoint: POST /v1/chat/completions

OpenAI-compatible chat completions. Works with the official OpenAI SDKs (Python, Node.js, etc.) by setting `base_url` to `https://api.oauthrouter.com/v1`.
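The wire format is plain JSON over HTTPS, so an SDK is not strictly required; below is a minimal request-building sketch using only the Python standard library (the `lr_live_YOUR_TOKEN` placeholder and the `build_chat_request` helper are illustrative, not part of the API):

```python
import json
import urllib.request

BASE_URL = "https://api.oauthrouter.com/v1"

def build_chat_request(token: str, model: str, user_message: str) -> urllib.request.Request:
    # Same shape the OpenAI SDKs send: POST /v1/chat/completions with a
    # JSON body and the lr_live_ token in a Bearer Authorization header.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# With a real token:
# resp = json.load(urllib.request.urlopen(build_chat_request(
#     "lr_live_YOUR_TOKEN", "anthropic/claude-sonnet-4-5", "Hello!")))
# print(resp["choices"][0]["message"]["content"])
```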
### Request body

```json
{
  "model": "anthropic/claude-sonnet-4-5",
  "messages": [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello!"}
  ],
  "max_tokens": 1024,
  "temperature": 1,
  "top_p": 1,
  "stream": false
}
```

### Parameters

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| model | string | yes | — | Model identifier. Format: `provider/model-name` (e.g. `anthropic/claude-sonnet-4-5`) or just `model-name`. |
| messages | array | yes | — | Array of message objects with `role` (system/user/assistant) and `content`. |
| max_tokens | number | no | 8192 | Maximum tokens in the response. |
| temperature | number | no | 1.0 | Sampling temperature (0 to 2). Higher values are more creative. |
| top_p | number | no | 1.0 | Nucleus sampling threshold. Adjust either temperature or top_p, not both. |
| stream | boolean | no | false | If true, returns server-sent events. |

### Response

Standard OpenAI format:

```json
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "claude-sonnet-4-5-20250929",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Hello!"},
    "finish_reason": "stop"
  }],
  "usage": {"prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15}
}
```

## Endpoint: GET /v1/models

Returns the models available to the caller's token. The response depends on whether the request is authenticated:

- **Authenticated** (`Authorization: Bearer lr_live_...`): returns only models from providers the user has enabled.
- **No auth**: returns the generic catalog (useful for tooling discovery).
Response format is OpenAI-compatible:

```json
{
  "data": [
    {
      "id": "anthropic/claude-sonnet-4-5",
      "name": "Claude Sonnet 4.5",
      "context_length": 200000,
      "pricing": {"prompt": "3", "completion": "15"}
    }
  ]
}
```

## Supported providers and model IDs

| Provider | Example model IDs |
|----------|-------------------|
| anthropic | claude-opus-4-6, claude-sonnet-4-5, claude-haiku-4-5 |
| openai | gpt-4o, gpt-4o-mini, o1, o1-mini |
| google | gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash |
| mistral | mistral-large-latest, mistral-medium-latest, codestral-latest |
| groq | llama-3.3-70b-versatile, llama-3.1-8b-instant, gemma2-9b-it |
| xai | grok-3, grok-3-mini |
| deepseek | deepseek-chat, deepseek-reasoner |
| together | meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo, Qwen/Qwen2.5-72B-Instruct-Turbo |
| cohere | command-r-plus, command-r |
| cloudflare-ai | @cf/meta/llama-3.1-8b-instruct (free tier) |
| openrouter | sk-or-style routing to any OpenRouter model |

Use model IDs as `provider/model-name` (e.g. `anthropic/claude-sonnet-4-5`).
## Error responses

All errors follow a consistent JSON format:

```json
{
  "error": {
    "message": "Invalid or disabled token",
    "type": "authentication_error",
    "docs_url": "https://oauthrouter.com/docs#errors"
  }
}
```

### HTTP status codes

| Code | Type | Cause | Solution |
|------|------|-------|----------|
| 401 | authentication_error | Invalid or missing token | Reveal or regenerate your token at /dashboard/api |
| 400 | invalid_request_error | Malformed request body | Check the model name and messages array |
| 429 | rate_limit_error | Monthly message limit or budget exceeded | Upgrade your plan or wait for the reset |
| 502 | api_error | Upstream provider request failed | Verify your provider API key is valid |

## Rate limits (by plan)

| Plan | Price | Messages/month |
|------|-------|----------------|
| Free | $0 | 100 |
| Starter | $29 | 1,000 |
| Pro | $49 | 5,000 |
| Enterprise | $69 | Unlimited |

Limits reset on the 1st of each month at 00:00 UTC. Bonus messages from referrals are added on top of the monthly limit.

## Code examples

### cURL

```bash
curl https://api.oauthrouter.com/v1/chat/completions \
  -H "Authorization: Bearer lr_live_YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"anthropic/claude-sonnet-4-5","messages":[{"role":"user","content":"Hello!"}]}'
```

### Python (OpenAI SDK)

```python
from openai import OpenAI

client = OpenAI(api_key="lr_live_YOUR_TOKEN", base_url="https://api.oauthrouter.com/v1")
resp = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(resp.choices[0].message.content)
```

### Node.js / TypeScript (OpenAI SDK)

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'lr_live_YOUR_TOKEN',
  baseURL: 'https://api.oauthrouter.com/v1'
});
const resp = await client.chat.completions.create({
  model: 'anthropic/claude-sonnet-4-5',
  messages: [{ role: 'user', content: 'Hello!' }]
});
console.log(resp.choices[0].message.content);
```

### LiteLLM

```python
import litellm

# The openai/ prefix tells LiteLLM to treat the endpoint as OpenAI-compatible.
response = litellm.completion(
    model="openai/anthropic/claude-sonnet-4-5",
    api_base="https://api.oauthrouter.com/v1",
    api_key="lr_live_YOUR_TOKEN",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

### LangChain

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.oauthrouter.com/v1",
    api_key="lr_live_YOUR_TOKEN",
    model="anthropic/claude-sonnet-4-5"
)
```

## Integrations with popular tools

For GUI tools like OpenClaw, Cursor, Cline, Continue.dev, OpenWebUI, LibreChat:

- Provider type: **OpenAI-compatible** or **Custom OpenAI**
- Base URL: **https://api.oauthrouter.com/v1**
- API Key: **lr_live_YOUR_TOKEN**
- Model: **anthropic/claude-sonnet-4-5** (or any supported model ID)

## Common pitfalls

- **Wrong Base URL**: use `api.oauthrouter.com` (with the `api.` subdomain), not `oauthrouter.com`.
- **Trailing slash**: some SDKs break if the Base URL ends with `/`. Use `/v1` without a trailing slash.
- **Timeouts on slow models**: Claude Opus 4.6 can take 60+ seconds. Set the client timeout to 120s or more.
- **Model format**: both `provider/model-name` and plain `model-name` work.

## Support

- Dashboard: https://oauthrouter.com/dashboard
- OpenAPI: https://oauthrouter.com/openapi.json
- Docs HTML: https://oauthrouter.com/docs