AgentRouter Pro

Troubleshooting

Common issues and error solutions

This document lists common issues you may encounter when using AgentRouter and how to resolve them.

Authentication Errors

401 Unauthorized

Error Message:

{
  "error": {
    "message": "Invalid API key",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}

Possible Causes:

  1. API key doesn't exist or has been deleted
  2. Incorrect API key format
  3. Missing authentication header
  4. Incorrect authentication header format

Solutions:

✅ Check API key format:

# API key must start with sk-ar-
api_key = "sk-ar-your-api-key"

✅ Confirm authentication header format:

# Correct way 1: Authorization Bearer
headers = {"Authorization": f"Bearer {api_key}"}

# Correct way 2: x-api-key
headers = {"x-api-key": api_key}

✅ Configure the SDK correctly:

from openai import OpenAI

client = OpenAI(
    api_key="sk-ar-your-api-key",  # Ensure key is correct
    base_url="https://your-agentrouter.com/v1"
)

✅ Check if key exists:

Deleted keys are rejected immediately, so confirm with your administrator that the key is still active.
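
A lightweight way to confirm the key is accepted is to make a minimal authenticated call. The sketch below assumes the deployment exposes the standard OpenAI-compatible /v1/models endpoint:

from openai import OpenAI

client = OpenAI(
    api_key="sk-ar-your-api-key",
    base_url="https://your-agentrouter.com/v1"
)

# A successful call means the key is valid; a 401 here points to the key itself
models = client.models.list()
print([m.id for m in models.data])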

Insufficient Balance

402 Payment Required

Error Message:

{
  "error": {
    "message": "Insufficient balance",
    "type": "insufficient_funds",
    "code": "insufficient_balance"
  }
}

Solutions:

  1. Check wallet balance:

    • Confirm that your wallet still has a positive balance; if you cannot check it yourself, ask your administrator
  2. Top up:

    • Contact administrator for top-up
    • Confirm balance received
  3. Temporary solution (see the sketch below):

    • Use cheaper models (e.g., gpt-3.5-turbo, deepseek-chat)
    • Reduce the max_tokens limit
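
As a stopgap, a request that combines a cheaper model with a tighter max_tokens cap might look like this (which models are available and cheapest depends on your deployment):

from openai import OpenAI

client = OpenAI(
    api_key="sk-ar-your-api-key",
    base_url="https://your-agentrouter.com/v1"
)

response = client.chat.completions.create(
    model="deepseek-chat",  # cheaper fallback model
    max_tokens=256,         # cap output tokens to limit cost per request
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)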

Rate Limiting

429 Too Many Requests

Error Message:

{
  "error": {
    "message": "Rate limit exceeded",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}

Response Headers:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640000000

Solutions:

✅ Implement exponential backoff retry:

import time
from openai import RateLimitError

def chat_with_retry(client, max_retries=3, **kwargs):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(**kwargs)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            wait_time = 2 ** attempt  # 1, 2, 4 seconds
            print(f"Rate limited, retrying in {wait_time} seconds...")
            time.sleep(wait_time)

✅ Wait based on response headers:

import requests
import time

response = requests.post(url, headers=headers, json=data)

if response.status_code == 429:
    reset_time = int(response.headers.get('X-RateLimit-Reset', 0))
    wait_seconds = reset_time - int(time.time())
    if wait_seconds > 0:
        print(f"Waiting {wait_seconds} seconds...")
        time.sleep(wait_seconds)

✅ Adjust API key limits:

If the configured limit is too low for your workload, ask your administrator to raise the rate limit assigned to your API key.

Request Errors

400 Bad Request

Common Causes and Solutions:

1. Missing Required Parameters

❌ Wrong:

# Anthropic missing max_tokens
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Hello"}]
)

✅ Correct:

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,  # Required parameter
    messages=[{"role": "user", "content": "Hello"}]
)

2. Incorrect Message Format

❌ Wrong:

# Missing role or content
messages = [{"content": "Hello"}]

✅ Correct:

messages = [
    {"role": "user", "content": "Hello"}
]

3. Incorrect Model Name

❌ Wrong:

model = "gpt4"  # Model name doesn't exist

✅ Correct:

model = "gpt-4o"  # Correct model name

See Supported Models for the complete list.

Connection Errors

Connection Error / Timeout

Possible Causes:

  1. Network connection issues
  2. Incorrect base_url configuration
  3. Firewall or proxy settings

Solutions:

✅ Check base_url:

# OpenAI SDK
client = OpenAI(
    api_key="sk-ar-your-api-key",
    base_url="https://your-agentrouter.com/v1"  # Confirm URL is correct
)

# Anthropic SDK
client = Anthropic(
    api_key="sk-ar-your-api-key",
    base_url="https://your-agentrouter.com"  # Note: no /v1
)

✅ Test connection:

# Test if API is accessible
curl https://your-agentrouter.com/v1/chat/completions \
  -H "Authorization: Bearer sk-ar-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "test"}]}'

✅ Set timeout:

from openai import OpenAI
import httpx

client = OpenAI(
    api_key="sk-ar-your-api-key",
    base_url="https://your-agentrouter.com/v1",
    timeout=60.0,  # 60 second timeout
    http_client=httpx.Client(timeout=60.0)
)

Upstream Provider Errors

500 Internal Server Error

Possible Causes:

  1. Upstream provider (OpenAI/Anthropic) service issues
  2. Upstream API key not configured or invalid
  3. AgentRouter server error

Solutions:

  1. Check Upstream Status:

    • Check the provider status pages (e.g., status.openai.com, status.anthropic.com) for ongoing incidents
  2. Retry Later:

import time
from openai import APIError

for attempt in range(3):
    try:
        response = client.chat.completions.create(...)
        break
    except APIError as e:
        if e.status_code == 500:
            wait_time = 5 * (attempt + 1)
            print(f"Server error, retrying in {wait_time} seconds...")
            time.sleep(wait_time)
        else:
            raise

  3. Try Other Models (a combined retry-and-fallback sketch follows this list):

# If OpenAI has issues, try DeepSeek
model = "deepseek-chat"  # Backup provider

  4. Contact Support:

    • If 500 errors persist
    • Provide the request time and error message
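
Putting the retry and fallback ideas together, a minimal sketch (model names, retry counts, and the helper name are illustrative):

import time
from openai import OpenAI, APIError

client = OpenAI(
    api_key="sk-ar-your-api-key",
    base_url="https://your-agentrouter.com/v1"
)

def create_with_fallback(messages, primary="gpt-4o", fallback="deepseek-chat", retries=3):
    # Retry the primary model on 500 errors with a growing delay
    for attempt in range(retries):
        try:
            return client.chat.completions.create(model=primary, messages=messages)
        except APIError as e:
            if getattr(e, "status_code", None) == 500:
                time.sleep(5 * (attempt + 1))
            else:
                raise
    # Primary model kept failing; try the backup provider once
    return client.chat.completions.create(model=fallback, messages=messages)

response = create_with_fallback([{"role": "user", "content": "Hello"}])
print(response.choices[0].message.content)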

SDK Issues

OpenAI SDK Version Incompatibility

Error Message:

AttributeError: 'OpenAI' object has no attribute 'chat'

Solutions:

✅ Update to latest version:

pip install --upgrade openai

✅ Check version:

import openai
print(openai.__version__)  # Should be >= 1.0.0

Anthropic SDK Issues

Error Message:

ModuleNotFoundError: No module named 'anthropic'

Solutions:

pip install anthropic
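
After installing, a quick import check confirms the package is available:

import anthropic
print(anthropic.__version__)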

Environment Variable Issues

Environment Variables Not Loading

Problem:

import os
api_key = os.getenv("AGENTROUTER_API_KEY")
print(api_key)  # None

Solutions:

✅ Use python-dotenv:

from dotenv import load_dotenv
import os

load_dotenv()  # Load .env file
api_key = os.getenv("AGENTROUTER_API_KEY")

✅ Check .env file location:

# .env should be in project root
project/
  ├── .env
  └── main.py

✅ .env file format:

# No quotes
AGENTROUTER_API_KEY=sk-ar-your-api-key

# No spaces
AGENTROUTER_BASE_URL=https://your-agentrouter.com/v1
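
If your script is not launched from the project root, load_dotenv may not find the file automatically. Passing an explicit path is one way around that (the path shown is illustrative):

from pathlib import Path
from dotenv import load_dotenv
import os

# Point dotenv at the .env file directly instead of relying on the working directory
load_dotenv(dotenv_path=Path(__file__).parent / ".env")
api_key = os.getenv("AGENTROUTER_API_KEY")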

Streaming Response Issues

Streaming Response Interrupted

Problem:

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True
)

for chunk in stream:
    # Sometimes suddenly interrupts
    print(chunk.choices[0].delta.content)

Solutions:

✅ Add error handling:

try:
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
        stream=True
    )
    
    for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")
except Exception as e:
    print(f"Streaming error: {e}")

✅ Set timeout:

client = OpenAI(
    api_key="sk-ar-your-api-key",
    base_url="https://your-agentrouter.com/v1",
    timeout=300.0  # 5 minute timeout
)
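
If you want to keep whatever text arrived before an interruption, accumulating the chunks as they stream makes the partial output recoverable (a sketch, reusing the client configured above):

chunks = []
try:
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
        stream=True
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            chunks.append(delta)
            print(delta, end="")
except Exception as e:
    print(f"\nStreaming error: {e}")

partial_text = "".join(chunks)  # whatever arrived before the interruption is still usable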

Special Model Issues

DeepSeek Models Not Working

Problem: Error when using DeepSeek models

Checklist:

  1. Model name contains deepseek:

model = "deepseek-chat"   # ✅ Correct
model = "deep-seek-chat"  # ❌ Wrong, won't route to DeepSeek

  2. Server configured with DeepSeek API key:

    • Self-hosted users need to configure DEEPSEEK_API_KEY in .env
  3. Use OpenAI SDK format (full request sketch below):

from openai import OpenAI  # ✅ Use OpenAI SDK

from anthropic import Anthropic  # ❌ DeepSeek doesn't support the Anthropic format
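
A minimal end-to-end DeepSeek request through AgentRouter with the OpenAI SDK (same placeholder base URL as the rest of this page):

from openai import OpenAI

client = OpenAI(
    api_key="sk-ar-your-api-key",
    base_url="https://your-agentrouter.com/v1"
)

response = client.chat.completions.create(
    model="deepseek-chat",  # name must contain "deepseek" to route to DeepSeek
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)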

Anthropic Tool Use Issues

Problem: Tool calls not responding

Solutions:

✅ Check response content type:

# Define the tool schema (illustrative example; your tools will differ)
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"]
        }
    }
]

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "How's the weather?"}]
)

# Check each content block's type
for content in message.content:
    if content.type == "tool_use":
        print(f"Tool: {content.name}")
        print(f"Parameters: {content.input}")
    elif content.type == "text":
        print(f"Text: {content.text}")

Debugging Tips

1. Enable Logging

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

# OpenAI SDK will output detailed logs
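
If full DEBUG output is too noisy, you can scope it to the SDK and HTTP-client loggers (logger names assumed from the libraries' defaults):

import logging

logging.basicConfig(level=logging.INFO)
logging.getLogger("openai").setLevel(logging.DEBUG)  # SDK internals
logging.getLogger("httpx").setLevel(logging.DEBUG)   # outgoing HTTP requests and responses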

2. View Raw Requests

import requests

response = requests.post(
    "https://your-agentrouter.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer sk-ar-your-api-key",
        "Content-Type": "application/json"
    },
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}]
    }
)

print(f"Status code: {response.status_code}")
print(f"Response headers: {response.headers}")
print(f"Response body: {response.text}")

3. Use curl for Testing

curl -v https://your-agentrouter.com/v1/chat/completions \
  -H "Authorization: Bearer sk-ar-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

Getting Help

If none of the above solutions work:

1. Check Documentation

2. Check Example Code

3. Contact Support

Provide the Following Information

  • Error message (complete stack trace)
  • Request parameters (hide sensitive information)
  • SDK version
  • Python/Node.js version
  • Operating system
