# Error Handling
Learn how to handle errors returned by the CatLove AI API.
## Error Response Format
When an error occurs, the API returns a JSON response with details about the error:
```json
{
  "error": {
    "message": "Invalid value for 'model': 'invalid-model' is not a valid model.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "model_not_found"
  }
}
```
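If you call the HTTP API directly rather than through an SDK, you can read these fields from the JSON body of any non-2xx response. Below is a minimal sketch; the endpoint path and request body are assumptions based on the chat completions example later in this guide, and the error shape follows the format shown above:

```python
import requests

# Hypothetical direct HTTP call; endpoint and payload are assumed from the examples in this guide.
resp = requests.post(
    "https://api.catlove.cc/v1/chat/completions",
    headers={"Authorization": "Bearer sk-your-api-key"},
    json={"model": "invalid-model", "messages": [{"role": "user", "content": "Hello!"}]},
)

if not resp.ok:
    err = resp.json().get("error", {})
    # message / type / param / code match the error response format documented above
    print(f"{resp.status_code}: {err.get('type')} ({err.get('code')}) - {err.get('message')}")
```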
## HTTP Status Codes
The API uses standard HTTP status codes to indicate the success or failure of requests:
| Code | Name | Description |
|---|---|---|
| 400 | Bad Request | The request was malformed or missing required parameters. |
| 401 | Unauthorized | Invalid or missing API key. |
| 403 | Forbidden | The API key does not have permission for this operation. |
| 404 | Not Found | The requested resource does not exist. |
| 429 | Too Many Requests | Rate limit exceeded. Please slow down your requests. |
| 500 | Internal Server Error | An error occurred on our servers. |
| 502 | Bad Gateway | The upstream AI provider returned an error. |
| 503 | Service Unavailable | The service is temporarily unavailable. |
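When branching on status codes yourself, a common pattern is to treat 429 and 5xx responses as retryable and everything else as a caller error. A small sketch of that classification (the helper name and the exact set of codes are our own, not part of the API):

```python
# Illustrative helper, not part of the CatLove API or any SDK.
RETRYABLE_STATUS_CODES = {429, 500, 502, 503}

def is_retryable(status_code: int) -> bool:
    """Return True for errors worth retrying with backoff (rate limits, server/gateway errors)."""
    return status_code in RETRYABLE_STATUS_CODES

# Example: is_retryable(429) -> True, is_retryable(401) -> False
```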
## Error Types
The `type` field in error responses indicates the category of error:
- `invalid_request_error` - The request was malformed
- `authentication_error` - Invalid API key
- `rate_limit_error` - Rate limit exceeded
- `api_error` - Internal server error
- `context_length_exceeded` - Input too long
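If you only have the raw error payload (for example, from a logged response body), you can branch on this field directly. The sketch below is illustrative: the function and the suggested actions are ours, not behavior defined by the API.

```python
# Hypothetical dispatch on the error "type" field; the suggested actions are not prescribed by the API.
def describe_error(error: dict) -> str:
    handlers = {
        "invalid_request_error": "Fix the request parameters before retrying.",
        "authentication_error": "Check that the API key is present and valid.",
        "rate_limit_error": "Back off and retry later.",
        "api_error": "Retry with exponential backoff; contact support if it persists.",
        "context_length_exceeded": "Shorten the prompt or use a model with a larger context window.",
    }
    return handlers.get(error.get("type"), f"Unhandled error type: {error.get('type')}")

# Example:
# describe_error({"type": "rate_limit_error", "message": "..."})
# -> "Back off and retry later."
```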
## Handling Errors in Code
Here's how to properly handle errors in your application:
```python
import time

from openai import OpenAI, APIError, RateLimitError

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.catlove.cc/v1"
)

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except RateLimitError as e:
    # Handle rate limiting - wait and retry
    print(f"Rate limit exceeded: {e}")
    time.sleep(60)
    # Retry the request
except APIError as e:
    # Handle other API errors (status_code is only set on HTTP status errors)
    print(f"API error: {getattr(e, 'status_code', 'n/a')} - {e.message}")
except Exception as e:
    # Handle unexpected errors
    print(f"Unexpected error: {e}")
```
## Retry Strategy
For transient errors (5xx status codes and 429), we recommend implementing exponential backoff:
```python
import time
import random

def make_request_with_retry(max_retries=3):
    for attempt in range(max_retries):
        try:
            # Replace ... with your request arguments (model, messages, etc.)
            return client.chat.completions.create(...)
        except (RateLimitError, APIError) as e:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter
            wait_time = (2 ** attempt) + random.random()
            print(f"Retry {attempt + 1} after {wait_time:.1f}s")
            time.sleep(wait_time)
```
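The random jitter spreads retries out so that many clients hitting a rate limit at the same time do not all retry in lockstep. Once the placeholder arguments are filled in, the wrapper can stand in for a direct `create` call; a quick usage sketch:

```python
# Usage sketch; assumes `client` is the OpenAI client configured earlier
# and the request arguments inside make_request_with_retry have been filled in.
response = make_request_with_retry(max_retries=5)
print(response.choices[0].message.content)
```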
## Automatic Retries
The official OpenAI SDK includes automatic retry logic for transient errors. When using our API with the OpenAI SDK, you get this behavior out of the box.
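If you want to tune this behavior instead of writing your own retry loop, the OpenAI Python SDK accepts a `max_retries` setting on the client; the values below are examples against our base URL:

```python
from openai import OpenAI

# The SDK retries transient failures automatically; max_retries tunes how many attempts it makes.
client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.catlove.cc/v1",
    max_retries=5,  # override the SDK's default retry count
)

# It can also be overridden per request:
# client.with_options(max_retries=0).chat.completions.create(...)
```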