Simple, OpenAI-compatible API. If you know how to use OpenAI's API, you already know how to use ours.
https://api.lobsterstoken.co.uk
All requests must include your API key in the Authorization header as a Bearer token.
curl -H "Authorization: Bearer YOUR_API_KEY" \
https://api.lobsterstoken.co.uk/v1/models
Send a chat message and receive an AI response. This endpoint is compatible with OpenAI's Chat Completions API.
curl -X POST https://api.lobsterstoken.co.uk/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
| Parameter | Type | Default | Description |
|---|---|---|---|
| model (required) | string | — | Model ID (see Available Models) |
| messages (required) | array | — | Array of message objects with role and content |
temperature | number | 1.0 | Sampling temperature. Range: 0.0 – 2.0 |
max_tokens | integer | ∞ | Maximum tokens to generate |
top_p | number | 1.0 | Nucleus sampling threshold |
stream | boolean | false | Enable Server-Sent Events streaming |
stop | string/array | null | Stop sequences |
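The table above can be turned into a request body directly. Here is a minimal sketch of a payload builder that omits optional fields left at their documented defaults; the helper name `build_chat_payload` is illustrative, not part of the API.

```python
import json

def build_chat_payload(model, messages, temperature=1.0, max_tokens=None,
                       top_p=1.0, stream=False, stop=None):
    """Assemble a /chat request body from the documented parameters.
    Optional fields are only included when they differ from the
    API's documented defaults."""
    payload = {"model": model, "messages": messages}
    if temperature != 1.0:
        payload["temperature"] = temperature
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if top_p != 1.0:
        payload["top_p"] = top_p
    if stream:
        payload["stream"] = True
    if stop is not None:
        payload["stop"] = stop
    return payload

body = build_chat_payload(
    "gpt-4o",
    [{"role": "user", "content": "Hello!"}],
    temperature=0.7,
    max_tokens=1024,
)
print(json.dumps(body))
```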
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"model": "gpt-4o",
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I help you today?"
},
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 8,
"total_tokens": 20
}
}
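The assistant's reply and the token usage can be read straight out of the response shape shown above. A short sketch, using the example response as a plain dict:

```python
# The example response body from above, as a Python dict.
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "model": "gpt-4o",
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "Hello! How can I help you today?",
        },
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 12, "completion_tokens": 8, "total_tokens": 20},
}

# The reply text lives under choices[0].message.content;
# billing-relevant counts live under usage.
reply = response["choices"][0]["message"]["content"]
total = response["usage"]["total_tokens"]
print(reply)
print(total)
```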
Use any of these model IDs in the model parameter.
Set "stream": true to receive Server-Sent Events (SSE) with incremental response tokens.
curl -X POST https://api.lobsterstoken.co.uk/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Count to 5"}],
    "stream": true
  }'
Each streamed chunk looks like:
data: {"choices":[{"delta":{"content":"One"}}]}
data: {"choices":[{"delta":{"content":" two"}}]}
data: [DONE]
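Consuming the stream means reading `data:` lines, appending each chunk's delta content, and stopping at the `[DONE]` sentinel. A minimal parser sketch over the example chunks above (the helper name is illustrative):

```python
import json

def read_sse_content(lines):
    """Accumulate assistant text from SSE 'data:' lines,
    stopping at the [DONE] sentinel."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

stream = [
    'data: {"choices":[{"delta":{"content":"One"}}]}',
    'data: {"choices":[{"delta":{"content":" two"}}]}',
    'data: [DONE]',
]
print(read_sse_content(stream))  # One two
```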
Returns a list of all available models you can use with your API key.
curl https://api.lobsterstoken.co.uk/v1/models \
-H "Authorization: Bearer YOUR_API_KEY"
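Extracting the usable model IDs from the listing is a one-liner. The sketch below assumes an OpenAI-style list response shape (`{"object": "list", "data": [{"id": ...}, ...]}`); the exact body returned by this endpoint is not shown above, so treat the shape as an assumption.

```python
import json

# Assumed OpenAI-style response shape for /v1/models; the model IDs
# shown here are examples, not an authoritative model list.
sample = json.loads(
    '{"object": "list", "data": [{"id": "gpt-4o"}, {"id": "gpt-4o-mini"}]}'
)
model_ids = [m["id"] for m in sample["data"]]
print(model_ids)  # ['gpt-4o', 'gpt-4o-mini']
```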
| HTTP Code | Error | Meaning |
|---|---|---|
| 400 | invalid_request | Bad request — check your parameters |
| 401 | invalid_api_key | Unauthorized — invalid or missing API key |
| 429 | rate_limit_exceeded | Rate limit exceeded — slow down or upgrade your plan |
| 500 | server_error | Internal server error — something went wrong on our end |
| 503 | model_unavailable | Model temporarily unavailable — try again shortly |
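The table above splits naturally into errors you should fix (4xx) and errors worth retrying (429, 5xx). A sketch of a classifier a client might use; the function name and return labels are illustrative:

```python
def classify_error(status):
    """Map an error status from the table above to a client action:
    'fix_request' and 'fix_auth' need a code change; 'retry' is
    transient and safe to retry with backoff."""
    if status == 400:
        return "fix_request"   # invalid_request: check your parameters
    if status == 401:
        return "fix_auth"      # invalid_api_key: check your key
    if status in (429, 500, 503):
        return "retry"         # rate limit or transient server issue
    raise ValueError(f"unexpected status: {status}")

print(classify_error(429))  # retry
```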
Default rate limits per API key:
| Tier | Requests/min | Tokens/min |
|---|---|---|
| Free | 20 | 10,000 |
| Pay-as-you-go | 60 | 100,000 |
| Enterprise | Custom | Custom |
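A requests-per-minute limit translates into a minimum spacing between calls: 60 seconds divided by the tier's limit. A small sketch using the figures from the table (the tier keys are illustrative):

```python
# Requests-per-minute limits from the table above.
LIMITS = {"free": 20, "pay_as_you_go": 60}

def min_interval(tier):
    """Smallest delay (in seconds) between requests that stays
    under the tier's requests-per-minute limit."""
    return 60.0 / LIMITS[tier]

# On the Free tier, space requests at least 3 seconds apart.
print(min_interval("free"))          # 3.0
print(min_interval("pay_as_you_go"))  # 1.0
```

Note this only paces request counts; staying under the tokens-per-minute limit additionally requires tracking the usage.total_tokens field from each response.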
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_LOBSTERS_TOKEN_KEY",
    base_url="https://api.lobsterstoken.co.uk"
)

chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is 2+2?"}]
)
print(chat.choices[0].message.content)
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_LOBSTERS_TOKEN_KEY',
  baseURL: 'https://api.lobsterstoken.co.uk'
});

const chat = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{role: 'user', content: 'What is 2+2?'}]
});
console.log(chat.choices[0].message.content);