Integrate AI APIs in 5 minutes using your favorite framework.
Celuxe API provides a unified interface to multiple top AI models. It is fully compatible with OpenAI's API format, so you can integrate directly using existing OpenAI SDKs, LangChain, the Vercel AI SDK, and more. Just replace the base_url and API Key.
Get started in 5 minutes with the steps below.
Sign up at Celuxe and get $5 in credits.
Create a new API Key in the dashboard. We recommend creating separate keys for different projects for easier management and monitoring.
Use your preferred SDK or send a request directly with cURL:
curl https://api.celuxe.shop/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-api-key" \
  -d '{
    "model": "deepseek-v3",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
All API requests require your API Key in the Authorization header:
# Format
Authorization: Bearer sk-your-api-key
Your API Key can be obtained from Dashboard → API Key Management. We recommend storing it in an environment variable rather than hardcoding it in your code.
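For example, the key can be read from an environment variable at startup. (The variable name CELUXE_API_KEY and the helper below are just conventions for this sketch, not required by the API.)

```python
import os

# Read the key from the environment instead of hardcoding it.
# "CELUXE_API_KEY" is an example variable name; any name works.
api_key = os.environ.get("CELUXE_API_KEY", "")

def auth_headers(key: str) -> dict:
    """Build the headers every Celuxe API request needs."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",
    }

headers = auth_headers(api_key)
```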
Chat Completions is the core feature of Celuxe API. Send a list of messages to the model and it will generate a response.
Request Endpoint:
POST https://api.celuxe.shop/v1/chat/completions
Request Parameters:
model: ID of the model to use, e.g. deepseek-v3, gpt-4o
messages: List of conversation messages with system, user, assistant roles

Example Request:
{
"model": "deepseek-v3",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
],
"temperature": 0.7,
"max_tokens": 1024
}
Example Response:
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1714000000,
"model": "deepseek-v3",
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of France is Paris."
},
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 28,
"completion_tokens": 6,
"total_tokens": 34
}
}
Setting stream: true returns the response chunk by chunk as it is generated, ideal for real-time chat experiences:
const response = await fetch("https://api.celuxe.shop/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer sk-your-api-key"
  },
  body: JSON.stringify({
    model: "deepseek-v3",
    messages: [{ role: "user", content: "Hello!" }],
    stream: true
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value));
}
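Each chunk arrives as an OpenAI-style Server-Sent Events frame ("data: {...}" lines, ending with "data: [DONE]"). A minimal sketch of extracting just the assistant text from such frames, assuming that framing (the helper name is ours):

```python
import json

def extract_deltas(sse_text: str) -> str:
    """Concatenate assistant text deltas from OpenAI-style SSE frames.

    Each frame looks like `data: {...}`; the stream terminates
    with a `data: [DONE]` sentinel.
    """
    pieces = []
    for line in sse_text.splitlines():
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        # The first frame may carry only a role, so content can be absent.
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            pieces.append(delta)
    return "".join(pieces)
```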
Get all available models:
curl https://api.celuxe.shop/v1/models \
  -H "Authorization: Bearer sk-your-api-key"
You can also check the models page for detailed information and pricing of each model.
The API uses standard HTTP status codes to indicate request results:
Error responses include details:
{
"error": {
"message": "Insufficient balance",
"type": "insufficient_quota",
"code": "insufficient_balance"
}
}
Use the OpenAI Python SDK with Celuxe:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.celuxe.shop/v1",
    api_key="sk-your-api-key"
)

response = client.chat.completions.create(
    model="deepseek-v3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
Use the OpenAI Node.js SDK:
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.celuxe.shop/v1",
  apiKey: "sk-your-api-key",
});

const completion = await client.chat.completions.create({
  model: "deepseek-v3",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" }
  ],
});

console.log(completion.choices[0].message.content);
Test directly with cURL:
curl https://api.celuxe.shop/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-api-key" \
  -d '{
    "model": "deepseek-v3",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
Create, view, and revoke API Keys from the dashboard:
The dashboard provides detailed usage statistics and billing information:
API requests are protected by rate limits to ensure service stability:
Requests exceeding the limit return a 429 status code. We recommend implementing an exponential backoff retry strategy.
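A sketch of such a retry strategy, wrapped around any request function (retry count, base delay, and the helper name are illustrative; the request function is assumed to return an object with a status_code attribute, e.g. a requests.Response):

```python
import random
import time

def with_backoff(send_request, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `send_request` on HTTP 429 with exponentially growing delays."""
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Wait 1s, 2s, 4s, ... plus jitter to avoid synchronized retries.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
        time.sleep(delay)
    return send_request()  # final attempt; caller handles a lingering 429
```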