Unified API
Single OpenAI-compatible endpoint for all models. Switch models with one line change.
Multi-Model Routing
Access 1300+ models from OpenAI, Anthropic, Google, Meta, and open-source providers.
Credit-Based Billing
Pay only for what you use. Credits work across all models with transparent pricing.
API Key Management
Create, rotate, and revoke API keys. Set usage limits and monitor consumption.
Enterprise Security
SOC 2 compliant infrastructure. Data encrypted in transit and at rest.
Real-Time Analytics
Track usage, costs, and performance. Set alerts and budgets.
Quick Start
Get started in minutes. Use your existing OpenAI SDK with minimal changes—just update the base URL and your API key.
from openai import OpenAI
client = OpenAI(
    base_url="https://api.bytarch.com/openai/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="BytArch/BytArch-Lumina",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, world!"}
    ]
)

print(response.choices[0].message.content)

Standard OpenAI Response Format
All responses follow the OpenAI API format exactly. Your existing code will work without modification—just change the base URL.
- Compatible with all OpenAI SDKs
- Streaming responses with SSE
- Function calling support
- Token usage tracking
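Streamed responses arrive as Server-Sent Events, one `data:` line per chunk, terminated by a `[DONE]` sentinel. Below is a minimal sketch of reassembling the assistant message from such a stream, assuming the standard OpenAI chunk shape (`choices[0].delta` carrying content pieces); the helper name and the sample lines are illustrative, not part of the API.

```python
import json

def accumulate_sse(lines):
    """Reassemble assistant text from OpenAI-style SSE data lines.

    Each "data:" line carries a chat.completion.chunk object whose
    choices[0].delta may hold a fragment of the message content.
    """
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alives and comments
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content") is not None:
            parts.append(delta["content"])
    return "".join(parts)

# Example lines as they would arrive over the wire (illustrative):
stream = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world!"}}]}',
    "data: [DONE]",
]
print(accumulate_sse(stream))  # Hello, world!
```

In practice the OpenAI SDKs handle this parsing for you when you pass `stream=True`; the sketch only shows what travels over the connection.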
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699000000,
  "model": "BytArch/BytArch-Lumina",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 10,
    "total_tokens": 30
  }
}
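The fields above map directly onto what client code reads for usage tracking. A small sketch of consuming that exact body (only fields shown in the example are used):

```python
import json

# The sample response body from the docs, verbatim.
response_body = """
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699000000,
  "model": "BytArch/BytArch-Lumina",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 10,
    "total_tokens": 30
  }
}
"""

data = json.loads(response_body)

# Pull out the assistant reply and the token accounting.
reply = data["choices"][0]["message"]["content"]
finish = data["choices"][0]["finish_reason"]
usage = data["usage"]

print(reply)                  # Hello! How can I help you today?
print(finish)                 # stop
print(usage["total_tokens"])  # 30
```

Since `usage` reports prompt, completion, and total tokens separately, per-request cost can be computed from whichever per-token rates apply to the model in use.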