Chat

Create chat completion

Creates a chat completion for the given messages. This endpoint is a drop-in replacement for the OpenAI Chat Completions API — it accepts the same request schema and returns the same response format, including usage with token counts.

The router selects a provider that serves the requested model, forwards the request, streams (or batches) the response, and calculates cost from actual token usage.
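As a rough illustration of the last step, cost can be derived directly from the usage block returned with each completion. The per-million-token prices below are placeholders for illustration only, not actual network rates:

```python
# Sketch: deriving request cost from the usage block of a completion
# response. The per-million-token (input, output) USD prices here are
# PLACEHOLDERS, not actual network rates.
PRICES_PER_MTOK = {
    "gpt-4o": (2.50, 10.00),  # hypothetical prices
}

def request_cost(model: str, usage: dict) -> float:
    """Compute the cost in USD from actual token usage."""
    in_price, out_price = PRICES_PER_MTOK[model]
    return (usage["prompt_tokens"] * in_price
            + usage["completion_tokens"] * out_price) / 1_000_000

usage = {"prompt_tokens": 1000, "completion_tokens": 500, "total_tokens": 1500}
print(request_cost("gpt-4o", usage))  # 0.0075
```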

POST /chat/completions
Sign-In-With-X: <token>

Base64-encoded JSON envelope containing a CAIP-122 sign-in message and its cryptographic signature.

{
  "message": "<CAIP-122 message>",
  "signature": "<base58 for Solana, hex for EVM>"
}

In: header
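Concretely, the header value can be built by serializing the envelope to JSON and base64-encoding it. A minimal sketch, with placeholder message and signature values (a real client signs an actual CAIP-122 message with its wallet key):

```python
import base64
import json

def sign_in_with_x_header(message: str, signature: str) -> dict:
    """Build the Sign-In-With-X header from a CAIP-122 sign-in message
    and its signature (base58 for Solana, hex for EVM)."""
    envelope = json.dumps({"message": message, "signature": signature})
    token = base64.b64encode(envelope.encode("utf-8")).decode("ascii")
    return {"Sign-In-With-X": token}

# Placeholder values for illustration only.
headers = sign_in_with_x_header("<CAIP-122 message>", "<signature>")
```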

Request Body

application/json

model (string, required)

Model ID to use for the completion (e.g. gpt-4o, claude-sonnet-4).

messages (array, required)

A list of messages comprising the conversation so far.

temperature (number, optional)

Sampling temperature.

Range: 0 <= value <= 2

top_p (number, optional)

Nucleus sampling parameter.

Range: 0 <= value <= 1

n (integer, optional)

Number of completions to generate.

Default: 1

stream (boolean, optional)

Whether to stream partial responses using SSE.

Default: false
stop (string | array, optional)

Up to 4 sequences where the API will stop generating.

max_tokens (integer, optional)

Maximum number of tokens to generate.

presence_penalty (number, optional)

Presence penalty (-2.0 to 2.0).

frequency_penalty (number, optional)

Frequency penalty (-2.0 to 2.0).

user (string, optional)

A unique identifier representing your end-user.
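Pulling the fields above together, a request body can be validated client-side before it is sent. This is an illustrative sketch enforcing the documented ranges and defaults, not an official SDK:

```python
def build_chat_request(model, messages, temperature=None, top_p=None,
                       n=1, stream=False, stop=None, max_tokens=None):
    """Assemble a /chat/completions request body, enforcing the
    documented parameter ranges before the request is sent."""
    if temperature is not None and not 0 <= temperature <= 2:
        raise ValueError("temperature must be in [0, 2]")
    if top_p is not None and not 0 <= top_p <= 1:
        raise ValueError("top_p must be in [0, 1]")
    if isinstance(stop, list) and len(stop) > 4:
        raise ValueError("at most 4 stop sequences are allowed")
    body = {"model": model, "messages": messages, "n": n, "stream": stream}
    # Only include optional fields that were actually set.
    for key, value in (("temperature", temperature), ("top_p", top_p),
                       ("stop", stop), ("max_tokens", max_tokens)):
        if value is not None:
            body[key] = value
    return body

body = build_chat_request("gpt-4o",
                          [{"role": "user", "content": "Hello"}],
                          temperature=0.7)
```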

Response Body

application/json
curl -X POST "https://beta.aimo.network/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Sign-In-With-X: <token>" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      }
    ]
  }'
{
  "id": "string",
  "object": "chat.completion",
  "created": 0,
  "model": "string",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "system",
        "content": "string"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
{
  "error": {
    "code": "string",
    "message": "string"
  }
}
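When stream is true, the endpoint returns Server-Sent Events. Assuming the OpenAI streaming format (data: lines carrying JSON chunks with a choices[].delta field, terminated by data: [DONE]), a minimal parser over a captured stream might look like:

```python
import json

def iter_sse_chunks(lines):
    """Yield parsed JSON chunks from an OpenAI-style SSE stream."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return  # end-of-stream sentinel
        yield json.loads(payload)

# Captured sample for illustration; real chunks carry more fields.
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
text = "".join(chunk["choices"][0]["delta"].get("content", "")
               for chunk in iter_sse_chunks(sample))
print(text)  # Hello
```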