Documentation Index
Fetch the complete documentation index at: https://docs.agentbot.raveculture.xyz/llms.txt
Use this file to discover all available pages before exploring further.
Demo chat
The demo chat endpoint lets you interact with AI models without signing up or deploying an agent. Requests are rate-limited by IP address.
List available models
Response (200)
| Field | Type | Description |
|---|---|---|
| models | array | Available demo models |
| models[].id | string | Model identifier to use in POST requests |
| models[].name | string | Human-readable model name |
| models[].provider | string | AI provider name |
| mode | string | Always `demo` |
| message | string | Welcome message |
Send a demo message
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| message | string | Yes | Message to send |
| model | string | No | Model ID from the models list. Defaults to `xiaomi/mimo-v2-pro`. |
| mode | string | No | Chat mode identifier |
| conversation | array | No | Previous conversation messages for context. Only user-role messages are retained (max 4000 characters each). |
Example request
Response (200)
| Field | Type | Description |
|---|---|---|
| id | string | Response identifier from the AI provider |
| model | string | Model ID that served the request |
| message | string | AI response content |
| usage | object | Token usage statistics |
| usage.prompt_tokens | number | Input tokens consumed |
| usage.completion_tokens | number | Output tokens generated |
| usage.total_tokens | number | Total tokens used |
| done | boolean | Always `true` (non-streaming) |
Error responses
| Status | Error | Description |
|---|---|---|
| 400 | Message required | The message field is missing from the request body |
| 429 | Too many requests | IP-based rate limit exceeded |
| 503 | Demo unavailable — service not configured. | The demo service is not configured on the server |
| varies | AI service error. Please try again. | The upstream AI provider returned an error. The HTTP status code is passed through from the provider (for example, 402 for quota issues or 422 for invalid model parameters). |
| 500 | Failed to get response | An unexpected error occurred |
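A client can branch on the statuses in the table above. The sketch below maps each documented status to a coarse category a caller can act on (retry on 429, surface a configuration error on 503, treat anything else as a passed-through provider error); the category names are illustrative, not part of the API.

```python
def classify_demo_error(status: int) -> str:
    """Map a documented demo-chat error status to a handling category."""
    if status == 400:
        return "bad-request"      # `message` field missing from the body
    if status == 429:
        return "rate-limited"     # IP rate limit hit; back off and retry
    if status == 503:
        return "not-configured"   # demo service disabled on the server
    if status == 500:
        return "server-error"     # unexpected failure; safe to retry once
    # Other statuses (e.g. 402, 422) are passed through from the
    # upstream AI provider rather than generated by the demo endpoint.
    return "provider-error"
```

A retry loop would then only re-send on `rate-limited` or `server-error`, since the other categories indicate a request or configuration problem that retrying cannot fix.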
The demo endpoint uses a fixed max token limit of 1024 and does not support streaming. For full chat capabilities, use the AI chat endpoint with a subscription plan.
Every successful demo chat request automatically logs token usage and cost to the usage tracking system. Demo requests are recorded with `userId: "demo"` and `agentId: "demo-chat"`, and are visible in the cost dashboard.