Integrate your own LLM API keys and endpoints with LiveAvatar instead of using the platform’s default.

When to use this

  • You have an existing streaming-capable LLM setup you want to keep using
  • You need privacy protections that preclude using LiveAvatar’s built-in LLM
  • You need access to language models not currently supported by the platform

Setup

Step 1: Register your API key

Store your API credentials securely. LiveAvatar encrypts secrets using Amazon KMS.
curl -X POST https://api.liveavatar.com/v1/secrets \
  -H "X-API-KEY: <YOUR_API_KEY>" \
  -H "content-type: application/json" \
  -d '{
    "secret_type": "LLM_API_KEY",
    "secret_value": "<your_llm_api_key>",
    "secret_name": "My LLM Key"
  }'
The response returns a secret_id for future reference.
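If you prefer to script this step, the curl call above can be mirrored in code. The sketch below only assembles the request so you can inspect it before sending with your HTTP client of choice; the `build_secret_request` helper is illustrative, not part of a LiveAvatar SDK.

```python
import json

API_BASE = "https://api.liveavatar.com/v1"

def build_secret_request(api_key: str, llm_api_key: str, name: str) -> dict:
    """Build the request for POST /v1/secrets, mirroring the curl example."""
    return {
        "url": f"{API_BASE}/secrets",
        "headers": {
            "X-API-KEY": api_key,
            "content-type": "application/json",
        },
        "body": json.dumps({
            "secret_type": "LLM_API_KEY",
            "secret_value": llm_api_key,
            "secret_name": name,
        }),
    }

# Inspect the assembled request, then send it with requests/httpx/etc.
req = build_secret_request("<YOUR_API_KEY>", "<your_llm_api_key>", "My LLM Key")
print(req["url"])  # https://api.liveavatar.com/v1/secrets
```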

Step 2: Configure your LLM endpoint

Create an LLM configuration with your endpoint details:
curl -X POST https://api.liveavatar.com/v1/llm_configurations \
  -H "X-API-KEY: <YOUR_API_KEY>" \
  -H "content-type: application/json" \
  -d '{
    "display_name": "My Custom LLM",
    "model_name": "gpt-4o-mini",
    "secret_id": "<secret_id>",
    "base_url": "https://api.openai.com"
  }'
The base_url field is optional and defaults to OpenAI. A custom endpoint must implement the OpenAI API specification and serve the /chat/completions route.
The response returns an llm_configuration_id.

Step 3: Start a session with your LLM

Include the llm_configuration_id in your session token request body:
{
  "mode": "FULL",
  "avatar_id": "<avatar_id>",
  "llm_configuration_id": "<llm_configuration_id>",
  "avatar_persona": {
    "voice_id": "<voice_id>",
    "context_id": "<context_id>"
  }
}
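The session token body above can be assembled programmatically. A small sketch, assuming the field names shown in the JSON example (the `build_session_payload` helper is illustrative):

```python
def build_session_payload(avatar_id: str, llm_configuration_id: str,
                          voice_id: str, context_id: str) -> dict:
    """Assemble the session-token body that routes the session to your LLM."""
    return {
        "mode": "FULL",
        "avatar_id": avatar_id,
        "llm_configuration_id": llm_configuration_id,
        "avatar_persona": {
            "voice_id": voice_id,
            "context_id": context_id,
        },
    }
```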

Supported systems

Any OpenAI-compatible endpoint using the /chat/completions protocol. This includes OpenAI directly, Azure OpenAI, and custom endpoints that wrap other providers.
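Before registering a custom endpoint, you may want to smoke-test it against the expected shape. The sketch below builds an OpenAI-style streaming request targeting the /chat/completions route; the exact fields LiveAvatar sends at runtime are not documented here, so treat this payload as an assumption based on the OpenAI Chat Completions format.

```python
import json

def chat_completions_probe(base_url: str, model: str) -> dict:
    """Build a minimal OpenAI-style request for smoke-testing an endpoint.

    stream is set to True because avatar sessions consume streamed output,
    so the endpoint should support streaming responses.
    """
    return {
        "url": base_url.rstrip("/") + "/chat/completions",
        "headers": {"content-type": "application/json"},
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": "ping"}],
            "stream": True,
        }),
    }
```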