Custom LLM Integration

How to integrate using your own API keys/endpoints.

Using your own LLM API Keys

Sometimes you may want to use your own API keys or endpoints. This is common for customers that

  • have a pre-existing streamable LLM configuration set up
  • have privacy requirements and can't use the LLM provided by LiveAvatar
  • want to use an LLM not offered by LiveAvatar.

This guide walks through how to

  1. Add your own API Key
  2. Configure your LLM endpoint
  3. Start a session with that configuration

This flow works with OpenAI and any OpenAI-compatible endpoint that implements the chat/completions API. You can also wrap your own endpoints to follow that specification and call other LLM providers while we work towards exposing those APIs directly.
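
For example, if your provider of choice does not expose an OpenAI-compatible API, you can put a thin proxy in front of it and point your LLM configuration's base URL at the proxy. The sketch below is a minimal, hypothetical illustration using FastAPI; call_my_provider is a stand-in for whatever client your provider requires, and none of these names come from LiveAvatar:

# Minimal sketch of an OpenAI-compatible wrapper, assuming FastAPI/pydantic.
# call_my_provider is a hypothetical stand-in for your actual provider call.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatCompletionRequest(BaseModel):
    model: str
    messages: list[dict]

async def call_my_provider(model: str, messages: list[dict]) -> str:
    # Replace with a real call to your LLM provider.
    return "stub response"

@app.post("/v1/chat/completions")
async def chat_completions(req: ChatCompletionRequest):
    text = await call_my_provider(req.model, req.messages)
    # Respond in the OpenAI chat/completions shape the caller expects.
    return {
        "id": "chatcmpl-proxy",
        "object": "chat.completion",
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
    }

If the caller requests streaming (stream: true in the request body), the proxy would also need to answer with OpenAI-style server-sent-event chunks; the non-streaming case above is only the simplest shape.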

Adding your API Key

First, register your API key with LiveAvatar. We encrypt all user secrets with the AWS Encryption SDK and AWS KMS, and return a newly created secret_id in the response that references your stored secret.

See endpoint details here.

curl --request POST \
     --url https://api.liveavatar.com/v1/secrets \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --data '
{
  "secret_type": "OPENAI_API_KEY",
  "secret_value": "sk_secret_key",
  "secret_name": "<display name of your secret>"
}
'
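
The exact response body may differ, but it will include the generated secret_id, along these illustrative lines:

{
  "secret_id": "<generated secret_id>",
  "secret_name": "<display name of your secret>",
  "secret_type": "OPENAI_API_KEY"
}

Save the secret_id; the next step references it.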

Creating an LLM Configuration

Next, create an LLM configuration by specifying the following:

  • Base URL - where we send chat requests. If not set, we default to the base OpenAI endpoint.
    • Note our LLM will call <base_url>/v1/chat/completions
    • The endpoint must follow the OpenAI API specification.
  • Display Name - the name displayed for this LLM configuration
  • Model Name - the model you want to use (e.g. gpt-4o-mini)
  • Secret ID - the secret_id returned in the previous step.

See endpoint details here.

curl --request POST \
     --url https://api.liveavatar.com/v1/llm-configurations \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --data '
{
  "display_name": "my_first_config",
  "model_name": "gpt-4o-mini",
  "secret_id": "<secret_id>"
}
'
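
If you are pointing us at your own OpenAI-compatible endpoint rather than OpenAI itself, include the base URL as well. The base_url field below follows the Base URL option described above, and https://llm.example.com is a placeholder; check the endpoint reference for the exact key:

curl --request POST \
     --url https://api.liveavatar.com/v1/llm-configurations \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --data '
{
  "display_name": "my_selfhosted_config",
  "base_url": "https://llm.example.com",
  "model_name": "<your model name>",
  "secret_id": "<secret_id>"
}
'

With this configuration, we would call https://llm.example.com/v1/chat/completions.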

Starting a Session

Finally, when starting a session, you can specify which LLM configuration we should use: set llm_configuration when creating the session token. When the session starts, instead of using our LLM systems to generate responses, we will send requests to your configured endpoint.

See the endpoint details here.
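
As a sketch, assuming the session-token endpoint lives alongside the APIs above (the URL and request shape here are illustrative, not confirmed; consult the endpoint reference for the exact details):

curl --request POST \
     --url https://api.liveavatar.com/v1/sessions/token \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --data '
{
  "llm_configuration": "<llm_configuration_id>"
}
'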