The building blocks
A LiveAvatar session is a room with participants. Every session has at least two participants:

- The end user — your customer’s browser, mobile app, or client
- The avatar — rendered and streamed by LiveAvatar
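The room-with-participants model above can be sketched as a small data structure. This is purely illustrative; the type and field names are assumptions, not the actual LiveAvatar API.

```typescript
// Hypothetical sketch of the session model: a room containing at least
// the end user and the avatar. Names here are illustrative assumptions.
type Participant =
  | { role: 'end-user'; client: 'browser' | 'mobile' | 'other' }
  | { role: 'avatar'; renderedBy: 'liveavatar' };

interface Session {
  roomId: string;
  participants: Participant[]; // always at least the end user and the avatar
}

const session: Session = {
  roomId: 'demo-room',
  participants: [
    { role: 'end-user', client: 'browser' },
    { role: 'avatar', renderedBy: 'liveavatar' },
  ],
};
```

Additional participants (for example, your own agent in path #2 below) join the same room alongside these two.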
LITE Mode integration paths
Five common ways to build with LITE Mode, ordered by how much you own:

1. No agent — broadcast only
Skip the agent entirely. Use LiveAvatar as a rendering layer: feed audio in, get avatar video out. Best when the end user doesn’t need real-time interaction — livestream overlays, news-style streams, narrated video generation. Your end user still joins our room to watch, but there is no conversational loop.

2. Custom agent — you build it into our room
LiveAvatar provisions the LiveKit room and the avatar; you build an agent and send it into that room. Your agent orchestrates the conversation (STT, LLM, TTS) and streams audio to the avatar. LiveAvatar exposes room access so your agent can publish audio, read events, and control avatar state. Best when you are starting from scratch and want the simplest path to a working conversation: full control over agent logic without standing up your own transport layer.

3. Connector — we run a hosted agent for you
You provide credentials for a supported third-party voice agent; LiveAvatar stands up a connector that bridges that agent into the LiveAvatar-owned room. No agent code to write, though small configuration changes on the voice agent side are typically required — see the specific connector’s page for requirements. Best when you already have a voice agent with a supported provider (today: ElevenLabs).

4. Plugin — we stream into your existing transport
You already run your own LiveKit, Pipecat, or Agora stack. LiveAvatar streams avatar video into your transport instead of creating its own room. While you can build your own integration, we recommend using one of our plugins — we’ve built them with our partners to cover the most common setups. Best when you already have a real-time platform running in production and LiveAvatar is being added to an existing pipeline. Do not pick this path just because it exists — path #2 is simpler if you’re not already on one of these stacks.

5. Raw transport config — you wire it up yourself
You provide transport credentials directly (livekit_config or agora_config) in the session start payload. When we send our avatar into your transport layer, you’re responsible for wiring up the integration via the WebSocket events exposed by our LITE Mode API — no plugin involved.
Under the hood, our plugins are wired up this way. If no plugin exists for your runtime (today, Node.js has none) or you’re on a transport we haven’t shipped a plugin for, this may be the path for you.
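As a rough sketch, supplying your own transport means including a livekit_config (or agora_config) block in the session start payload. Only those two key names come from this page; every other field name and value below is an assumed placeholder — check the LITE Mode API reference for the real shape.

```typescript
// Hypothetical /sessions/start payload supplying your own LiveKit transport.
// Only the livekit_config key is documented here; inner field names are assumptions.
const startSessionPayload = {
  livekit_config: {
    url: 'wss://your-livekit-host.example.com', // your LiveKit server URL (assumed field name)
    token: '<token-granting-the-avatar-room-access>', // minted by you (assumed field name)
  },
};
```

With this payload, LiveAvatar joins your room as a publisher, and everything else (audio in, events, state) flows over the LITE WebSocket you get back.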
Have a specific use case or stack you’d like first-class support for? Reach out to support@liveavatar.com — we may be able to help build a plugin or connector for you.
Going raw, you’re responsible for:

- Token minting and LiveKit grants (including canPublishData: true)
- Connecting to the LITE WebSocket returned from /sessions/start
- Sending audio in the right format
- Handling state and error events
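To make the first bullet concrete: LiveKit access tokens are JWTs whose video claim carries the room grants. In production you’d use the AccessToken class from livekit-server-sdk; the minimal stdlib sketch below just shows where the canPublishData: true grant lives. The key, secret, identity, and room name are placeholder assumptions.

```typescript
import { createHmac } from 'crypto';

// Placeholder credentials — substitute your real LiveKit API key and secret.
const apiKey = 'LK_API_KEY';
const apiSecret = 'LK_API_SECRET';

const b64url = (input: Buffer | string): string =>
  Buffer.from(input).toString('base64url');

// LiveKit tokens are HS256 JWTs; the `video` claim holds the room grants.
// Note canPublishData: true, needed so your code can send data messages.
function mintToken(identity: string, room: string): string {
  const header = { alg: 'HS256', typ: 'JWT' };
  const payload = {
    iss: apiKey, // API key goes in the issuer claim
    sub: identity, // participant identity
    exp: Math.floor(Date.now() / 1000) + 3600, // valid for 1 hour
    video: { roomJoin: true, room, canPublish: true, canPublishData: true },
  };
  const unsigned = `${b64url(JSON.stringify(header))}.${b64url(JSON.stringify(payload))}`;
  const signature = b64url(createHmac('sha256', apiSecret).update(unsigned).digest());
  return `${unsigned}.${signature}`;
}

const token = mintToken('my-client', 'liveavatar-session');
```

In a real integration, prefer livekit-server-sdk over hand-rolling the JWT; the sketch exists only to show which grant flags matter here.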
Choosing a path
| You have… | Recommended path |
|---|---|
| An existing LiveKit, Pipecat, or Agora stack with a matching plugin | Plugin — add LiveAvatar on top |
| An existing stack without a plugin (e.g., Node.js) | Raw transport config — wire it up manually |
| An ElevenLabs voice agent | Connector — ElevenLabs Agent Connector |
| Nothing yet, and you want a conversational avatar | Your agent in our room |
| A broadcast / one-way use case | No agent — drive audio directly |
What’s next
- Plugins — integrate with your existing LiveKit, Pipecat, or Agora stack
- Connectors — bridge a hosted voice agent
- Lifecycle — the three phases of a LITE Mode session
- Events — WebSocket command and response events