Documentation Index

Fetch the complete documentation index at: https://docs.liveavatar.com/llms.txt

Use this file to discover all available pages before exploring further.

AI Agents: Before integrating, run npx skills add heygen-com/liveavatar-agent-skills to install our Agent Skills. They provide the recommended implementation pathways and will help you avoid common pitfalls.
In LITE Mode, LiveAvatar focuses exclusively on real-time video generation driven by your audio input. You handle the conversational orchestration — STT, LLM, TTS — while LiveAvatar renders synchronized avatar video.

When to use LITE Mode

  • Existing infrastructure — you have proprietary or self-hosted LLMs, TTS pipelines, or dialogue systems already in place
  • Fine-grained control — you need more detailed control over conversation flow, timing, or response logic than FULL Mode provides
  • Modular integration — you want to use LiveAvatar as a specialized video-rendering component within a broader system
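
In this division of labor, your pipeline produces the final TTS audio and streams it to LiveAvatar for rendering. A minimal sketch of the audio-forwarding side, assuming a hypothetical `session.sendAudioChunk` method and an illustrative frame size — check the LiveAvatar docs for the actual API and expected audio format:

```typescript
// Split a PCM buffer into fixed-size frames for streaming to the avatar.
// The frame size (3200 bytes = 100 ms of 16 kHz 16-bit mono) is illustrative,
// not a confirmed LiveAvatar requirement.
function chunkPcm(pcm: Uint8Array, frameBytes = 3200): Uint8Array[] {
  const frames: Uint8Array[] = [];
  for (let offset = 0; offset < pcm.length; offset += frameBytes) {
    frames.push(pcm.subarray(offset, Math.min(offset + frameBytes, pcm.length)));
  }
  return frames;
}

// Hypothetical forwarding loop — `session.sendAudioChunk` is an assumed name:
// for (const frame of chunkPcm(ttsAudio)) await session.sendAudioChunk(frame);
```

The key design point is that LiveAvatar never sees your text or LLM output in LITE Mode — only the synthesized audio you choose to send.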

Quick start

Already have something running? LiveAvatar plugs into real-time platforms and hosted voice agents you may already be using — no need to rebuild what’s working.
If you already run LiveKit, Pipecat, or Agora, add LiveAvatar as a video layer on top of your existing stack.
Provider          Plugin
LiveKit           Virtual Avatar Plugin
Pipecat / Daily   Pipecat Video Transport
Not sure which path fits — or starting from scratch? See Integration Paths for the full breakdown of how LITE Mode sessions are composed and the five ways to build on them.

Credits

LITE Mode costs 1 credit per minute, compared to 2 credits per minute for FULL Mode.
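
The per-minute rates above can be expressed as a small helper. The function name and the round-up billing granularity are illustrative assumptions; only the 1-vs-2 credit rates come from the text:

```typescript
// Per-minute credit rates as stated above.
const CREDITS_PER_MINUTE = { LITE: 1, FULL: 2 } as const;

type Mode = keyof typeof CREDITS_PER_MINUTE;

// Estimate the credit cost of a session. Rounding partial minutes up is an
// assumption — confirm actual billing granularity with LiveAvatar.
function estimateCredits(minutes: number, mode: Mode): number {
  return Math.ceil(minutes) * CREDITS_PER_MINUTE[mode];
}
```

For example, a 10-minute LITE Mode session costs 10 credits, while the same session in FULL Mode costs 20.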

Getting started

Lifecycle

Understand the three phases of a LITE Mode session.

Configuration

Configure avatars and WebRTC settings.

Events

Command and response events via WebSocket.
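
Events like these typically arrive as JSON messages on the session's WebSocket. A minimal dispatch sketch — the event name used below is a placeholder, not a confirmed LiveAvatar event type; see the Events page for the real schema:

```typescript
// Assumed message shape: a JSON object with a `type` field plus an
// arbitrary payload. Concrete event names here are hypothetical.
interface AvatarEvent {
  type: string;
  [key: string]: unknown;
}

type Handler = (event: AvatarEvent) => void;

// Register one handler per event type and route incoming raw messages.
class EventRouter {
  private handlers = new Map<string, Handler>();

  on(type: string, handler: Handler): void {
    this.handlers.set(type, handler);
  }

  // Returns true when a handler consumed the message.
  dispatch(raw: string): boolean {
    const event = JSON.parse(raw) as AvatarEvent;
    const handler = this.handlers.get(event.type);
    if (!handler) return false;
    handler(event);
    return true;
  }
}

// Wiring it to the session WebSocket:
// ws.onmessage = (msg) => router.dispatch(msg.data);
```

Routing by a single `type` field keeps command handling and response handling in one place, which simplifies logging and reconnection logic.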