Pipecat provides two HeyGen integrations that plug into different layers of your pipeline. The right choice depends on who owns the room your end user joins. This page covers both integrations, when to use each, and how they connect to LiveAvatar.
This page assumes you already have a working Pipecat pipeline. If you do not, a different integration path will likely fit better — head back to Integration paths to pick one.
Upstream Pipecat plugin references for both classes are linked under Resources at the bottom of this page.

Prerequisites

  • A LiveAvatar account and API key. Sign up at app.liveavatar.com.
  • A working Pipecat pipeline with your own STT, LLM, and TTS services.

Installation

Both integrations ship with the heygen extra of the pipecat-ai package:
uv add "pipecat-ai[heygen]"
# or, with pip:
pip install "pipecat-ai[heygen]"

Configuration

The integrations read your LiveAvatar key from the environment:
HEYGEN_LIVE_AVATAR_API_KEY=...
You can also pass api_key="..." directly when constructing either class.

Two integrations, two architectures

Pipecat exposes LiveAvatar in two places in the pipeline. The choice comes down to who owns the room the end user joins.
HeyGenTransport
  • Pipeline layer: transport
  • Who owns the room: LiveAvatar (a LiveKit room managed for you)
  • How the avatar reaches the user: the avatar publishes audio + video into the shared room
  • Bot media role: sends TTS audio out, receives user audio in
  • Best for: quick start, a single 1:1 session, no existing transport

HeyGenVideoService
  • Pipeline layer: service node
  • Who owns the room: you (Daily, LiveKit, WebRTC, …)
  • How the avatar reaches the user: the bot receives avatar frames and republishes them through your transport
  • Bot media role: sits in the middle — forwards audio up, relays video + audio back down
  • Best for: an existing Daily/LiveKit/WebRTC stack, recording, fan-out, custom routing
In both cases your pipeline still owns STT, LLM, and TTS. LiveAvatar only renders the avatar.

HeyGenTransport — LiveAvatar owns the room

The transport joins the bot, avatar, and end user into a LiveKit room that LiveAvatar manages. Your pipeline runs STT → LLM → TTS as usual; synthesized audio is forwarded to the avatar, which lip-syncs and publishes audio + video into the same room. The end user subscribes to the avatar directly. No separate output transport — transport.output() ships TTS audio straight to LiveAvatar.

Required fields

HeyGenTransport defaults to the legacy Streaming API. To use LiveAvatar you must set service_type=ServiceType.LIVE_AVATAR and pass a matching LiveAvatarNewSessionRequest.
  • api_key (required): HEYGEN_LIVE_AVATAR_API_KEY
  • session (required): aiohttp.ClientSession (shared across the HTTP + WebSocket lifecycle — don’t close it early)
  • service_type (required): ServiceType.LIVE_AVATAR (defaults to INTERACTIVE_AVATAR, the legacy Streaming API)
  • session_request (required): LiveAvatarNewSessionRequest(avatar_id=..., ...)
  • params (optional): HeyGenParams() — only audio_in_enabled / audio_out_enabled are used; video is handled internally
session_request and service_type must match: LiveAvatarNewSessionRequest pairs with ServiceType.LIVE_AVATAR.

Minimal example

import os
import aiohttp
from pipecat.transports.heygen import HeyGenTransport
from pipecat.services.heygen.client import ServiceType
from pipecat.services.heygen.api_liveavatar import LiveAvatarNewSessionRequest

async with aiohttp.ClientSession() as session:
    transport = HeyGenTransport(
        session=session,
        api_key=os.environ["HEYGEN_LIVE_AVATAR_API_KEY"],
        service_type=ServiceType.LIVE_AVATAR,
        session_request=LiveAvatarNewSessionRequest(
            # Sandbox mode only supports this fixed avatar_id and is_sandbox=True.
            # Remove both when moving to production and use your own avatar_id.
            avatar_id="dd73ea75-1218-4ef3-92ce-606d5f7fbc0a",
            is_sandbox=True,
        ),
    )

Pipeline placement

No HeyGenVideoService node. TTS audio is consumed directly by transport.output():
transport.input → STT → user_agg → LLM → TTS → transport.output → assistant_agg
The client SDK subscribes to avatar video directly from LiveKit; it never flows through the Pipecat pipeline.
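The placement above can be sketched as a Pipecat Pipeline. This is a non-runnable sketch: stt, llm, tts, and context_aggregator stand in for whichever services your existing pipeline already constructs, and transport is the HeyGenTransport from the example above.

```python
from pipecat.pipeline.pipeline import Pipeline

pipeline = Pipeline([
    transport.input(),               # user audio in from the LiveKit room
    stt,
    context_aggregator.user(),
    llm,
    tts,
    transport.output(),              # TTS audio goes straight to LiveAvatar
    context_aggregator.assistant(),
])
```

Note there is no video node anywhere in the list — the avatar's audio and video reach the user via the shared room, not via this pipeline.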

Notes

  • Don’t mix with HeyGenVideoService. Two integrations in the same pipeline = duplicate session and WebSocket. Pick one.
  • Sample rate is fixed at 24 kHz internally; the transport resamples automatically. Overriding audio_out_sample_rate does not propagate.
  • Client kick-off: use on_client_connected, not a participant-join handler. The avatar joins as participant_id == "heygen" and is filtered, so only real users trigger the event.
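The kick-off pattern from the last bullet looks roughly like this. The decorator and event name are the standard Pipecat transport event API; task (your PipelineTask) and the greeting text are assumptions from your own setup:

```python
from pipecat.frames.frames import TTSSpeakFrame

@transport.event_handler("on_client_connected")
async def on_client_connected(transport, client):
    # Fires only for real users: the avatar's "heygen" participant is
    # filtered out, so this won't double-trigger when the avatar joins.
    await task.queue_frames([TTSSpeakFrame("Hi! How can I help you today?")])
```

Using on_client_connected (rather than a raw participant-join callback) is what gives you the "heygen" filtering for free.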

HeyGenVideoService — your transport owns the room

The video service runs as a node inside your pipeline. Your bot keeps its own transport (Daily, LiveKit, WebRTC, etc.) and talks to the end user there. Inside the pipeline, TTS audio is sent up to LiveAvatar, which streams avatar audio + video frames back into the pipeline; the bot then republishes those frames through its transport. Internally the service still uses a HeyGen-managed LiveKit room for inbound avatar frames and a WebSocket for control — both are abstracted away. Your transport stays separate and is the only one your end user joins.

Required fields

HeyGenVideoService defaults to the legacy Streaming API. To use LiveAvatar you must set service_type=ServiceType.LIVE_AVATAR and pass a LiveAvatarNewSessionRequest.
  • api_key (required): HEYGEN_LIVE_AVATAR_API_KEY
  • session (required): aiohttp.ClientSession
  • service_type (required): ServiceType.LIVE_AVATAR (defaults to INTERACTIVE_AVATAR, the legacy Streaming API)
  • session_request (required): LiveAvatarNewSessionRequest(avatar_id=..., ...)
LiveAvatarNewSessionRequest only requires avatar_id. Other fields (mode, video_settings, is_sandbox, avatar_persona, livekit_config) are optional.

Minimal example

import os
import aiohttp
from pipecat.services.heygen import HeyGenVideoService
from pipecat.services.heygen.client import ServiceType
from pipecat.services.heygen.api_liveavatar import LiveAvatarNewSessionRequest

async with aiohttp.ClientSession() as session:
    heygen = HeyGenVideoService(
        api_key=os.environ["HEYGEN_LIVE_AVATAR_API_KEY"],
        session=session,
        service_type=ServiceType.LIVE_AVATAR,
        session_request=LiveAvatarNewSessionRequest(
            # Sandbox mode only supports this fixed avatar_id and is_sandbox=True.
            # Remove both when moving to production and use your own avatar_id.
            avatar_id="dd73ea75-1218-4ef3-92ce-606d5f7fbc0a",
            is_sandbox=True,
        ),
    )

Pipeline placement

HeyGenVideoService must sit after TTS and before transport.output():
transport.input → STT → user_agg → LLM → TTS → heygen → transport.output → assistant_agg
It consumes TTSAudioRawFrame and emits OutputAudioRawFrame + OutputImageRawFrame downstream.
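As a sketch, the same pipeline with the service node in place — not runnable as-is: heygen is the HeyGenVideoService from the example above, and transport here is your own transport (Daily, LiveKit, WebRTC, …), not a HeyGenTransport:

```python
from pipecat.pipeline.pipeline import Pipeline

pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),
    llm,
    tts,
    heygen,                          # consumes TTS audio, emits avatar A/V frames
    transport.output(),              # republishes avatar audio + video to the user
    context_aggregator.assistant(),
])
```

The only structural difference from the HeyGenTransport layout is the heygen node between tts and transport.output(), which is exactly where the audio-for-video swap happens.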

Picking one

  • Use HeyGenTransport if you do not already run a transport, want the simplest path to a working avatar bot, and are fine with LiveAvatar managing the LiveKit room.
  • Use HeyGenVideoService if you already run Daily, LiveKit, or another transport; need the bot in the media path for recording, fan-out, or custom routing; or want avatar video to share the same room as everything else your bot publishes.

Examples

The pipecat-ai/pipecat repo ships runnable examples for both integrations under examples/video-avatar/. Both use sandbox mode, so they run end-to-end without billing.

1. Clone and install

git clone https://github.com/pipecat-ai/pipecat.git
cd pipecat
uv sync --group dev --all-extras --no-extra gstreamer --no-extra local

2. Configure API keys

Create .env at the repo root:
HEYGEN_LIVE_AVATAR_API_KEY=...   # https://app.liveavatar.com → API settings
DEEPGRAM_API_KEY=...             # https://console.deepgram.com
CARTESIA_API_KEY=...             # https://play.cartesia.ai
GOOGLE_API_KEY=...               # https://aistudio.google.com/apikey
DAILY_API_KEY=...                # https://dashboard.daily.co (only for Daily transport)

3. Run examples

HeyGenTransport

LiveAvatar owns the LiveKit room. The script prints a room URL — connect a LiveKit client (e.g. meet.livekit.io) using the livekit_url and client token from the logs to talk to the bot.
uv run python examples/video-avatar/video-avatar-heygen-transport.py

HeyGenVideoService

The avatar rides on a separate transport here. Pick webrtc for a built-in browser-based test, or daily to use Daily.
WebRTC (easiest local test — open the printed URL, default http://localhost:7860):
uv run python examples/video-avatar/video-avatar-heygen-video-service.py --transport webrtc
Daily:
uv run python examples/video-avatar/video-avatar-heygen-video-service.py --transport daily
Both examples set is_sandbox=True and pin avatar_id="dd73ea75-1218-4ef3-92ce-606d5f7fbc0a". Swap in your own avatar_id and drop is_sandbox to run against production avatars.
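Concretely, moving off sandbox means changing only the session request. "your-avatar-id" below is a placeholder for an avatar from your own LiveAvatar account:

```python
from pipecat.services.heygen.api_liveavatar import LiveAvatarNewSessionRequest

# Production: your own avatar_id, and no is_sandbox flag.
session_request = LiveAvatarNewSessionRequest(
    avatar_id="your-avatar-id",
)
```

Everything else in the examples (transport choice, pipeline layout, API key handling) stays the same.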

Resources

HeyGenTransport reference

Upstream Pipecat reference for the transport-layer integration.

HeyGenVideoService reference

Upstream Pipecat reference for the service-node integration.