Pipecat provides two HeyGen integrations that plug into different layers of your pipeline. The right choice depends on who owns the room your end user joins. This page covers both integrations, when to use each, and how they connect to LiveAvatar.
This page assumes you already have a working Pipecat pipeline. If you do not, a different integration path will likely fit better — head back to Integration paths to pick one.
Prerequisites
- A LiveAvatar account and API key. Sign up at app.liveavatar.com.
- A working Pipecat pipeline with your own STT, LLM, and TTS services.
Installation
Both integrations are available in the pipecat-ai plugin for HeyGen:
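A typical install, assuming the plugin is published as a `heygen` extra of `pipecat-ai` (check the extra name against your Pipecat version):

```shell
# Install Pipecat with the HeyGen plugin (extra name assumed)
pip install "pipecat-ai[heygen]"
```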
Configuration
The integrations read your LiveAvatar key from the HEYGEN_LIVE_AVATAR_API_KEY environment variable. Alternatively, pass api_key="..." directly when constructing either class.
Two integrations, two architectures
Pipecat exposes LiveAvatar in two places in the pipeline. The choice comes down to who owns the room the end user joins.

| | HeyGenTransport | HeyGenVideoService |
|---|---|---|
| Pipeline layer | Transport | Service node |
| Who owns the room | LiveAvatar (LiveKit room managed for you) | You (Daily, LiveKit, WebRTC, …) |
| How avatar reaches the user | Avatar publishes audio + video into the shared room | Bot receives avatar frames, republishes through your transport |
| Bot media role | Sends TTS audio out, receives user audio in | Sits in the middle: forwards audio up, relays video + audio back down |
| Best for | Quick start, single 1:1 session, no existing transport | Existing Daily/LiveKit/WebRTC stack, recording, fan-out, custom routing |
HeyGenTransport — LiveAvatar owns the room
The transport joins the bot, avatar, and end user into a LiveKit room that LiveAvatar manages. Your pipeline runs STT → LLM → TTS as usual; synthesized audio is forwarded to the avatar, which lip-syncs and publishes audio + video into the same room. The end user subscribes to the avatar directly.
No separate output transport — transport.output() ships TTS audio straight to LiveAvatar.
Required fields
HeyGenTransport defaults to the legacy Streaming API. To use LiveAvatar you must set service_type=ServiceType.LIVE_AVATAR and pass a matching LiveAvatarNewSessionRequest.
| Field | Required | Notes |
|---|---|---|
| api_key | yes | HEYGEN_LIVE_AVATAR_API_KEY |
| session | yes | aiohttp.ClientSession (shared across HTTP + WS lifecycle; don't close early) |
| service_type | yes | ServiceType.LIVE_AVATAR (defaults to INTERACTIVE_AVATAR, the legacy Streaming API) |
| session_request | yes | LiveAvatarNewSessionRequest(avatar_id=..., ...) |
| params | no | HeyGenParams(); only audio_in_enabled / audio_out_enabled are used; video is handled internally |
session_request and service_type must match: LiveAvatarNewSessionRequest pairs with ServiceType.LIVE_AVATAR.
Minimal example
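A minimal sketch of constructing the transport for LiveAvatar. The module paths below are assumptions and may differ across Pipecat versions; confirm them against the upstream reference linked at the bottom of this page.

```python
import aiohttp

# Module paths are assumptions; check your installed Pipecat version.
from pipecat.transports.heygen import (
    HeyGenTransport,
    HeyGenParams,
    ServiceType,
    LiveAvatarNewSessionRequest,
)


async def make_transport(session: aiohttp.ClientSession) -> HeyGenTransport:
    # `session` must stay open for the whole call lifecycle: it backs both
    # the HTTP session-creation request and the control WebSocket.
    return HeyGenTransport(
        api_key="...",  # or read HEYGEN_LIVE_AVATAR_API_KEY yourself
        session=session,
        # Required: without this the transport targets the legacy Streaming API.
        service_type=ServiceType.LIVE_AVATAR,
        session_request=LiveAvatarNewSessionRequest(
            avatar_id="your-avatar-id",  # the only required field
        ),
        params=HeyGenParams(audio_in_enabled=True, audio_out_enabled=True),
    )
```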
Pipeline placement
No HeyGenVideoService node. TTS audio is consumed directly by transport.output():
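A sketch of the placement, where `stt`, `llm`, and `tts` stand in for your own services and `transport` is the HeyGenTransport from the example above:

```python
from pipecat.pipeline.pipeline import Pipeline

# stt, llm, tts are your own services; transport is a HeyGenTransport.
pipeline = Pipeline([
    transport.input(),   # user audio in from the LiveAvatar-managed room
    stt,
    llm,
    tts,
    transport.output(),  # TTS audio straight to LiveAvatar; no video node
])
```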
Notes
- Don't mix with HeyGenVideoService. Two integrations in the same pipeline = duplicate session and WebSocket. Pick one.
- Sample rate is fixed at 24 kHz internally; the transport resamples automatically. Overriding audio_out_sample_rate does not propagate.
- Client kick-off: use on_client_connected, not a participant-join handler. The avatar joins as participant_id == "heygen" and is filtered, so only real users trigger the event.
HeyGenVideoService — your transport owns the room
The video service runs as a node inside your pipeline. Your bot keeps its own transport (Daily, LiveKit, WebRTC, etc.) and talks to the end user there. Inside the pipeline, TTS audio is sent up to LiveAvatar, which streams avatar audio + video frames back into the pipeline; the bot then republishes those frames through its transport.
Internally the service still uses a HeyGen-managed LiveKit room for inbound avatar frames and a WebSocket for control — both are abstracted away. Your transport stays separate and is the only one your end user joins.
Required fields
HeyGenVideoService defaults to the legacy Streaming API. To use LiveAvatar you must set service_type=ServiceType.LIVE_AVATAR and pass a LiveAvatarNewSessionRequest.
| Field | Required | Notes |
|---|---|---|
| api_key | yes | HEYGEN_LIVE_AVATAR_API_KEY |
| session | yes | aiohttp.ClientSession |
| service_type | yes | ServiceType.LIVE_AVATAR (defaults to INTERACTIVE_AVATAR, the legacy Streaming API) |
| session_request | yes | LiveAvatarNewSessionRequest(avatar_id=..., ...) |
LiveAvatarNewSessionRequest only requires avatar_id. Other fields (mode, video_settings, is_sandbox, avatar_persona, livekit_config) are optional.
Minimal example
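A minimal sketch of constructing the service for LiveAvatar. As with the transport, the module paths here are assumptions; verify them against your Pipecat version.

```python
import aiohttp

# Module paths are assumptions; check your installed Pipecat version.
from pipecat.services.heygen.video import HeyGenVideoService
from pipecat.services.heygen.api import (
    ServiceType,
    LiveAvatarNewSessionRequest,
)


async def make_avatar(session: aiohttp.ClientSession) -> HeyGenVideoService:
    return HeyGenVideoService(
        api_key="...",  # or read HEYGEN_LIVE_AVATAR_API_KEY yourself
        session=session,
        # Required: without this the service targets the legacy Streaming API.
        service_type=ServiceType.LIVE_AVATAR,
        session_request=LiveAvatarNewSessionRequest(avatar_id="your-avatar-id"),
    )
```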
Pipeline placement
HeyGenVideoService must sit after TTS and before transport.output():
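A sketch of the placement, where `transport` is your own (Daily, LiveKit, WebRTC, ...), `avatar` is the HeyGenVideoService instance, and `stt`/`llm`/`tts` stand in for your own services:

```python
from pipecat.pipeline.pipeline import Pipeline

# transport here is your own (Daily, LiveKit, WebRTC, ...).
pipeline = Pipeline([
    transport.input(),   # user audio from your room
    stt,
    llm,
    tts,
    avatar,              # after TTS: consumes TTS audio, emits avatar frames
    transport.output(),  # republishes avatar audio + video into your room
])
```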
The service consumes TTSAudioRawFrame and emits OutputAudioRawFrame + OutputImageRawFrame downstream.
Picking one
- Use HeyGenTransport if you do not already run a transport, want the simplest path to a working avatar bot, and are fine with LiveAvatar managing the LiveKit room.
- Use HeyGenVideoService if you already run Daily, LiveKit, or another transport; need the bot in the media path for recording, fan-out, or custom routing; or want avatar video to share the same room as everything else your bot publishes.
Examples
The pipecat-ai/pipecat repo ships runnable examples for both integrations under examples/video-avatar/. Both use sandbox mode, so they run end-to-end without billing.
1. Clone and install
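Roughly (the exact dependency setup is documented in the examples directory; the extra name below is an assumption):

```shell
git clone https://github.com/pipecat-ai/pipecat.git
cd pipecat
# Extra name assumed; the examples directory documents the exact steps.
pip install -e ".[heygen]"
```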
2. Configure API keys
Create .env at the repo root:
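Only the LiveAvatar key is named on this page; the remaining keys depend on which STT/LLM/TTS services the examples are wired to, so the names below are placeholders:

```ini
HEYGEN_LIVE_AVATAR_API_KEY=your-liveavatar-key
# Plus keys for whichever STT/LLM/TTS services the examples use
# (placeholder names; check the example scripts):
OPENAI_API_KEY=...
DEEPGRAM_API_KEY=...
```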
3. Run examples
HeyGenTransport
LiveAvatar owns the LiveKit room. The script prints a room URL — connect a LiveKit client (e.g. meet.livekit.io) using the livekit_url and client token from the logs to talk to the bot.
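Something like the following, though the script filename here is illustrative; check examples/video-avatar/ for the actual name:

```shell
# Filename is illustrative; see examples/video-avatar/ for the real one.
python examples/video-avatar/heygen_transport_example.py
```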
HeyGenVideoService
Avatar bolted onto a separate transport. Pick webrtc for a built-in browser-based test, or daily to use Daily.
WebRTC (easiest local test — open the printed URL, default http://localhost:7860):
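Something like the following, though the script filename and flag are illustrative; check examples/video-avatar/ for the actual name and supported transport values:

```shell
# Filename and flag are illustrative; see examples/video-avatar/.
python examples/video-avatar/heygen_video_service_example.py --transport webrtc
```

Swapping webrtc for daily (with Daily credentials in .env) would run the same example against a Daily room instead.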
Resources
HeyGenTransport reference
Upstream Pipecat reference for the transport-layer integration.
HeyGenVideoService reference
Upstream Pipecat reference for the service-node integration.