Discussions
Custom Mode Events - WebSocket (Where is it documented?)
We are implementing custom mode, and there should be a WebSocket connection for passing audio so the avatar can speak. Where is the documentation for that WebSocket (for example, the URL and the authentication we should use)?
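For reference, here is a minimal sketch of what we are trying to build. The endpoint URL, the token-based auth, and the audio message format are all placeholders we made up, and they are exactly the details we are asking to have documented.

```typescript
// Rough sketch of what we expect custom mode audio streaming to look like.
// The URL, auth scheme, and message schema below are placeholders.
import WebSocket from "ws";

const SESSION_TOKEN = process.env.LIVEAVATAR_SESSION_TOKEN!; // assumed auth token
const WS_URL = "wss://example.liveavatar.endpoint/v1/audio";  // placeholder URL

const ws = new WebSocket(`${WS_URL}?token=${SESSION_TOKEN}`);

ws.on("open", () => {
  // Presumably audio is sent as small PCM chunks; the required encoding is unclear.
  const pcmChunk: Buffer = Buffer.alloc(3200); // e.g. 100 ms of 16 kHz, 16-bit mono
  ws.send(pcmChunk);
});

ws.on("message", (data) => {
  // Presumably the server sends events (avatar state, errors) back on the same socket.
  console.log("server event:", data.toString());
});
```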
Where can I find the context and the avatar ID?
I am looking to migrate from Streaming v2 to LiveAvatar, and I want to know where I can get the context.
What is the expiration time of the Session Token?
What is the expiration time of the Session Token? Does it expire automatically after calling the start-session interface?
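For context, this is roughly how the token is obtained and used on our side. The endpoint paths and response shapes are placeholders from our own wrapper, not the documented API; the question is whether the token returned by the first call stays valid for some TTL, or is invalidated as soon as the second call consumes it.

```typescript
// Placeholder endpoints -- only meant to illustrate the question about token lifetime.
async function createSessionToken(apiKey: string): Promise<string> {
  const res = await fetch("https://example.liveavatar.endpoint/v1/session_token", {
    method: "POST",
    headers: { "x-api-key": apiKey },
  });
  const body = await res.json();
  return body.token; // assumed response shape
}

async function startSession(sessionToken: string): Promise<void> {
  // Does the token become unusable right after this call, or only after a fixed TTL?
  await fetch("https://example.liveavatar.endpoint/v1/session/start", {
    method: "POST",
    headers: { Authorization: `Bearer ${sessionToken}` },
  });
}
```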
Give the API Reference for Messages and Replies From LiveAvatar
Please share the Voice_ID, as well as the API reference for the messages and replies from LiveAvatar.
Custom Avatar Trial Options
Is there a way to try the custom avatar feature without subscribing to the $99 plan?
The pre-trained avatars work extremely well, but I’d like to evaluate the quality of a custom avatar before committing.
If a trial isn’t available, do you have any sample demo videos that show the output quality of a custom avatar?
Custom Avatar Video Requirements for LiveAvatar
Is it possible to upload a pre-recorded video instead of capturing it live?
If so, would a video shorter than 2 minutes still be valid for training and generating a usable avatar?
How do I get the voice_id for my custom avatar?
I am creating a custom avatar, but I can't see how to get the voice_id. Will a new one be created for it when the avatar is ready?
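For clarity, this is what I expect to be able to do once the avatar has finished training. The endpoint and the response shape are guesses on my part, which is exactly what I can't find documented.

```typescript
// Guessed endpoint and response shape -- this is what I'm asking about.
interface AvatarInfo {
  avatar_id: string;
  voice_id?: string; // hoping this is populated once training completes
  status: "training" | "ready";
}

async function findMyAvatarVoice(apiKey: string, avatarId: string): Promise<void> {
  const res = await fetch("https://example.liveavatar.endpoint/v1/avatars", {
    headers: { "x-api-key": apiKey },
  });
  const { avatars } = (await res.json()) as { avatars: AvatarInfo[] };
  const mine = avatars.find((a) => a.avatar_id === avatarId);
  console.log("voice_id:", mine?.voice_id ?? "not available yet");
}
```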
Debugging and fine-tuning the context
I would like to suggest an enhancement to the development environment. It would be nice to have a button in the context creation tool that opens a GPT-style text conversation with the context, for debugging and fine-tuning. That would save developers from burning expensive credits (and ruining the ozone layer) while testing. Also, shouting and talking at a screen while trying to upset the AI really does look stupid...
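To make the suggestion concrete, here is a sketch of the kind of text-only debugging loop I have in mind. The /chat endpoint used here is hypothetical; nothing like it exists as far as I can tell, which is the point of the request.

```typescript
// Sketch of a text-only debugging loop against a context.
// The /v1/context/{id}/chat endpoint is hypothetical.
import * as readline from "node:readline/promises";

async function debugContext(apiKey: string, contextId: string): Promise<void> {
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  while (true) {
    const userText = await rl.question("> ");
    if (userText === "/quit") break;
    const res = await fetch(
      `https://example.liveavatar.endpoint/v1/context/${contextId}/chat`,
      {
        method: "POST",
        headers: { "x-api-key": apiKey, "Content-Type": "application/json" },
        body: JSON.stringify({ text: userText }),
      }
    );
    const { reply } = (await res.json()) as { reply: string };
    console.log(reply); // the same answer the avatar would speak, minus the credits
  }
  rl.close();
}
```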
How to get the call summary after the call is finished
I am creating an interview agent, and I need to retrieve a summary of the call once the session has finished.
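Something like the following is what I'm hoping for after the call ends. The endpoint and response shape here are my own invention, purely to illustrate the question.

```typescript
// Hypothetical post-call summary fetch -- endpoint and shape are not documented anywhere I can find.
interface CallSummary {
  session_id: string;
  transcript: { role: "user" | "avatar"; text: string }[];
  summary?: string;
}

async function fetchCallSummary(apiKey: string, sessionId: string): Promise<CallSummary> {
  const res = await fetch(
    `https://example.liveavatar.endpoint/v1/sessions/${sessionId}/summary`,
    { headers: { "x-api-key": apiKey } }
  );
  return (await res.json()) as CallSummary;
}
```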
How do I send text for TTS in Full Mode (avatar.speak_text command)?
I'm developing a chat application that integrates LiveAvatar in Full Mode. I want the avatar to speak AI-generated text responses from my backend chatbot. I heard that this is possible through command events, but I could not get it working.
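Here is what I have tried so far. The event name (avatar.speak_text) and the idea of sending it as JSON over the session's WebSocket are guesses pieced together from other posts; I could not find any of this documented.

```typescript
// Attempted speak command. The event name and payload shape are guesses.
import WebSocket from "ws";

function speakText(ws: WebSocket, text: string): void {
  ws.send(
    JSON.stringify({
      type: "avatar.speak_text", // assumed command event name
      payload: { text },          // assumed payload shape
    })
  );
}

// In my app, the text comes from my backend chatbot, roughly like:
//   const reply = await myChatbot.generateReply(userMessage);
//   speakText(sessionSocket, reply);
```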