Real-time, offline, and cross-platform lip sync for MetaHuman and custom characters. Animate character lips in response to audio from various sources.
See the quality of real-time lip sync animation produced by the plugin. These animations can be applied to any supported character, whether it's MetaHuman or a custom character.
Runtime MetaHuman Lip Sync provides a comprehensive system for dynamic lip sync animation, enabling characters to speak naturally in response to audio input from various sources. Despite its name, the plugin works with a wide range of characters beyond just MetaHumans.
Generate lip sync animations in real-time from microphone input or pre-recorded audio
Works with MetaHuman, Daz Genesis 8/9, Reallusion CC3/CC4, Mixamo, ReadyPlayerMe, and more
Supports FACS-based systems, Apple ARKit, Preston Blair phoneme sets, and 3ds Max phoneme systems
Process audio from microphones, imported files, text-to-speech, or any PCM audio data
Set up lip sync animation with just a few Blueprint nodes. The plugin provides a clean, easy-to-use interface for implementing lip sync in your characters.
Integrate lip sync into your character's animation blueprint with the dedicated blending nodes.
Fine-tune the lip sync behavior with adjustable parameters for interpolation speed and reset timing.
Controls how quickly lip movements transition between visemes. Higher values produce faster, snappier transitions; lower values yield smoother, more gradual movement.
The duration in seconds after which lip sync is reset when audio stops, preventing lingering mouth positions.
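To make the two tuning parameters concrete, here is a minimal standalone sketch of how an interpolation speed and a reset delay could interact each frame. All names and the structure are illustrative assumptions for this page, not the plugin's actual API (which is configured via Blueprint nodes):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>

// Hypothetical illustration only: eases current viseme weights toward
// their targets at InterpolationSpeed, and returns the mouth to neutral
// once audio has been silent for ResetDelay seconds.
struct LipSyncState {
    static constexpr std::size_t NumVisemes = 15;
    std::array<float, NumVisemes> Current{};  // weights applied to morph targets
    std::array<float, NumVisemes> Target{};   // weights derived from audio
    float InterpolationSpeed = 25.0f;         // higher = snappier transitions
    float ResetDelay = 0.2f;                  // seconds of silence before reset
    float SilenceTime = 0.0f;                 // time since audio last stopped

    void Tick(float DeltaSeconds, bool bAudioActive) {
        if (bAudioActive) {
            SilenceTime = 0.0f;
        } else {
            SilenceTime += DeltaSeconds;
            if (SilenceTime >= ResetDelay)
                Target.fill(0.0f);            // ease back to a neutral mouth
        }
        // Exponential ease toward the target, clamped so a large frame
        // delta cannot overshoot.
        const float Alpha = std::min(InterpolationSpeed * DeltaSeconds, 1.0f);
        for (std::size_t i = 0; i < NumVisemes; ++i)
            Current[i] += (Target[i] - Current[i]) * Alpha;
    }
};
```

The key design point the real parameters share: interpolation is applied every frame rather than snapping, so lip motion stays smooth even when the audio-derived targets change abruptly.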
The plugin uses a standard set of visemes (visual phonemes) that are mapped to your character's morph targets:
Audio data is received as float PCM format with specified channels and sample rate
The plugin processes the audio to generate viseme data
Visemes drive the lip sync animation using the character's pose asset
The animation is applied to the character in real-time
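The data flow above can be illustrated with a toy stand-in: a function that takes a chunk of float PCM samples and derives a single "mouth open" weight from its RMS energy. The real plugin produces a full set of viseme weights through proper audio analysis; this sketch only shows the PCM-in, weight-out shape of the pipeline.

```cpp
#include <cmath>
#include <cstddef>

// Toy illustration of the pipeline: float PCM samples in, a clamped
// morph-target weight out. Not the plugin's actual viseme generation.
float MouthOpenFromPcm(const float* Samples, std::size_t NumSamples) {
    if (NumSamples == 0) return 0.0f;
    double SumSquares = 0.0;
    for (std::size_t i = 0; i < NumSamples; ++i)
        SumSquares += static_cast<double>(Samples[i]) * Samples[i];
    const double Rms = std::sqrt(SumSquares / NumSamples);
    // Normalized PCM is in [-1, 1], so RMS lands in [0, 1]; clamp anyway.
    return static_cast<float>(Rms > 1.0 ? 1.0 : Rms);
}
```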
UE 5.0 - 5.5
MetaHuman plugin enabled in your project
Runtime Audio Importer plugin
Runtime Text To Speech or Runtime AI Chatbot Integrator plugins
Runtime MetaHuman Lip Sync works seamlessly with other plugins to create complete audio, speech, and animation solutions for your Unreal Engine projects.
Import, stream, and capture audio at runtime to drive lip sync animations. Process audio from files, memory, or microphone input.
Add speech recognition to create interactive characters that respond to voice commands while animating with lip sync.
Generate realistic speech from text offline with 900+ voices and animate character lips in response to the synthesized audio.
Create AI-powered talking characters that respond to user input with natural language and realistic lip sync.
Combine Runtime MetaHuman Lip Sync with our other plugins to create fully interactive characters that can listen, understand, speak, and animate naturally. From voice commands to AI-driven conversations, our plugin ecosystem provides everything you need for next-generation character interaction.
Despite its name, Runtime MetaHuman Lip Sync works with a wide range of characters beyond just MetaHumans. Here's how to set up lip sync for various character systems.
The plugin supports mapping between different viseme standards, allowing you to use characters with various facial animation systems:
Map ARKit blendshapes to visemes
Map Action Units to visemes
Classic animation mouth shapes
Standard 3ds Max phonemes
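Conceptually, supporting a different facial animation standard means building a table from that standard's control names to a shared viseme set. The sketch below shows the idea for ARKit blendshapes; the blendshape names are real ARKit identifiers, but the enum and the specific pairings are illustrative assumptions, not the plugin's actual mapping table.

```cpp
#include <map>
#include <string>

// Hypothetical shared viseme set (abbreviated for illustration).
enum class EViseme { Sil, PP, FF, TH, AA, OH };

// Illustrative ARKit-blendshape-to-viseme table. The real plugin's
// mapping covers the full blendshape and viseme sets.
std::map<std::string, EViseme> MakeArkitToVisemeMap() {
    return {
        {"mouthClose",     EViseme::Sil},
        {"mouthPressLeft", EViseme::PP},
        {"mouthRollLower", EViseme::FF},
        {"tongueOut",      EViseme::TH},
        {"jawOpen",        EViseme::AA},
        {"mouthFunnel",    EViseme::OH},
    };
}
```

The same pattern applies to FACS Action Units, Preston Blair mouth shapes, and 3ds Max phonemes: only the source-side names and pairings change.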
Get started quickly with our detailed documentation and receive support through multiple channels. From basic setup to advanced character configuration, we're here to help you succeed.
Step-by-step guides for MetaHuman and custom characters
Visual guides for setup and configuration
Get real-time help from developers and users
Contact solutions@georgy.dev for tailored solutions
Demo Project Walkthrough
Bring your MetaHuman and custom characters to life with real-time lip sync animation. Create more immersive and engaging experiences with characters that speak naturally in response to audio input.