Real-Time Lip Syncing for Unity

SpeechBlend is a Unity package that provides real-time lip syncing for your characters. It works by analyzing audio from any Audio Source and using machine learning to predict realistic mouth shapes (visemes). The package supports various viseme blendshape sets, including Daz Studio, Character Creator 3, iClone, Adobe Fuse, and more. You can use SpeechBlend with a single jaw joint or mouth-open blendshape for simple mouth tracking, or with a character model that has viseme blendshapes for more realistic lip syncing. It also supports live microphone input and offers various options for tweaking the viseme prediction for optimal performance.
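To give a feel for the simpler of the two modes, here is a minimal sketch of amplitude-driven mouth tracking built only on standard Unity APIs (AudioSource, Microphone, SkinnedMeshRenderer). It is not SpeechBlend's actual API; the component name, fields, and blendshape index are assumptions for illustration, and SpeechBlend's ML-based viseme prediction goes well beyond this kind of loudness mapping.

using UnityEngine;

// Hypothetical illustration only: drives a single "mouth open" blendshape from
// the amplitude of an AudioSource, similar in spirit to a simple jaw-tracking
// mode. Names and defaults are assumptions, not SpeechBlend's API.
[RequireComponent(typeof(AudioSource))]
public class SimpleMouthTracker : MonoBehaviour
{
    public SkinnedMeshRenderer faceMesh;      // mesh that owns the blendshape
    public int mouthOpenBlendshapeIndex = 0;  // index of the "mouth open" shape
    public float sensitivity = 400f;          // scales amplitude to a 0-100 weight
    public float smoothing = 12f;             // higher = snappier response
    public bool useMicrophone = false;        // feed live mic input into the AudioSource

    private AudioSource source;
    private readonly float[] samples = new float[256];
    private float currentWeight;

    void Start()
    {
        source = GetComponent<AudioSource>();

        if (useMicrophone)
        {
            // Loop a short microphone clip through the AudioSource so the
            // same amplitude analysis works for live input.
            source.clip = Microphone.Start(null, true, 1, 44100);
            source.loop = true;
            while (Microphone.GetPosition(null) <= 0) { } // wait for the mic to start
            source.Play();
        }
    }

    void Update()
    {
        // Read the most recent output samples and compute an RMS amplitude.
        source.GetOutputData(samples, 0);
        float sum = 0f;
        for (int i = 0; i < samples.Length; i++)
            sum += samples[i] * samples[i];
        float rms = Mathf.Sqrt(sum / samples.Length);

        // Map amplitude to a 0-100 blendshape weight and smooth it over time.
        float target = Mathf.Clamp(rms * sensitivity, 0f, 100f);
        currentWeight = Mathf.Lerp(currentWeight, target, Time.deltaTime * smoothing);
        faceMesh.SetBlendShapeWeight(mouthOpenBlendshapeIndex, currentWeight);
    }
}

A viseme-based setup works the same way at the Unity level (a SkinnedMeshRenderer with the appropriate blendshape set plus an AudioSource), but predicts weights for a full set of mouth shapes rather than a single open/close value.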
