{"id":1479,"date":"2026-02-13T15:39:25","date_gmt":"2026-02-13T07:39:25","guid":{"rendered":"https:\/\/gaga.art\/blog\/?p=1479"},"modified":"2026-02-05T15:51:54","modified_gmt":"2026-02-05T07:51:54","slug":"liveportrait","status":"publish","type":"post","link":"https:\/\/gaga.art\/blog\/liveportrait\/","title":{"rendered":"LivePortrait Install: Turn Static Photos Into Animated Portraits"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"685\" src=\"https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/liveportrait-1-1024x685.webp\" alt=\"liveportrait\" class=\"wp-image-1482\" srcset=\"https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/liveportrait-1-1024x685.webp 1024w, https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/liveportrait-1-300x201.webp 300w, https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/liveportrait-1-768x514.webp 768w, https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/liveportrait-1-1536x1028.webp 1536w, https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/liveportrait-1-2048x1370.webp 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"key-takeaways\"><strong>Key Takeaways<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>LivePortrait<\/strong> is a free, open-source AI tool that animates static portrait photos using driving videos or real-time webcam input<\/li>\n\n\n\n<li>Developed by Kuaishou&#8217;s VGI team, achieving <strong>12.8ms inference speed<\/strong> on RTX 4090\u201420-30x faster than diffusion-based methods<\/li>\n\n\n\n<li>Supports both <strong>human and animal<\/strong> portrait animation with precise expression retargeting<\/li>\n\n\n\n<li>Available on <strong>Hugging Face<\/strong>, <strong>ComfyUI<\/strong>, and as a standalone Python application<\/li>\n\n\n\n<li>No expensive hardware required\u2014runs 
on RTX 4060 with 32GB RAM<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-rank-math-toc-block has-custom-cd-994-c-color has-text-color has-link-color wp-elements-51e11cab1804de7449c2b6da0901c90e\" id=\"rank-math-toc\"><p>Table of Contents<\/p><nav><ul><li><a href=\"#key-takeaways\">Key Takeaways<\/a><\/li><li><a href=\"#what-is-live-portrait\">What Is LivePortrait?<\/a><ul><li><a href=\"#core-capabilities-at-a-glance\">Core Capabilities at a Glance<\/a><\/li><\/ul><\/li><li><a href=\"#live-portrait-hugging-face-getting-started-without-installation\">LivePortrait Hugging Face: Getting Started Without Installation<\/a><ul><li><a href=\"#what-you-can-do-on-hugging-face\">What You Can Do on Hugging Face<\/a><\/li><\/ul><\/li><li><a href=\"#comfy-ui-live-portrait-advanced-workflows-for-creators\">ComfyUI LivePortrait: Advanced Workflows for Creators<\/a><ul><li><a href=\"#top-comfy-ui-live-portrait-implementations\">Top ComfyUI LivePortrait Implementations<\/a><\/li><li><a href=\"#basic-comfy-ui-live-portrait-workflow\">Basic ComfyUI LivePortrait Workflow<\/a><\/li><\/ul><\/li><li><a href=\"#installing-live-portrait-locally-step-by-step-guide\">Installing LivePortrait Locally: Step-by-Step Guide<\/a><ul><li><a href=\"#system-requirements\">System Requirements<\/a><\/li><li><a href=\"#installation-process\">Installation Process<\/a><\/li><\/ul><\/li><li><a href=\"#how-to-use-live-portrait-practical-applications\">How to Use LivePortrait: Practical Applications<\/a><ul><\/ul><\/li><li><a href=\"#how-live-portrait-ai-works-the-technology-behind-the-animation\">How LivePortrait AI Works: The Technology Behind the Animation<\/a><ul><\/ul><\/li><li><a href=\"#live-portrait-vs-competitors-what-makes-it-stand-out\">LivePortrait vs. 
Competitors: What Makes It Stand Out<\/a><\/li><li><a href=\"#common-live-portrait-issues-and-solutions\">Common LivePortrait Issues and Solutions<\/a><ul><li><a href=\"#issue-1-cuda-out-of-memory-error\">Issue 1: &#8220;CUDA out of memory&#8221; Error<\/a><\/li><li><a href=\"#issue-2-poor-stitching-quality-visible-face-boundaries\">Issue 2: Poor Stitching Quality (Visible Face Boundaries)<\/a><\/li><li><a href=\"#issue-3-jittery-animation-in-long-videos\">Issue 3: Jittery Animation in Long Videos<\/a><\/li><li><a href=\"#issue-4-driving-video-auto-crop-misaligns-face\">Issue 4: Driving Video Auto-Crop Misaligns Face<\/a><\/li><li><a href=\"#issue-5-mac-os-performance-too-slow\">Issue 5: macOS Performance Too Slow<\/a><\/li><\/ul><\/li><li><a href=\"#advanced-use-cases-for-live-portrait-ai\">Advanced Use Cases for LivePortrait AI<\/a><ul><\/ul><\/li><li><a href=\"#bonus-gaga-ai-video-generator-for-complete-avatar-creation\">Bonus: Gaga AI Video Generator for Complete Avatar Creation<\/a><ul><\/ul><\/li><li><a href=\"#conclusion-why-live-portrait-matters-in-2025\">Conclusion: Why LivePortrait Matters in 2025<\/a><\/li><li><a href=\"#frequently-asked-questions-faq\">Frequently Asked Questions (FAQ)<\/a><ul><\/ul><\/li><\/ul><\/nav><\/div>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-is-live-portrait\"><strong>What Is LivePortrait?<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<p><a href=\"https:\/\/github.com\/KlingTeam\/LivePortrait\" rel=\"nofollow noopener\" target=\"_blank\"><strong>LivePortrait<\/strong><\/a><strong> is an efficient, video-driven portrait animation framework that generates realistic animated videos from a single static image.<\/strong> Unlike traditional animation tools, it uses implicit keypoint technology to transfer facial expressions, head movements, and micro-expressions from a driving video to your source photo while preserving the original identity features.<\/p>\n\n\n\n<p>The project has gained <strong>17,000+ GitHub 
stars<\/strong> and is now integrated into major platforms including Kuaishou, Douyin (TikTok China), and WeChat Channels, proving its production-ready quality.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"core-capabilities-at-a-glance\" style=\"font-size:24px\"><strong>Core Capabilities at a Glance<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>LivePortrait delivers four primary functions:<\/p>\n\n\n\n<p>1. <strong>Expression Transfer<\/strong>: Map facial expressions from any driving video to your static portrait<\/p>\n\n\n\n<p>2. <strong>Real-time Animation<\/strong>: Use your webcam to control portrait expressions live (virtual avatar applications)<\/p>\n\n\n\n<p>3. <strong>Stitching Control<\/strong>: Seamlessly blend animated faces back into original images, maintaining stable backgrounds and shoulders<\/p>\n\n\n\n<p>4. <strong>Retargeting Precision<\/strong>: Independently adjust eye openness and lip movements with ratio controls<\/p>\n\n\n\n<ol class=\"wp-block-list\"><\/ol>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"live-portrait-hugging-face-getting-started-without-installation\"><strong>LivePortrait Hugging Face: Getting Started Without Installation<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<p>The fastest way to try <strong>LivePortrait AI<\/strong> is through the official Hugging Face Space:<\/p>\n\n\n\n<p><strong>Access here<\/strong>:<a href=\"https:\/\/huggingface.co\/spaces\/KlingTeam\/LivePortrait\" rel=\"nofollow noopener\" target=\"_blank\"> https:\/\/huggingface.co\/spaces\/KlingTeam\/LivePortrait<\/a><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"what-you-can-do-on-hugging-face\" style=\"font-size:24px\"><strong>What You Can Do on Hugging Face<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>1. Upload a portrait photo (JPG\/PNG, 1:1 aspect ratio recommended)<\/p>\n\n\n\n<p>2. Choose a driving video or use preset expressions<\/p>\n\n\n\n<p>3. 
Adjust retargeting controls:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Eye Retargeting<\/strong>: Control eyelid openness (0.0 = closed, 1.0 = source image default)<\/li>\n\n\n\n<li><strong>Lip Retargeting<\/strong>: Adjust mouth movement intensity<\/li>\n\n\n\n<li><strong>Head Rotation<\/strong>: Fine-tune pitch, yaw, and roll angles<\/li>\n<\/ul>\n\n\n\n<p>4. Generate and download your animated video<\/p>\n\n\n\n<ol class=\"wp-block-list\"><\/ol>\n\n\n\n<p><strong>Pro tip<\/strong>: For best results, use driving videos where the subject faces forward in the first frame with a neutral expression. This establishes a stable reference baseline.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"comfy-ui-live-portrait-advanced-workflows-for-creators\"><strong>ComfyUI LivePortrait: Advanced Workflows for Creators<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>ComfyUI LivePortrait<\/strong> nodes enable visual programming workflows, giving technical users granular control over the animation pipeline.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"How to use LivePortrait. 
Learn to Animate AI Faces in ComfyUI.\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/8-IcDDmiUMM?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"top-comfy-ui-live-portrait-implementations\" style=\"font-size:24px\"><strong>Top ComfyUI LivePortrait Implementations<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Three community projects dominate the ComfyUI ecosystem:<\/p>\n\n\n\n<p class=\"has-vivid-green-cyan-color has-text-color has-link-color wp-elements-aee7ce9df97d12f71c9e45183cd54926\"><strong>1. ComfyUI-AdvancedLivePortrait<\/strong> (by @PowerHouseMan)<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Real-time preview during generation<\/li>\n\n\n\n<li>Expression interpolation between keyframes<\/li>\n\n\n\n<li>Batch processing for multiple portraits<\/li>\n\n\n\n<li>Inspired many derivative projects<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p class=\"has-vivid-green-cyan-color has-text-color has-link-color wp-elements-fcc99f7f2e31f2530298c504453816d1\"><strong>2. ComfyUI-LivePortraitKJ<\/strong> (by @kijai)<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>MediaPipe integration as InsightFace alternative<\/li>\n\n\n\n<li>Lower VRAM requirements<\/li>\n\n\n\n<li>Better macOS compatibility<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p class=\"has-vivid-green-cyan-color has-text-color has-link-color wp-elements-2ae178a6c1d9f9fc35a9a52122912428\"><strong>3. 
comfyui-liveportrait<\/strong> (by @shadowcz007)<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Multi-face detection and animation<\/li>\n\n\n\n<li>Expression blending controls<\/li>\n\n\n\n<li>Includes comprehensive video tutorial<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"basic-comfy-ui-live-portrait-workflow\" style=\"font-size:24px\"><strong>Basic ComfyUI LivePortrait Workflow<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>[Load Image] \u2192 [LivePortrait Nodes] \u2192 [Expression Retargeting] \u2192 [Stitching Module] \u2192 [Output Video]\n                      \u2191\n           [Load Driving Video]<\/code><\/pre>\n\n\n\n<p>Key parameters to adjust:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Relative motion<\/strong>: Enable to preserve source image&#8217;s neutral expression baseline<\/li>\n\n\n\n<li><strong>Stitching strength<\/strong>: Balance between animation fidelity and background stability<\/li>\n\n\n\n<li><strong>Smooth strength<\/strong>: Reduce jitter in long video sequences (0.5 recommended)<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"installing-live-portrait-locally-step-by-step-guide\"><strong>Installing LivePortrait Locally: Step-by-Step Guide<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"system-requirements\" style=\"font-size:24px\"><strong>System Requirements<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-link-color wp-elements-7ce0645b66dacc9799da434bb6268e18\"><strong>Minimum specs<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>NVIDIA GPU with 8GB VRAM (RTX 3060 or better)<\/li>\n\n\n\n<li>16GB system RAM (32GB recommended)<\/li>\n\n\n\n<li>10GB free disk space<\/li>\n\n\n\n<li>Windows 
11, Linux (Ubuntu 24), or macOS with Apple Silicon<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-link-color wp-elements-905466c5ea1533b59fc6fd74cf62b1f9\"><strong>Software dependencies<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Python 3.10<\/li>\n\n\n\n<li>CUDA Toolkit 11.8 (Windows\/Linux)<\/li>\n\n\n\n<li>Conda package manager<\/li>\n\n\n\n<li>Git<\/li>\n\n\n\n<li>FFmpeg<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"installation-process\" style=\"font-size:24px\"><strong>Installation Process<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-link-color wp-elements-4ca2b0b381f0d0cf31577362ff21c11b\"><strong>Step 1: Clone the Repository<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>git clone https:\/\/github.com\/KwaiVGI\/LivePortrait.git\ncd LivePortrait<\/code><\/pre>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-link-color wp-elements-f9355708cc49837662f08a0d68a3fc47\"><strong>Step 2: Create Python Environment<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>conda create -n LivePortrait python=3.10\nconda activate LivePortrait<\/code><\/pre>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-link-color wp-elements-3c0e0de7390cbfd4707373fcaa10e172\"><strong>Step 3: Install CUDA-Compatible PyTorch<\/strong><\/p>\n\n\n\n<p>Check your CUDA version:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>nvcc -V<\/code><\/pre>\n\n\n\n<p>Install matching PyTorch (for CUDA 11.8):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https:\/\/download.pytorch.org\/whl\/cu118<\/code><\/pre>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-link-color wp-elements-6bcc04a44747be053afb4bb3c2a6e115\"><strong>Step 4: Install Dependencies<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>pip install -r requirements.txt\nconda install ffmpeg<\/code><\/pre>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-link-color wp-elements-7efcf5e09f673101583cf9d30ec736e8\"><strong>Step 5: Download Pretrained 
Weights<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>huggingface-cli download KwaiVGI\/LivePortrait --local-dir pretrained_weights --exclude \"*.git*\" \"README.md\" \"docs\"<\/code><\/pre>\n\n\n\n<p>Alternative: Download from<a href=\"https:\/\/pan.baidu.com\/s\/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn\" rel=\"nofollow noopener\" target=\"_blank\"> Baidu Cloud<\/a> if HuggingFace is inaccessible.<\/p>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-link-color wp-elements-535bc007abc94dbcdfc5a78afb6ccacc\"><strong>Step 6: Verify Installation<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python inference.py<\/code><\/pre>\n\n\n\n<p>Successful execution produces animations\/s6--d0_concat.mp4.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-to-use-live-portrait-practical-applications\"><strong>How to Use LivePortrait: Practical Applications<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"1-animate-a-static-photo-with-custom-driving-video\" style=\"font-size:24px\"><strong>1. Animate a Static Photo with Custom Driving Video<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python inference.py -s path\/to\/your_photo.jpg -d path\/to\/driving_video.mp4<\/code><\/pre>\n\n\n\n<p><strong>Auto-crop driving videos<\/strong> to focus on faces:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python inference.py -s source.jpg -d driving.mp4 --flag_crop_driving_video<\/code><\/pre>\n\n\n\n<p>Adjust crop parameters if auto-detection fails:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>--scale_crop_driving_video: Controls zoom level<\/li>\n\n\n\n<li>--vy_ratio_crop_driving_video: Adjusts vertical offset<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"2-create-reusable-expression-templates\" style=\"font-size:24px\"><strong>2. 
Create Reusable Expression Templates<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Generate privacy-safe .pkl motion files:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python inference.py -s source.jpg -d driving.mp4\n# Outputs both .mp4 and .pkl files<\/code><\/pre>\n\n\n\n<p>Reuse templates without exposing original driving video:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python inference.py -s new_photo.jpg -d expression_template.pkl<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"3-real-time-webcam-animation-gradio-interface\" style=\"font-size:24px\"><strong>3. Real-Time Webcam Animation (Gradio Interface)<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Launch interactive web UI:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python app.py<\/code><\/pre>\n\n\n\n<p>Access at http:\/\/localhost:7860 with controls for:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Live webcam input as driving source<\/li>\n\n\n\n<li>Rotation sliders (Yaw: \u00b120\u00b0, Pitch: \u00b120\u00b0, Roll: \u00b120\u00b0)<\/li>\n\n\n\n<li>Expression intensity multipliers<\/li>\n\n\n\n<li>Stitching on\/off toggle<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"4-animate-animals-cats-dogs\" style=\"font-size:24px\"><strong>4. 
Animate Animals (Cats &amp; Dogs)<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python app_animals.py<\/code><\/pre>\n\n\n\n<p>Requires additional X-Pose dependency (Linux\/Windows only):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cd src\/utils\/dependencies\/XPose\/models\/UniPose\/ops\npython setup.py build install<\/code><\/pre>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-live-portrait-ai-works-the-technology-behind-the-animation\"><strong>How LivePortrait AI Works: The Technology Behind the Animation<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"implicit-keypoint-architecture\" style=\"font-size:24px\"><strong>Implicit Keypoint Architecture<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>LivePortrait abandons computationally expensive diffusion models in favor of an end-to-end neural network consisting of:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Appearance Feature Extractor<\/strong>: Captures identity-specific features (facial structure, hair, skin tone)<\/li>\n\n\n\n<li><strong>Motion Extractor<\/strong>: Analyzes driving video to extract expression parameters<\/li>\n\n\n\n<li><strong>Warping Module<\/strong>: Deforms source image based on motion keypoints<\/li>\n\n\n\n<li><strong>SPADE Generator<\/strong>: Synthesizes final high-fidelity frames<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>This architecture enables <strong>deterministic control<\/strong>\u2014you get consistent results without the randomness typical of generative AI.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"why-its-faster-than-competitors\" style=\"font-size:24px\"><strong>Why It&#8217;s Faster Than Competitors<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Traditional portrait animation relies on diffusion models that require hundreds of denoising steps. 
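<\/p>\n\n\n\n<p>To make that gap concrete, here is a quick back-of-the-envelope conversion of the per-frame figures quoted in this article (12.8 ms for LivePortrait, 250\u2013500 ms for diffusion-based methods); it is illustrative arithmetic, not a new benchmark:<\/p>

```python
# Convert the article's claimed per-frame latencies into fps and speedup.
# These numbers are the article's claims, not fresh measurements.
LIVEPORTRAIT_MS = 12.8            # ms/frame on an RTX 4090 (claimed)
DIFFUSION_MS = (250.0, 500.0)     # ms/frame range for diffusion methods (claimed)

def fps(ms_per_frame: float) -> float:
    """Milliseconds per frame -> frames per second."""
    return 1000.0 / ms_per_frame

liveportrait_fps = fps(LIVEPORTRAIT_MS)                     # ~78 fps
speedup = tuple(d / LIVEPORTRAIT_MS for d in DIFFUSION_MS)  # ~20x to ~39x

print(f"{liveportrait_fps:.1f} fps, {speedup[0]:.0f}x-{speedup[1]:.0f}x faster")
```

<p>At roughly 78 frames per second there is headroom well above real-time 30 fps video, which is what makes the live webcam mode practical.<\/p>\n\n\n\n<p>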
LivePortrait&#8217;s keypoint-based approach completes inference in <strong>12.8 milliseconds per frame<\/strong> on an RTX 4090, making real-time applications feasible.<\/p>\n\n\n\n<p><strong>Performance comparison<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>LivePortrait: 12.8ms\/frame (RTX 4090)<\/li>\n\n\n\n<li>Diffusion-based methods: 250-500ms\/frame<\/li>\n\n\n\n<li>Speed advantage: <strong>20-30x faster<\/strong><\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"live-portrait-vs-competitors-what-makes-it-stand-out\"><strong>LivePortrait vs. Competitors: What Makes It Stand Out<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Feature<\/strong><\/td><td><strong>LivePortrait<\/strong><\/td><td><strong>Diffusion Models<\/strong><\/td><td><strong>Traditional 3D Methods<\/strong><\/td><\/tr><tr><td><strong>Speed<\/strong><\/td><td>12.8ms\/frame<\/td><td>250-500ms\/frame<\/td><td>50-100ms\/frame<\/td><\/tr><tr><td><strong>Identity Preservation<\/strong><\/td><td>Excellent<\/td><td>Variable<\/td><td>Excellent<\/td><\/tr><tr><td><strong>Expression Fidelity<\/strong><\/td><td>High<\/td><td>Very High<\/td><td>Moderate<\/td><\/tr><tr><td><strong>Setup Complexity<\/strong><\/td><td>Medium<\/td><td>High<\/td><td>Very High<\/td><\/tr><tr><td><strong>GPU Requirements<\/strong><\/td><td>RTX 3060+<\/td><td>RTX 3090+<\/td><td>Varies<\/td><\/tr><tr><td><strong>Open Source<\/strong><\/td><td>\u2705 Yes<\/td><td>Some<\/td><td>Rare<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>Key differentiator<\/strong>: LivePortrait achieves near-perfect balance between controllability (deterministic outputs) and naturalness (lifelike animations) without requiring expensive hardware or cloud processing.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"common-live-portrait-issues-and-solutions\"><strong>Common LivePortrait Issues and 
Solutions<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"issue-1-cuda-out-of-memory-error\" style=\"font-size:24px\"><strong>Issue 1: &#8220;CUDA out of memory&#8221; Error<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>Solution<\/strong>: Reduce batch size or process shorter video segments:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python inference.py -s source.jpg -d driving.mp4 --max_frame_batch 16<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"issue-2-poor-stitching-quality-visible-face-boundaries\" style=\"font-size:24px\"><strong>Issue 2: Poor Stitching Quality (Visible Face Boundaries)<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>Solution<\/strong>: Enable retargeting and adjust stitching strength:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python inference.py -s source.jpg -d driving.mp4 --flag_stitching --stitching_strength 0.8<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"issue-3-jittery-animation-in-long-videos\" style=\"font-size:24px\"><strong>Issue 3: Jittery Animation in Long Videos<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>Solution<\/strong>: Apply temporal smoothing:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python inference.py -s source.jpg -d driving.mp4 --smooth_strength 0.5<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"issue-4-driving-video-auto-crop-misaligns-face\" style=\"font-size:24px\"><strong>Issue 4: Driving Video Auto-Crop Misaligns Face<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>Solution<\/strong>: Manually adjust crop parameters:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python inference.py -s source.jpg -d driving.mp4 --flag_crop_driving_video --scale_crop_driving_video 2.5 --vy_ratio_crop_driving_video -0.1<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"issue-5-mac-os-performance-too-slow\" style=\"font-size:24px\"><strong>Issue 5: macOS Performance Too Slow<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>Explanation<\/strong>: Apple Silicon runs 20x slower than NVIDIA GPUs due to MPS backend limitations. 
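<\/p>\n\n\n\n<p>The practical fallback order is simple: CUDA if available, then MPS, then CPU. The helper below is our own illustration of that logic; the function and its per-frame estimates are not part of LivePortrait, and the multipliers (roughly 20x on Apple Silicon, 50\u2013100x on CPU) come from this article&#8217;s claims:<\/p>

```python
# Illustrative device-priority helper (not part of LivePortrait's API).
# Latency estimates scale the article's claimed 12.8 ms RTX 4090 figure
# by its claimed slowdowns: ~20x on Apple Silicon, 50-100x on CPU.
def pick_device(cuda_available: bool, mps_available: bool) -> tuple:
    """Return (device_name, estimated_ms_per_frame)."""
    base_ms = 12.8
    if cuda_available:
        return ("cuda", base_ms)
    if mps_available:
        return ("mps", base_ms * 20)   # ~256 ms/frame: usable, but far from real time
    return ("cpu", base_ms * 50)       # low end of the claimed 50-100x range

print(pick_device(cuda_available=False, mps_available=True))  # ('mps', 256.0)
```

<p>In a real script you would feed torch.cuda.is_available() and torch.backends.mps.is_available() into such a helper; on the mps and cpu paths, batch the work offline or fall back to the hosted Hugging Face Space.<\/p>\n\n\n\n<p>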
Consider using:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud GPU services (Google Colab, RunPod)<\/li>\n\n\n\n<li>Hugging Face Space for quick tests<\/li>\n\n\n\n<li>External GPU enclosures (eGPU) if on Intel Mac<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"advanced-use-cases-for-live-portrait-ai\"><strong>Advanced Use Cases for LivePortrait AI<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"1-virtual-live-streaming\" style=\"font-size:24px\"><strong>1. Virtual Live Streaming<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Combine LivePortrait with OBS Studio:<\/p>\n\n\n\n<p>1. Run python app.py with webcam input<\/p>\n\n\n\n<p>2. Capture Gradio output using OBS virtual camera<\/p>\n\n\n\n<p>3. Stream animated avatar to Twitch\/YouTube<\/p>\n\n\n\n<p><strong>Latency<\/strong>: ~100-150ms total (acceptable for most streaming scenarios)<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"2-video-dubbing-and-lip-sync\" style=\"font-size:24px\"><strong>2. Video Dubbing and Lip Sync<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Pair LivePortrait lip retargeting with audio-driven tools:<\/p>\n\n\n\n<p>1. Generate speech with TTS (see Bonus section)<\/p>\n\n\n\n<p>2. Create driving video from audio using tools like SadTalker<\/p>\n\n\n\n<p>3. Apply LivePortrait with --lip_retargeting_ratio for precise sync<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"3-digital-resurrection-projects\" style=\"font-size:24px\"><strong>3. 
Digital Resurrection Projects<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Animate historical photographs:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use high-resolution scans (600+ DPI)<\/li>\n\n\n\n<li>Apply conservative expression templates<\/li>\n\n\n\n<li>Enable stitching to preserve photo grain\/texture<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"bonus-gaga-ai-video-generator-for-complete-avatar-creation\"><strong>Bonus: Gaga AI Video Generator for Complete Avatar Creation<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<p>While LivePortrait excels at portrait animation, creating a fully functional AI avatar requires additional components. <a href=\"https:\/\/gaga.art\/app\"><strong>Gaga AI<\/strong><\/a> provides an integrated solution combining:<\/p>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"623\" src=\"https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/gaga-ai-video-generation-1024x623.webp\" alt=\"gaga ai video generation\" class=\"wp-image-1426\" srcset=\"https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/gaga-ai-video-generation-1024x623.webp 1024w, https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/gaga-ai-video-generation-300x183.webp 300w, https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/gaga-ai-video-generation-768x467.webp 768w, https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/gaga-ai-video-generation-1536x935.webp 1536w, https:\/\/gaga.art\/blog\/wp-content\/uploads\/2026\/02\/gaga-ai-video-generation-2048x1246.webp 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"image-to-video-ai-pipeline\" style=\"font-size:24px\"><strong>Image to Video AI Pipeline<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>1. <strong>Static Portrait Generation<\/strong>: Create custom avatars with text-to-image AI<\/p>\n\n\n\n<p>2. 
<strong>LivePortrait Animation<\/strong>: Bring portraits to life with facial expressions<\/p>\n\n\n\n<p>3. <a href=\"https:\/\/gaga.art\/blog\/ai-voice-cloning\/\"><strong>Voice Cloning<\/strong><\/a>: Generate personalized voice profiles from 10-second audio samples<\/p>\n\n\n\n<p>4. <strong>Text-to-Speech (TTS)<\/strong>: Convert scripts to natural speech with cloned voices<\/p>\n\n\n\n<p>5. <strong>Lip Sync<\/strong>: Automatically align mouth movements to audio<\/p>\n\n\n\n<ol class=\"wp-block-list\"><\/ol>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"why-combine-live-portrait-with-gaga-ai\" style=\"font-size:24px\"><strong>Why Combine LivePortrait with Gaga AI<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>LivePortrait alone<\/strong> provides visual animation. <a href=\"https:\/\/gaga.art\/en\/\"><strong>Gaga AI<\/strong><\/a> adds audio-visual synchronization for:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Multilingual content creation<\/li>\n\n\n\n<li>Personalized video messages<\/li>\n\n\n\n<li>Automated video narration<\/li>\n\n\n\n<li>Virtual assistant interfaces<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-a89b3969 wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"http:\/\/gaga.art\/app\" target=\"_blank\" rel=\"noreferrer noopener\">Generate Video Free<\/a><\/div>\n\n\n\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/gaga.art\/\">Learn Gaga AI<\/a><\/div>\n<\/div>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>Workflow example<\/strong>:<\/p>\n\n\n\n<p>[Your Photo] \u2192 LivePortrait \u2192 [Animated Video] \u2192 Gaga AI TTS + Voice Clone \u2192 [Talking Avatar]<\/p>\n\n\n\n<p>This combination enables full-stack avatar generation without switching between multiple platforms.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" 
id=\"conclusion-why-live-portrait-matters-in-2025\"><strong>Conclusion: Why LivePortrait Matters in 2025<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<p>LivePortrait represents a paradigm shift from cloud-dependent, expensive portrait animation to <strong>accessible, real-time, local processing<\/strong>. With 17,000+ GitHub stars and integration into major platforms, it has proven its value beyond academic research.<\/p>\n\n\n\n<p>Whether you&#8217;re a content creator seeking quick avatar generation, a developer building virtual assistant interfaces, or a researcher exploring portrait animation techniques, LivePortrait offers production-ready tools without the computational overhead of diffusion models.<\/p>\n\n\n\n<p><strong>Start your first animation today<\/strong>:<\/p>\n\n\n\n<p>1. Try the<a href=\"https:\/\/huggingface.co\/spaces\/KlingTeam\/LivePortrait\" rel=\"nofollow noopener\" target=\"_blank\"> Hugging Face Space<\/a> (no installation)<\/p>\n\n\n\n<p>2. Install locally following the guide above<\/p>\n\n\n\n<p>3. Integrate into ComfyUI workflows for advanced control<\/p>\n\n\n\n<ol class=\"wp-block-list\"><\/ol>\n\n\n\n<p>The future of portrait animation is open-source, efficient, and accessible\u2014LivePortrait is leading that future.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"frequently-asked-questions-faq\"><strong>Frequently Asked Questions (FAQ)<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"what-is-live-portrait-used-for\" style=\"font-size:24px\"><strong>What is LivePortrait used for?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>LivePortrait animates static portrait photos by transferring facial expressions from driving videos. 
Primary use cases include virtual avatars, digital human creation, video dubbing, social media content, and live streaming enhancements.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"is-live-portrait-free-to-use\" style=\"font-size:24px\"><strong>Is LivePortrait free to use?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Yes, LivePortrait&#8217;s code is free and open-source under the MIT license, and attribution to Kuaishou\/KwaiVGI is appreciated. Before commercial use, check the licenses of bundled components as well; for example, InsightFace models are distributed for non-commercial research.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"can-live-portrait-work-with-videos-as-source-input\" style=\"font-size:24px\"><strong>Can LivePortrait work with videos as source input?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Yes, LivePortrait supports video-to-video (v2v) portrait editing. Use the -s flag with a video file to reanimate existing video content with new expressions while preserving original identity features.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"does-live-portrait-require-an-internet-connection\" style=\"font-size:24px\"><strong>Does LivePortrait require an internet connection?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>No, once installed locally with pretrained weights downloaded, LivePortrait runs entirely offline. Internet is only needed for initial setup and model downloads.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"whats-the-difference-between-live-portrait-and-deep-fake-tools\" style=\"font-size:24px\"><strong>What&#8217;s the difference between LivePortrait and DeepFake tools?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>LivePortrait focuses on expression transfer while preserving source identity, whereas DeepFake swaps entire faces. 
LivePortrait does not replace identities; it animates existing portraits with new expressions.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"can-i-use-live-portrait-on-cpu-without-a-gpu\" style=\"font-size:24px\"><strong>Can I use LivePortrait on CPU without a GPU?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Theoretically yes, but performance will be 50-100x slower (several seconds per frame). A CUDA-compatible NVIDIA GPU is strongly recommended for practical use.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"how-do-i-improve-animation-quality-on-challenging-photos\" style=\"font-size:24px\"><strong>How do I improve animation quality on challenging photos?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Ensure source photos meet these criteria:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Front-facing pose with visible facial features<\/li>\n\n\n\n<li>Good lighting without harsh shadows<\/li>\n\n\n\n<li>Minimal occlusions (no hands covering the face, no large accessories)<\/li>\n\n\n\n<li>High resolution (512&#215;512 pixels minimum)<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>Use the --flag_do_whitening option to normalize color spaces between the source and driving inputs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"does-live-portrait-work-with-non-human-subjects\" style=\"font-size:24px\"><strong>Does LivePortrait work with non-human subjects?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>Yes, the Animals mode supports cats and dogs. 
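The invocation mirrors the human pipeline but goes through a separate entry script; as a sketch (the inference_animals.py script name and sample paths are taken from the upstream README and worth verifying against your checkout):<\/p>\n\n\n\n

```python
# Sketch: animal-mode command, mirroring the human pipeline.
# "inference_animals.py" and the sample paths come from the upstream
# README; treat them as assumptions to verify against your checkout.
animal_cmd = [
    "python", "inference_animals.py",
    "-s", "my_cat.jpg",                      # source: a pet portrait photo
    "-d", "assets/examples/driving/d0.mp4",  # driving: expression video
]
print(" ".join(animal_cmd))
```

\n\n\n\n<p>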
Generic object animation is not currently supported; the model is trained specifically for portraits with recognizable facial landmarks.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"what-file-formats-does-live-portrait-support\" style=\"font-size:24px\"><strong>What file formats does LivePortrait support?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Input<\/strong>: JPG, PNG (images); MP4, AVI (videos)<\/li>\n\n\n\n<li><strong>Output<\/strong>: MP4 video files<\/li>\n\n\n\n<li><strong>Templates<\/strong>: PKL files for reusable expression data<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"where-can-i-find-driving-videos-for-live-portrait\" style=\"font-size:24px\"><strong>Where can I find driving videos for LivePortrait?<\/strong><\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>The GitHub repository includes sample driving videos in assets\/examples\/driving\/. You can also record your own using any camera; just ensure the first frame shows a frontal neutral face.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>LivePortrait AI transforms static photos into dynamic videos with facial animations. Free open-source tool from Kuaishou. 
Works with ComfyUI &amp; Hugging Face.<\/p>\n","protected":false},"author":2,"featured_media":1482,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1479","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-audio"],"_links":{"self":[{"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/posts\/1479","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/comments?post=1479"}],"version-history":[{"count":1,"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/posts\/1479\/revisions"}],"predecessor-version":[{"id":1483,"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/posts\/1479\/revisions\/1483"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/media\/1482"}],"wp:attachment":[{"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/media?parent=1479"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/categories?post=1479"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/gaga.art\/blog\/wp-json\/wp\/v2\/tags?post=1479"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}