SadTalker and NVIDIA GeForce RTX 5060 Ti

Hello,

Can SadTalker work with the NVIDIA GeForce RTX 5060 Ti with 8GB VRAM?

Thank you.


SadTalker will likely run with 8 GB VRAM on low output settings. It’s mostly a speed issue once it boots up…

However, as with many older Python projects, maintenance of the surrounding libraries has stopped, so resolving dependencies might be a pain. This is especially true on 50x0-series GPUs, where only newer PyTorch builds are supported, so expect some hassle.

In such cases, feeding the error message as-is to a generative AI can be effective. Alternatively, isolating an older Python version in a virtual environment is often the quickest solution.


SadTalker runs on a GeForce RTX 5060 Ti with 8 GB VRAM if you use the 256-px face model and current CUDA-12-series PyTorch. The 512-px model can work too, but start at 256 for stability and headroom. (GitHub)

What this means in practice

  • GPU support: RTX 5060 Ti is an official 50-series card (8 GB or 16 GB). Any recent NVIDIA driver + CUDA-12-compatible PyTorch works. (NVIDIA)
  • VRAM headroom: Community runs show 6 GB often OOMs. 8 GB is a workable floor when you keep resolution and extras modest. If you use the A1111/Forge extension, headroom is tighter than the standalone script. (GitHub)
  • Model size switch: SadTalker exposes --size 256|512. Use 256 first to minimize memory; 512 is heavier. The official Space UI also defaults to 256. (GitHub)

Beginner-safe setup for RTX 5060 Ti (8 GB)

Use a clean env, install PyTorch first, then SadTalker.

# 0) New environment
conda create -n sadtalker python=3.10 -y
conda activate sadtalker

# 1) Install PyTorch that matches your driver (CUDA 12.x)
# Use the official selector and copy the command it shows for your OS/CUDA.
# https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128  # example: CUDA 12.8

# 2) Get SadTalker
git clone https://github.com/OpenTalker/SadTalker.git  # repo: https://github.com/OpenTalker/SadTalker
cd SadTalker
pip install -r requirements.txt
bash scripts/download_models.sh

# 3) First run: lowest VRAM footprint
# --size 256|512 per the release notes: https://github.com/OpenTalker/SadTalker/releases
# --preprocess crop keeps the face crop tight
python inference.py \
  --source_image face.png \
  --driven_audio voice.wav \
  --size 256 \
  --preprocess crop \
  --result_dir results

Why this works: PyTorch’s current wheels target CUDA-12.x (e.g., cu126/cu128), which covers new-generation GPUs. SadTalker’s 256-px face model is explicitly supported and lighter. (PyTorch)
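To confirm the install actually pulled a CUDA-12 build, you can inspect the wheel’s local version tag. A minimal sketch; `is_cuda12_wheel` is my own helper, not part of torch:

```python
def is_cuda12_wheel(torch_version: str) -> bool:
    """True if a torch version string such as '2.4.1+cu124' targets a CUDA 12.x wheel."""
    _, _, local = torch_version.partition("+")  # local tag after '+', e.g. 'cu124'
    return local.startswith("cu12")

# Compare against the output of: python -c "import torch; print(torch.__version__)"
print(is_cuda12_wheel("2.4.1+cu124"))  # True
print(is_cuda12_wheel("2.0.1+cu117"))  # False
print(is_cuda12_wheel("2.4.1"))        # False (CPU-only wheel has no local tag)
```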

Low-VRAM operating tips

  • Keep --size 256 and batch size = 1. Try 512 only after 256 is stable. (GitHub)

  • Turn enhancers (e.g., GFPGAN) off during generation; run them afterward if needed.

  • If you use the AUTOMATIC1111/Forge extension, expect more VRAM use than the standalone CLI; close other heavy extensions. (GitHub)

  • If you hit borderline OOM, set the PyTorch allocator hint before launching:

    • Linux/macOS: export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
    • Windows (cmd): set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
      This reduces fragmentation that can trigger OOMs. (General PyTorch guidance.) (PyTorch)
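If you drive SadTalker from your own Python script rather than the shell, the same hint can be set in-process, as long as it happens before torch first touches CUDA. A minimal sketch:

```python
import os

# Must be set before the first CUDA allocation; setdefault leaves any
# value already exported in the shell untouched.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```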

Common dependency pitfalls and quick fixes

  • NumPy conflicts: If the run fails after pip install -r requirements.txt, pin numpy<2 and retry. This resolves many SadTalker reports where libs were built against NumPy 1.x. (GitHub)
  • Gradio/Space drift: Old Spaces and tutorials used older Gradio and Torch pins; prefer current PyTorch and keep the Space/Gradio versions consistent. The official Space shows 256/512 and preprocess options you can mirror locally. (Hugging Face)

If you don’t want to fight drivers

  • Hugging Face Spaces: Your local GPU is irrelevant in the cloud. The free CPU Basic tier runs but is slow; pick a GPU Space like T4 16 GB or L4 24 GB for smooth 512-px runs. Hardware tables and pricing are public. (Hugging Face)

Context you asked for

  • SadTalker capabilities and size switch are documented in the repo’s releases and reflected in the Gradio Space UI. (GitHub)
  • The RTX 5060 Ti exists in 8 GB and 16 GB variants and launched in April 2025; the family page is live. (NVIDIA)
  • Community VRAM experience: 6 GB often OOMs; 8 GB is workable if you keep settings conservative. (GitHub)

Curated starter materials

Core docs and knobs

  • SadTalker GitHub (README, CLI flags) and Releases (shows --size 256|512, preprocess modes). (GitHub)
  • Official HF Space source (shows 256 default, preprocess radio). Useful to mirror UI choices locally. (Hugging Face)
  • PyTorch “Get Started” installer (pick CUDA-12.x command for your OS). (PyTorch)

Cloud options

  • Spaces hardware tables and pricing if you prefer a managed GPU. (Hugging Face)

Troubleshooting

  • NumPy version breakage threads for SadTalker and general NumPy-2 migration notes. (GitHub)

Bottom line: With an RTX 5060 Ti 8 GB you can run SadTalker today. Use PyTorch CUDA-12.x, start at --size 256, keep batch=1 and enhancers off, then scale up only if memory allows. (PyTorch)


Hello,

Can I use SadTalker on ComfyUI?


It seems you barely can…?

Hello,

Do I need to manually create the custom_nodes\Comfyui-SadTalker\SadTalker\checkpoints\ and (comfyui root)\gfpgan\weights\ directories?

How do I install the program?


Probably just install it like this…? I’m not a ComfyUI user, so I don’t really know…


Here’s a setup for Comfyui-SadTalker in ComfyUI.

  1. Install or open ComfyUI

    • If ComfyUI isn’t installed yet, follow the official manual install and confirm you can launch python main.py. (ComfyUI)
  2. Install the Comfyui-SadTalker node

    • Method A: ComfyUI-Manager (recommended): In ComfyUI click Manager → Install Custom Nodes, search Comfyui-SadTalker, click Install, then Restart. (GitHub)

    • Method B: Manual clone:

      # Windows PowerShell or macOS/Linux terminal
      cd /path/to/ComfyUI/custom_nodes
      git clone https://github.com/haomole/Comfyui-SadTalker.git
      

      You should now have ComfyUI/custom_nodes/Comfyui-SadTalker/. (GitHub)

  3. Add the four SadTalker model files

    • Create this folder if missing:
      ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints/

    • Put all four files in it:

      SadTalker_V0.0.2_256.safetensors
      SadTalker_V0.0.2_512.safetensors
      mapping_00109-model.pth.tar
      mapping_00229-model.pth.tar
      
    • These names and this path are listed in the node README. Download them from the SadTalker Releases page. (GitHub)

  4. Add the GFPGAN/facexlib weights

    • Create this folder if missing:
      ComfyUI/gfpgan/weights/

    • Put these files inside:

      GFPGANv1.4.pth
      alignment_WFLW_4HG.pth
      detection_Resnet50_Final.pth
      parsing_parsenet.pth
      
    • facexlib publishes the alignment/detection/parsing weights; GFPGAN v1.4 is provided by TencentARC. (GitHub)

  5. Install FFmpeg and verify PATH

    • Install FFmpeg for your OS and ensure the ffmpeg executable is on PATH.
    • Verify with: ffmpeg -version. The node’s README points to PATH errors if FFmpeg is missing. (ffmpeg.org)
  6. Restart ComfyUI

    • Fully quit and relaunch ComfyUI so it loads the new node and weights.
    • If you used ComfyUI-Manager, click Restart when prompted. (ComfyUI)
  7. Load the sample workflow and test

    • Open the bundled sample from Comfyui-SadTalker/workflow/.
    • Drop in a face image and a short audio clip; queue the graph.
    • If you see “audio … undefined,” update ComfyUI and refresh the browser. (GitHub)
  8. Low-VRAM defaults for an 8 GB GPU

    • Use the 256-px model first; try 512 later.
    • Keep batch size 1 and disable enhancers during generation.
    • The 256/512 option is part of SadTalker’s official model set. (GitHub)
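The FFmpeg check in step 5 can also be done from Python, which rules out PATH differences between your login shell and the process that launches ComfyUI. A minimal sketch:

```python
import shutil

# shutil.which returns the resolved executable path, or None if not on PATH
ffmpeg = shutil.which("ffmpeg")
print(ffmpeg if ffmpeg else "ffmpeg not found on PATH")
```

Run it with the same Python interpreter that starts ComfyUI; if it prints a path, the video nodes should be able to find FFmpeg too.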

Where the files come from (for double-checking):

  • Node README: exact folders and filenames for checkpoints and weights. (GitHub)
  • SadTalker Releases: 256/512 face models and the two MappingNet files. (GitHub)
  • facexlib + GFPGAN: official weight downloads. (GitHub)

Hello,

I’m really thankful for your help.

When I opened the workflow, I got the following error:


Hmm… The necessary prerequisite programs or data are missing or cannot be found (especially common in Windows environments), the versions are incompatible (too new or too old), or they are not installed correctly…


You opened a workflow that references nodes your ComfyUI does not have loaded. The pop-up lists them: SadTalker, ShowAudio, LoadRefVideo, ShowVideo, ShowText. In ComfyUI, a “missing node” means the custom node that defines that class is not installed or failed to import. Fix = install the right custom nodes, their weights and Python deps, ensure FFmpeg is available, then restart.

Why this happens

  • Workflows are JSON that name exact node types. If ComfyUI can’t import a node class, it shows this dialog. Causes: the node repo isn’t installed, it failed to import due to missing Python deps, or your ComfyUI build is too old for that node. (ComfyUI Official Documentation)
  • The Comfyui-SadTalker node’s README explicitly warns about two common symptoms when it loads but parts fail: “audio … undefined → update ComfyUI” and “mp4 save error → FFmpeg not on PATH.” It also shows the exact folders for the required model and GFPGAN/facexlib weights. (GitHub)
  • Video I/O nodes like Load Video / Video Combine live in ComfyUI-VideoHelperSuite and require FFmpeg for audio mux. (GitHub)
  • ShowText comes from ComfyUI-Custom-Scripts; many workflows rely on it for UI text. (GitHub)

Step-by-step fix (beginner-safe)

  1. Update ComfyUI
    Update to a recent build, then restart. Older builds trigger the “audio undefined” issue with the SadTalker node. (GitHub)

  2. Use ComfyUI-Manager to install what the workflow needs
    Open Manager → Install Custom Nodes, search and install:

    • Comfyui-SadTalker
    • ComfyUI-VideoHelperSuite (VHS)
    • ComfyUI-Custom-Scripts (for ShowText)
      Then click Restart in Manager. If any nodes are still missing, use Manager → Install Missing Custom Nodes to scan the open workflow and auto-suggest installers. (ComfyUI Official Documentation)
  3. Put the SadTalker model files in the exact folder it expects
    Create the folder if missing:
    ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints/
    Place all four files:

    SadTalker_V0.0.2_256.safetensors
    SadTalker_V0.0.2_512.safetensors
    mapping_00109-model.pth.tar
    mapping_00229-model.pth.tar
    

    These names and the path come from the node’s README. The official SadTalker Releases page hosts these exact assets. (GitHub)

  4. Put the GFPGAN/facexlib weights where ComfyUI looks for them
    Create: ComfyUI/gfpgan/weights/
    Add:

    GFPGANv1.4.pth
    alignment_WFLW_4HG.pth
    detection_Resnet50_Final.pth
    parsing_parsenet.pth
    

    The SadTalker README points to this folder, and facexlib/GFPGAN publish the canonical weights. (GitHub)

  5. Install Python dependencies for the SadTalker node
    In the same Python env ComfyUI uses:

    cd /path/to/ComfyUI/custom_nodes/Comfyui-SadTalker
    pip install -r requirements.txt
    

    This resolves import errors that keep the node from loading. (The repo ships the requirements file.) (GitHub)

  6. Install FFmpeg and verify PATH
    FFmpeg is required by VHS (VideoHelperSuite) to mux audio when combining frames to MP4. After installation, verify with: ffmpeg -version. If missing, video save nodes will fail. (GitHub)

  7. Restart ComfyUI fully
    Close the server, relaunch, and reload the workflow. If a node type still shows missing, run Manager → Install Missing Custom Nodes again. (ComfyUI Official Documentation)

If your workflow mentions LoadRefVideo

Some workflows use differently named video-loader nodes. If LoadRefVideo stays missing, replace it with VHS → Load Video or Load Video (Path) from VideoHelperSuite; they provide equivalent video-to-frames I/O in most graphs. (GitHub)

Quick verification

  • SadTalker node appears in the node list. No startup import errors in the console.
  • Weights present at the exact paths above.
  • FFmpeg OK: ffmpeg -version prints a version and VHS nodes can save video with audio. (GitHub)

References

  • Comfyui-SadTalker README: required folders, model names, FFmpeg/PATH and “audio undefined → update ComfyUI.” (GitHub)
  • SadTalker Releases: SadTalker_V0.0.2_256/512.safetensors, mapping_00109/00229 assets. (GitHub)
  • ComfyUI-VideoHelperSuite README: video I/O nodes and FFmpeg requirement. (GitHub)
  • ComfyUI Manager docs and issue thread: how to install nodes, “Install Missing Custom Nodes,” and restart. (ComfyUI Official Documentation)
  • ShowText node in ComfyUI-Custom-Scripts. (GitHub)

Thanks again.

Unfortunately, there is no such thing as Comfyui-SadTalker:

About the files and directories:

$ pwd
/home/localai/comfy/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints
localai@LocalAI:~/comfy/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints$ ls
mapping_00109-model.pth.tar mapping_00229-model.pth.tar SadTalker_V0.0.2_256.safetensors SadTalker_V0.0.2_512.safetensors

And:

$ pwd
/home/localai/comfy/ComfyUI/gfpgan/weights
localai@LocalAI:~/comfy/ComfyUI/gfpgan/weights$ ls
alignment_WFLW_4HG.pth detection_Resnet50_Final.pth GFPGANv1.4.pth parsing_parsenet.pth

The ffmpeg version is:

$ ffmpeg -version
ffmpeg version 7.1.2-0+deb13u1 Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 14 (Debian 14.2.0-19)

The problem was not solved.


Hmm…


Your weights and FFmpeg are fine. The node pack itself isn’t loading, and your workflow also references nodes that live in other packs. Fix it in this order.

What’s happening (context)

  • Manager shows 0 results because the registry view is channel-based and not every repo is indexed. Use Install via Git URL or clone manually. Manager docs explain the DB modes and “Try Install.” (GitHub)
  • Comfyui-SadTalker targets a specific ComfyUI build (“ComfyUI-aki v1.3”). The README lists the exact checkpoint locations and common errors (audio undefined → update ComfyUI; MP4 error → add FFmpeg to PATH). Your folders match those paths, so the remaining blocker is Python deps or API drift. (GitHub)
  • Your workflow also expects video I/O/preview and text display nodes that are not in SadTalker: they come from VideoHelperSuite and a Show Text extension. (GitHub)

Do this (Linux, matches your paths)

  1. Start ComfyUI from a terminal and read import errors
cd ~/comfy/ComfyUI
python3 main.py

If a custom node fails, ComfyUI prints a ModuleNotFoundError or similar during “loading custom nodes…”. This is the fastest way to see what dependency is missing. (GitHub)

  2. Make sure the SadTalker node is actually installed and up to date
cd ~/comfy/ComfyUI/custom_nodes
# if you already cloned it, pull; otherwise clone it
[ -d Comfyui-SadTalker ] && (cd Comfyui-SadTalker && git pull) || git clone https://github.com/haomole/Comfyui-SadTalker.git

The repo is the correct one. (GitHub)

  3. Install SadTalker node dependencies in the same Python env you run ComfyUI with
cd ~/comfy/ComfyUI/custom_nodes/Comfyui-SadTalker
python3 -m pip install -r requirements.txt

This repo ships a requirements.txt; installing it fixes most import failures. (GitHub)

  4. Install the two supporting packs your workflow needs
cd ~/comfy/ComfyUI/custom_nodes
git clone https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite.git
git clone https://github.com/fairy-root/ComfyUI-Show-Text.git   # or: pythongosssss/ComfyUI-Custom-Scripts
# VHS deps:
cd ComfyUI-VideoHelperSuite
python3 -m pip install -r requirements.txt
  • VideoHelperSuite (VHS) provides Load Video, Load Audio, Video Combine, and animated previews. It calls ffmpeg for previews and for combining frames + audio. (GitHub)
  • Show Text provides the ShowText node used by many workflows. (GitHub)
  5. Keep your weights where they already are
  • SadTalker checkpoints:
    ~/comfy/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints/SadTalker_V0.0.2_256.safetensors, SadTalker_V0.0.2_512.safetensors, mapping_00109-model.pth.tar, mapping_00229-model.pth.tar. (GitHub)
  • GFPGAN/facexlib weights:
    ~/comfy/ComfyUI/gfpgan/weights/GFPGANv1.4.pth, alignment_WFLW_4HG.pth, detection_Resnet50_Final.pth, parsing_parsenet.pth. (GitHub)
  6. Confirm FFmpeg from the same shell that launches ComfyUI
ffmpeg -version

VHS passes ffmpeg arguments for video formats and advanced previews. Your version 7.1 is fine. (GitHub)

  7. Restart ComfyUI, then reload the workflow
  • If SadTalker still doesn’t appear in the node list, read the terminal output and pip install any module it complains about.
  • If the workflow still shows LoadRefVideo missing, replace it with VHS → Load Video (Upload/Path). VHS covers the same role for most graphs. (GitHub)
  8. Optional: use Manager effectively
  • Manager has DB modes; switch to Channel (remote) and Fetch Updates if lists look empty. Use Try Install or Install via Git URL for repos not in the index. (GitHub)

Why this works

  • The SadTalker README confirms the exact folder layout, the four SadTalker files, and the four GFPGAN/facexlib weights you already placed, plus the two common runtime errors. That leaves node import as the blocker, which steps 2–4 address. (GitHub)
  • VHS supplies the video/audio nodes and uses ffmpeg for combining frames and audio and for previews, matching the missing LoadRefVideo/ShowVideo/ShowAudio functionality. (GitHub)
  • Manager’s registry can be empty by channel; direct Git install avoids that failure mode, and its README documents the channel system and “Try Install.” (GitHub)

Thank you so much.

I restarted ComfyUI and got the following error:

…
0.0 seconds (IMPORT FAILED): /home/localai/comfy/ComfyUI/custom_nodes/comfyui-minimal-workflow-image
0.0 seconds: /home/localai/comfy/ComfyUI/custom_nodes/comfyui-text-randomizer
0.0 seconds: /home/localai/comfy/ComfyUI/custom_nodes/comfyui-show-text
0.0 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-GGUF
0.0 seconds (IMPORT FAILED): /home/localai/comfy/ComfyUI/custom_nodes/Comfyui-SadTalker
0.0 seconds (IMPORT FAILED): /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-Vaja-Ai4thai
0.0 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
0.0 seconds: /home/localai/comfy/ComfyUI/custom_nodes/comfyui_ultimatesdupscale
0.0 seconds: /home/localai/comfy/ComfyUI/custom_nodes/comfyui-impact-subpack
0.0 seconds: /home/localai/comfy/ComfyUI/custom_nodes/comfyui-ollama-describer
0.1 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-Manager
0.1 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-Impact-Pack
0.1 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-Copilot
0.2 seconds: /home/localai/comfy/ComfyUI/custom_nodes/comfyui-videohelpersuite
0.2 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI_fsdymy
0.2 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-Easy-Use
0.4 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-Text
0.5 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper
0.6 seconds: /home/localai/comfy/ComfyUI/custom_nodes/ComfyUI-SwissArmyKnife

Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server

To see the GUI go to: http://172.21.50.69:8188

I tried to install Comfyui-SadTalker via the Install via Git URL option but I got the following error:

This action is not allowed with this security level configuration.

And about the dependencies:

$ python3 -m pip install -r requirements.txt
Collecting numpy==1.23.4 (from -r requirements.txt (line 1))
Using cached numpy-1.23.4.tar.gz (10.7 MB)
Installing build dependencies … done
Getting requirements to build wheel … error
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [32 lines of output]
Traceback (most recent call last):
File "/home/localai/ComfyUI/comfy-env/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 389, in <module>
main()
~~~~^^
File "/home/localai/ComfyUI/comfy-env/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 373, in main
json_out["return_val"] = hook(**hook_input["kwargs"])
~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/localai/ComfyUI/comfy-env/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 137, in get_requires_for_build_wheel
backend = _build_backend()
File "/home/localai/ComfyUI/comfy-env/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 70, in _build_backend
obj = import_module(mod_path)
File "/usr/lib/python3.13/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 1387, in _gcd_import
File "", line 1360, in _find_and_load
File "", line 1310, in _find_and_load_unlocked
File "", line 488, in _call_with_frames_removed
File "", line 1387, in _gcd_import
File "", line 1360, in _find_and_load
File "", line 1331, in _find_and_load_unlocked
File "", line 935, in _load_unlocked
File "", line 1026, in exec_module
File "", line 488, in _call_with_frames_removed
File "/tmp/pip-build-env-4se_n3yz/overlay/lib/python3.13/site-packages/setuptools/__init__.py", line 16, in <module>
import setuptools.version
File "/tmp/pip-build-env-4se_n3yz/overlay/lib/python3.13/site-packages/setuptools/version.py", line 1, in <module>
import pkg_resources
File "/tmp/pip-build-env-4se_n3yz/overlay/lib/python3.13/site-packages/pkg_resources/__init__.py", line 2172, in <module>
register_finder(pkgutil.ImpImporter, find_on_path)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed to build 'numpy' when getting requirements to build wheel

As of today, Python 3.13 is a landmine when using generative AI-related software. Library compatibility is still poor. This isn’t limited to ComfyUI… Python 3.10-3.12 is recommended.

While newer Python versions seem to offer better execution speed, it’s meaningless if it doesn’t work.


Root cause: your ComfyUI runtime is Python 3.13, but the Comfyui-SadTalker node pins numpy==1.23.4. NumPy 1.23.x has wheels for Python ≤3.11, not for 3.12/3.13, so pip tries to build from source and crashes because Python ≥3.12 removed pkgutil.ImpImporter which old build tooling still touches. Result: “IMPORT FAILED” for Comfyui-SadTalker and a numpy build error. (PyPI)
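You can confirm the removal in any interpreter; on Python 3.12+ the attribute is simply gone, which is exactly what the traceback above trips over:

```python
import pkgutil
import sys

# True on Python <= 3.11; False on 3.12+, where pkgutil.ImpImporter was removed
has_imp_importer = hasattr(pkgutil, "ImpImporter")
print(sys.version_info[:2], has_imp_importer)
```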

Secondary blocker: ComfyUI-Manager refused “Install via Git URL” due to its security level. You must lower it temporarily to install or fix nodes. (GitHub)

Also note: the Comfyui-SadTalker README says it was built against an older ComfyUI build and may not be adapted to “new versions,” so running it on modern ComfyUI requires a compatible Python and deps. (GitHub)


Fix in 3 steps

1) Switch ComfyUI to Python 3.11 (new venv)

Do not reuse the 3.13 env.

Debian/Ubuntu approach

# install Python 3.11 and venv (if not present)
sudo apt-get update
sudo apt-get install -y python3.11 python3.11-venv

# create a fresh env for ComfyUI
cd ~/comfy/ComfyUI
python3.11 -m venv ~/comfy/venvs/comfy311
source ~/comfy/venvs/comfy311/bin/activate

# base ComfyUI deps
python -m pip install -U pip setuptools wheel
python -m pip install -r requirements.txt

If apt can’t give you 3.11 use pyenv quickly:

curl https://pyenv.run | bash
export PATH="$HOME/.pyenv/bin:$PATH"; eval "$(pyenv init -)"; eval "$(pyenv virtualenv-init -)"
pyenv install 3.11.9
pyenv virtualenv 3.11.9 comfy311
pyenv activate comfy311
cd ~/comfy/ComfyUI
python -m pip install -U pip setuptools wheel
python -m pip install -r requirements.txt

Why 3.11: NumPy 1.23.4 has prebuilt wheels for Python 3.8–3.11, so installs cleanly. (PyPI)
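That version boundary can be encoded as a quick pre-flight check before running pip. The helper below is my own, based on the wheel tags numpy 1.23.x publishes on PyPI:

```python
import sys

def numpy_1_23_wheel_available(py=sys.version_info[:2]):
    """numpy 1.23.x published prebuilt wheels for CPython 3.8 through 3.11 only."""
    return (3, 8) <= tuple(py) <= (3, 11)

print(numpy_1_23_wheel_available((3, 11)))  # True  -> pip installs a wheel cleanly
print(numpy_1_23_wheel_available((3, 13)))  # False -> pip falls back to a source build
```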

2) Let Manager install node dependencies (temporarily weaken security)

Edit Manager config and restart ComfyUI.

  • Open: ~/comfy/ComfyUI/user/default/ComfyUI-Manager/config.ini

  • Set:

    [default]
    security_level = weak
    
  • Save, restart ComfyUI, then in Manager click Check Missing on the SadTalker node to auto-install its Python requirements. Revert security_level to normal after installing. (GitHub)

If you can’t edit through the UI, edit the file on disk at the same path and restart. (GitHub)
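If you prefer to script the change, the file is plain INI and Python’s configparser handles it. A sketch assuming the config.ini path given above; back the file up first and remember to revert to normal afterwards:

```python
import configparser
from pathlib import Path

def set_security_level(cfg_path: Path, level: str = "weak") -> None:
    """Rewrite security_level in ComfyUI-Manager's config.ini."""
    cfg = configparser.ConfigParser()
    cfg.read(cfg_path)
    if not cfg.has_section("default"):
        cfg.add_section("default")
    cfg["default"]["security_level"] = level
    with open(cfg_path, "w") as f:
        cfg.write(f)

# Example (adjust to your install):
# set_security_level(Path.home() / "comfy/ComfyUI/user/default/ComfyUI-Manager/config.ini")
```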

3) Manually install missing bits (if Manager can’t)

You already placed the model files correctly. Now finish Python deps in the 3.11 venv:

# still inside comfy311 venv
# SadTalker node deps
python -m pip install -r custom_nodes/Comfyui-SadTalker/requirements.txt

# Video + audio helpers often used by the workflow
python -m pip install -r custom_nodes/comfyui-videohelpersuite/requirements.txt
python -m pip install imageio-ffmpeg

VideoHelperSuite and SadTalker expect a working ffmpeg in PATH; your system ffmpeg 7.x is fine. If nodes still complain, ensure which ffmpeg returns a path visible to the venv, or install OS ffmpeg. (GitHub)


Quick sanity checks

# 1) confirm Python and NumPy versions
python - << 'PY'
import platform, numpy; print("Python", platform.python_version(), "NumPy", numpy.__version__)
PY
# Expect: Python 3.11.x, NumPy 1.23.4 (or 1.23.x)

# 2) start ComfyUI with the 3.11 venv
python main.py --listen 0.0.0.0 --port 8188

Load your SadTalker workflow again. The “Some Nodes Are Missing” list should be gone once Comfyui-SadTalker and display nodes load.


Why the previous installs failed

  • numpy==1.23.4 is too old for Python 3.12/3.13, so pip tried to build from source and hit the Python 3.12+ removal of pkgutil.ImpImporter, yielding the exact error you pasted. Using Python 3.11 resolves this by using a wheel. (PyPI)
  • ComfyUI-Manager blocked Git installs because of its security level; change it to weak to install, then restore. (GitHub)
  • The node itself warns it’s built for an older ComfyUI package, so a conservative Python/runtime is safer. (GitHub)

Reference links

  • Comfyui-SadTalker repo and notes about older ComfyUI compatibility and required model files. (GitHub)
  • NumPy release info showing Python support boundaries; 1.23.x supports up to 3.11. (PyPI)
  • Python ≥3.12 removes pkgutil.ImpImporter, which triggers the pip/setuptools error you saw. (Stack Overflow)
  • ComfyUI-Manager config paths and the security_level fix. (GitHub)
  • VideoHelperSuite ffmpeg expectation. (GitHub)

If you must stay on Python 3.13, the workaround is to edit custom_nodes/Comfyui-SadTalker/requirements.txt and replace numpy==1.23.4 with numpy>=1.26 and then reinstall. It can work, but it’s unsupported by the node author and may break other pinned deps; 3.11 is the low-risk route. (numpy.org)


Thanks again.
For the previous installation, I used the following procedure. I added these lines to the .bashrc file:

export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init - bash)"
eval "$(pyenv virtualenv-init -)"

Then:

$ sudo apt install python3 python3-pip python3-venv git
$ python3 -m venv comfy-env
$ source comfy-env/bin/activate
$ pip install comfy-cli
$ comfy install
$ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

Now I want to install another ComfyUI, but I don’t want it to interfere with the previous one. What is the solution?


Hmm… Like this?


Run a second ComfyUI in its own folder, its own Python venv, and on a different port. You can also share the models/ tree via a symlink to save disk. This keeps both installs isolated.

Below is a clean Linux plan that matches your setup.

Plan A — Use comfy-cli with a separate workspace (easiest)

Why this works: comfy-cli lets you target a specific install path with --workspace, and you can launch each instance on its own port. (GitHub)

  1. Create a new venv (use Python 3.11 for widest custom-node compatibility)
# keep your existing venv untouched
python3.11 -m venv ~/venvs/comfy-sadtalker
source ~/venvs/comfy-sadtalker/bin/activate
pip install -U pip setuptools wheel
pip install comfy-cli

(If python3.11 isn’t present, install it or use pyenv to add 3.11.)

  2. Install ComfyUI into a new workspace directory
# this will create ~/comfy/sadtalker/ComfyUI
comfy --workspace=~/comfy/sadtalker install

--workspace=<path> installs ComfyUI under that path instead of reusing your first install. (GitHub)

  3. Launch it on a different port (example 8190)
# listen locally and avoid port clash with your first instance
comfy --workspace=~/comfy/sadtalker launch -- --listen 127.0.0.1 --port 8190

--listen and --port are standard ComfyUI server flags; running two instances on different ports is supported. (GitHub)

  4. (Optional) Make this workspace the default only when you want to
comfy set-default ~/comfy/sadtalker
# undo later by setting your other workspace as default

The CLI supports selecting which workspace subsequent comfy commands act on. (GitHub)

  5. (Optional) Share the big models/ folder to save disk
# inside the new ComfyUI folder
cd ~/comfy/sadtalker/ComfyUI
rm -rf models
ln -s ~/comfy/ComfyUI/models models   # point to your first install's models

ComfyUI’s folder layout includes a top-level models/ directory; a symlink lets both installs see the same checkpoints. (comfyui-wiki.com)
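The symlink step can be scripted defensively so you don’t delete the wrong tree. A sketch; the paths in the comment are examples from this thread, adjust to your installs:

```python
import shutil
from pathlib import Path

def share_models(primary: Path, secondary: Path) -> None:
    """Replace the second install's models/ with a symlink to the first install's."""
    if secondary.is_symlink():
        return  # already shared; calling again is a no-op
    if secondary.exists():
        shutil.rmtree(secondary)  # removes the duplicate tree; double-check the path!
    secondary.symlink_to(primary, target_is_directory=True)

# Example (adjust paths):
# share_models(Path.home() / "comfy/ComfyUI/models",
#              Path.home() / "comfy/sadtalker/ComfyUI/models")
```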

Plan B — Manual second install (no comfy-cli)

  1. New venv + new folder
python3.11 -m venv ~/venvs/comfy-sadtalker
source ~/venvs/comfy-sadtalker/bin/activate
git clone https://github.com/comfyanonymous/ComfyUI.git ~/comfy/sadtalker/ComfyUI
cd ~/comfy/sadtalker/ComfyUI
pip install -U pip setuptools wheel
pip install -r requirements.txt

Manual install in its own directory keeps it separate by design. (ComfyUI)

  2. Launch on another port
python main.py --listen 127.0.0.1 --port 8190

These flags are the standard way to bind address/port. (Reddit)

  3. (Optional) Share models/ via symlink as shown above. (comfyui-wiki.com)

Tips that prevent cross-talk

  • Never reuse the same venv. One venv per ComfyUI keeps Python packages isolated. Official docs recommend virtual environments for this reason. (ComfyUI)

  • Different ports. Keep the first on 8188 and the second on 8190 (or any free port). Running multiple servers on different ports is a common pattern; several users do this. (GitHub)

  • Use --workspace with comfy-cli so node installs and updates target the right copy. You can check which workspace you’re operating on with comfy which or prefix commands with --workspace=.... (GitHub)

  • FFmpeg and GPU libs are system-wide and can be shared. Only Python packages and ComfyUI files need isolation.

  • If you run both concurrently, start each from its own venv and folder:

    # terminal A
    source ~/venvs/comfy-env/bin/activate
    cd ~/comfy/ComfyUI
    python main.py --listen 127.0.0.1 --port 8188
    
    # terminal B
    source ~/venvs/comfy-sadtalker/bin/activate
    cd ~/comfy/sadtalker/ComfyUI
    python main.py --listen 127.0.0.1 --port 8190
    

    The --port flag is the only thing you need to avoid collisions. (Reddit)

Why this is safe

  • Workspaces let comfy-cli install and run ComfyUI in a path you choose, so nothing touches your first install. (GitHub)
  • Virtual environments isolate Python deps (a frequent source of conflicts with custom nodes like SadTalker). Official Comfy docs also advocate venvs for clean installs. (ComfyUI)
  • Binding to different ports cleanly separates running servers; ComfyUI supports --listen/--port and the community uses this for multiple instances. (Reddit)
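Before launching the second instance you can check that the port you picked is actually free; a minimal sketch:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """True if nothing (e.g. another ComfyUI server) is bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True   # bind succeeded, so the port was free
        except OSError:
            return False  # something is already listening there

# e.g. launch the second instance on 8190 only if port_is_free(8190)
```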

Can two ComfyUIs run at the same time? I guess the first ComfyUI is taking over the GPU and the other version can’t run!

1 Like

Hello,
To make sure there was no interference, I created another username and installed ComfyUI and SadTalker as follows:

$ git clone https://github.com/pyenv/pyenv.git ~/.pyenv
$ git clone https://github.com/pyenv/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv

$ cp ~/.bashrc ~/.bashrc.backup 
$ grep -v 'pyenv\|PYENV' ~/.bashrc > ~/.bashrc.temp
$ mv ~/.bashrc.temp ~/.bashrc

$ cat >> ~/.bashrc << 'EOF'
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init - bash)"
eval "$(pyenv virtualenv-init -)"
EOF

$ source ~/.bashrc

$ pyenv install 3.11.9
$ pyenv global 3.11.9
$ python3 -m venv comfy-env
$ source comfy-env/bin/activate
$ pip install -U pip setuptools wheel
$ pip install comfy-cli
$ comfy install
$ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
$ comfy --workspace=~/comfy/sadtalker install
$ comfy --workspace=~/comfy/sadtalker launch -- --listen 172.21.50.69 --port 8190

Now:

(comfy-env) sadtalker@LocalAI:~/comfy$ ls
ComfyUI  sadtalker

Now what are the next steps? Should I create the checkpoints directory?

1 Like

Yeah. There’s not much point in running them simultaneously. It’s just that you can. What’s important is that you can install two or three separately. Well, you probably wouldn’t use them at the same time anyway…:sweat_smile:


You can run two ComfyUIs concurrently. They must use different folders, different Python venvs, and different ports. The GPU is shared. If both instances run heavy graphs at the same time, VRAM contention can cause OOM in one or both. One-GPU hosts: stagger jobs or expect queueing. Multi-GPU hosts: pin one ComfyUI process per GPU. (Reddit)

What to do next in your new workspace (~/comfy/sadtalker/ComfyUI)

You already launched on --port 8190. Finish node install and weights, then relaunch.

1) Install the SadTalker node and its deps

# in the new workspace
cd ~/comfy/sadtalker/ComfyUI/custom_nodes
git clone https://github.com/haomole/Comfyui-SadTalker
# deps (same venv you launched with)
python -m pip install -r Comfyui-SadTalker/requirements.txt

The repo expects an older ComfyUI but works if deps are satisfied. (GitHub)

2) Install helpers your workflow expects

# Video I/O + MP4 muxing
git clone https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
python -m pip install -r ComfyUI-VideoHelperSuite/requirements.txt

# Text display node (if your workflow references ShowText)
git clone https://github.com/fairy-root/ComfyUI-Show-Text

VHS needs a working ffmpeg in PATH; you already have it. (GitHub)

3) Create the SadTalker checkpoints folder (yes, you need it here too)

Option A: copy the four model files into the new workspace:

mkdir -p ~/comfy/sadtalker/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints
cp ~/comfy/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints/* \
   ~/comfy/sadtalker/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints/

Option B: symlink to avoid duplication:

mkdir -p ~/comfy/sadtalker/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints
ln -s ~/comfy/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints/* \
      ~/comfy/sadtalker/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints/

Required files in that folder:

SadTalker_V0.0.2_256.safetensors
SadTalker_V0.0.2_512.safetensors
mapping_00109-model.pth.tar
mapping_00229-model.pth.tar

These are the official release assets. (GitHub)
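To keep a partial copy from going unnoticed, a small check loop helps (the `CKPT_DIR` default is the path used above; adjust it if yours differs):

```shell
# Verify all four required SadTalker files exist in the checkpoints folder
CKPT_DIR="${CKPT_DIR:-$HOME/comfy/sadtalker/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints}"
missing=0
for f in SadTalker_V0.0.2_256.safetensors SadTalker_V0.0.2_512.safetensors \
         mapping_00109-model.pth.tar mapping_00229-model.pth.tar; do
  [ -f "$CKPT_DIR/$f" ] || { echo "MISSING: $f"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all 4 checkpoint files present"
```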

4) Add the GFPGAN/facexlib weights again (or symlink)

mkdir -p ~/comfy/sadtalker/ComfyUI/gfpgan/weights
# copy or symlink from the other install:
cp ~/comfy/ComfyUI/gfpgan/weights/* ~/comfy/sadtalker/ComfyUI/gfpgan/weights/
# or
ln -s ~/comfy/ComfyUI/gfpgan/weights/* ~/comfy/sadtalker/ComfyUI/gfpgan/weights/

VHS and many face-enhance pipelines assume these standard filenames and location. (GitHub)

5) Relaunch the second ComfyUI cleanly

# still in your 3.11 venv
comfy --workspace=~/comfy/sadtalker launch -- --listen 172.21.50.69 --port 8190
# or bind locally:
comfy --workspace=~/comfy/sadtalker launch -- --listen 127.0.0.1 --port 8190

CLI flags --listen and --port are standard. (Comfy Docs)

Running two ComfyUIs on one GPU without stepping on each other

  • It works, but VRAM is first-come, first-served. One instance doing a large render can starve the second. On a single GPU, prefer one heavy job at a time. Monitor with nvidia-smi. PyTorch allocators do not “pool VRAM” across processes. (PyTorch Forums)

  • Pin by GPU on multi-GPU hosts. Run one instance per GPU. Use ComfyUI’s device flag or environment variable:

    # example: pin instance to GPU 1
    CUDA_VISIBLE_DEVICES=1 comfy --workspace=... launch -- --port 8191
    

    Recent builds expose a --cuda-device flag in CLI wrappers as well.

  • Reserve headroom if needed. Some builds support --reserve-vram to keep a margin for the OS/encoders, but behavior varies; test on your driver.

  • Production pattern: one ComfyUI process per GPU, each with its own queue; route jobs externally. There is no fair-share scheduler inside a single process.
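To see which process is holding VRAM at any moment, `nvidia-smi` can list per-process usage (header-only output just means no compute apps are currently running):

```shell
# Per-process GPU memory, one row per CUDA process
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
else
  echo "nvidia-smi not found (no NVIDIA driver on this host)"
fi
```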

Quick health checks

# In the second venv/workspace
python - << 'PY'
import torch, os
print("CUDA available:", torch.cuda.is_available(), "CUDA:", torch.version.cuda)
print("VISIBLE:", os.environ.get("CUDA_VISIBLE_DEVICES"))
PY

# Confirm nodes are registered
grep -E "custom_nodes|IMPORT FAILED" -n ~/comfy/sadtalker/ComfyUI/log* || true

Install PyTorch wheels that match CUDA 12.x if you haven’t already. Verify torch.cuda.is_available() is True. (GitHub)

Summary

  • Two ComfyUIs can run at once. Use different venvs, folders, and ports. Queue contention and VRAM are the limits on one GPU. (Reddit)
  • In the new workspace, install the Comfyui-SadTalker node, VideoHelperSuite, and Show-Text; then create or symlink the checkpoints and gfpgan/weights directories. (GitHub)
  • For multi-user reliability, prefer “one process per GPU” and route jobs; ComfyUI does not share VRAM or schedule fairly by itself.

Hello,
I copied the files:

$ ls ~/comfy/sadtalker/ComfyUI/custom_nodes/Comfyui-SadTalker/SadTalker/checkpoints
mapping_00109-model.pth.tar  mapping_00229-model.pth.tar  SadTalker_V0.0.2_256.safetensors  SadTalker_V0.0.2_512.safetensors
$
$ ls ~/comfy/sadtalker/ComfyUI/gfpgan/weights
alignment_WFLW_4HG.pth  detection_Resnet50_Final.pth  GFPGANv1.4.pth  parsing_parsenet.pth

And I installed the following sections through Custom Node Manager:

ComfyUI-VideoHelperSuite
ComfyUI-Custom-Scripts
ComfyUI-Show-Text

But I got the same error as before:

1 Like

I don’t really know about ComfyUI errors… Anyway, since it’s not behaving as expected, it seems best to isolate the problem first.

In cases like this, the pitfall is usually somewhere unrelated, and we've fallen straight into it…


You already did the “data” part correctly (checkpoints + GFPGAN weights).
The remaining problem is code, not files: the SadTalker plugin itself is still not loading, so ComfyUI never registers these node types:

  • SadTalker
  • ShowAudio
  • LoadRefVideo
  • ShowVideo

Copying model files does not change that. To isolate and fix it, debug the plugin import step by step.

Below is a safe, beginner-friendly debugging procedure.


1. Understand what the popup actually means

When ComfyUI shows:

“Some Nodes Are Missing … SadTalker, ShowAudio, LoadRefVideo, ShowVideo”

it means:

  • On startup, ComfyUI did not register any node with those names.

  • That happens only if:

    • The plugin folder is missing/incorrect or
    • The plugin crashed while loading (Python import error) or
    • ComfyUI is running from a different Python environment than the one where you installed its dependencies.

The official advice from ComfyUI docs and the SadTalker Reddit thread is:
look at the terminal and find the import error for that custom node. (ComfyUI)

The checkpoints/weights are used later, at runtime. They do not affect whether the node exists or not.
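For background: ComfyUI discovers nodes by importing each pack's __init__.py and reading the NODE_CLASS_MAPPINGS dict it exports. If that import crashes, the dict is never read and the node names simply do not exist. You can confirm the plugin at least declares them (PLUGIN_DIR here is an assumption; point it at your actual clone):

```shell
# If this grep finds nothing, __init__.py is missing or the clone is incomplete
PLUGIN_DIR="${PLUGIN_DIR:-$HOME/comfy/sadtalker/ComfyUI/custom_nodes/Comfyui-SadTalker}"
grep -n "NODE_CLASS_MAPPINGS" "$PLUGIN_DIR/__init__.py" 2>/dev/null \
  || echo "no NODE_CLASS_MAPPINGS found: __init__.py missing or clone incomplete"
```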


2. Make sure you’re in the right ComfyUI + venv

You now have two installs. For debugging, focus on only the new one.

# 1) activate the venv you created for the new ComfyUI
source ~/comfy/comfy-env/bin/activate   # adjust if your venv path is different

# 2) go to the new workspace
cd ~/comfy/sadtalker/ComfyUI

From now on, every pip install and every comfy launch should be run with this venv active.
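A quick sanity check that the venv really is the active interpreter (standard Python/venv behavior, nothing ComfyUI-specific):

```shell
# Both lines should point inside your venv (e.g. .../comfy-env/...)
which python
python -c "import sys; print(sys.prefix)"
```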


3. Confirm the plugin folder is correct

Still in that directory:

cd ~/comfy/sadtalker/ComfyUI/custom_nodes
ls

You should see Comfyui-SadTalker in the list.

Then:

ls Comfyui-SadTalker

You should see (at least):

  • SadTalker/
  • nodes/
  • web/
  • workflow/
  • __init__.py
  • requirements.txt

This is the layout from the official repo. (GitHub)

If something big is missing (for example __init__.py or nodes/), delete the folder and re-clone:

rm -rf Comfyui-SadTalker
git clone https://github.com/haomole/Comfyui-SadTalker.git

Your checkpoints/weights paths are already correct and can stay as they are; they live under that folder.


4. Start ComfyUI from the terminal and capture the real error

Now run ComfyUI from the terminal, not from a desktop launcher:

# venv should still be active
cd ~/comfy/sadtalker
comfy --workspace=~/comfy/sadtalker launch -- --listen 127.0.0.1 --port 8190

On startup you will see a list like:

0.0 seconds: .../custom_nodes/comfyui-videohelpersuite
0.0 seconds: .../custom_nodes/ComfyUI-Show-Text
0.0 seconds (IMPORT FAILED): .../custom_nodes/Comfyui-SadTalker   <-- problem
...

Right above the (IMPORT FAILED) line you will see a Python traceback. For this plugin, a very common one is:

ComfyUI/custom_nodes/Comfyui-SadTalker module for custom nodes:
No module named 'moviepy.editor'

This exact case is reported in a SadTalker ComfyUI Reddit thread with the same “Missing Node Types: SadTalker, ShowAudio, LoadRefVideo, ShowVideo” error. (Reddit)

Whatever text you see just above (IMPORT FAILED) tells you which dependency is missing.

In short: find the first (IMPORT FAILED) line, read the line or two above it, and that error string is what you must fix.
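Tracebacks scroll past quickly, so it helps to tee the startup output into a file and grep it afterwards (the log path and grep context size are just suggestions):

```shell
# Save startup output, then pull out the failure plus the lines above it
comfy --workspace="$HOME/comfy/sadtalker" launch -- --listen 127.0.0.1 --port 8190 \
  2>&1 | tee /tmp/comfy-start.log
# in another terminal, once the "Import times" list has printed:
grep -n -B 5 "IMPORT FAILED" /tmp/comfy-start.log || echo "no import failures logged"
```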


5. Install the missing Python dependency in this venv

5-A. If the error mentions moviepy.editor

This is the most common one:

No module named 'moviepy.editor'

Install the exact version the Reddit thread recommends inside your ComfyUI venv:

# venv: comfy-env must be active
cd ~/comfy/sadtalker/ComfyUI/custom_nodes/Comfyui-SadTalker
python -m pip install moviepy==1.0.3

That thread solved the “missing node types” problem this exact way, using the ComfyUI Python executable on Windows; on Linux the idea is the same: install moviepy into the ComfyUI environment, not the system Python. (Reddit)

5-B. If the error mentions some other module

For example:

  • No module named 'librosa'
  • No module named 'soundfile'
  • No module named 'imageio_ffmpeg'

Then:

python -m pip install librosa soundfile 'imageio[ffmpeg]'   # quote the extra so the shell doesn't glob it

Or, to be safe, you can use the plugin’s requirements.txt in this venv:

cd ~/comfy/sadtalker/ComfyUI/custom_nodes/Comfyui-SadTalker
python -m pip install -r requirements.txt

This file is provided by the plugin author for exactly this purpose. (GitHub)

If pip install -r ever fails again, install whatever module it complains about manually (same venv), then retry.
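To test several candidate modules at once instead of restarting ComfyUI after each install, an import loop works. The module list below is a guess at common SadTalker dependencies, not the plugin's exact requirements.txt:

```shell
python - << 'PY'
# Report which likely dependencies import cleanly in this venv
import importlib
for mod in ["moviepy", "librosa", "soundfile", "imageio"]:
    try:
        importlib.import_module(mod)
        print("OK     ", mod)
    except ImportError:
        print("MISSING", mod)
PY
```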


6. Restart ComfyUI and re-check the import line

After installing dependencies:

cd ~/comfy/sadtalker
comfy --workspace=~/comfy/sadtalker launch -- --listen 127.0.0.1 --port 8190

Check the “Import times for custom nodes” section again. You want to see:

0.0 seconds: .../custom_nodes/Comfyui-SadTalker

(without (IMPORT FAILED) and without a traceback above it).

Official ComfyUI troubleshooting docs also say: if you suspect a custom node, disable others and then re-enable half at a time until you find the broken one. In your case, we already know the broken one (Comfyui-SadTalker), so you use the same idea but focused: fix whatever import error that node prints until it no longer fails. (ComfyUI)


7. Verify in the UI that the nodes now exist

Once Comfyui-SadTalker imports cleanly:

  1. Open ComfyUI in your browser on port 8190.
  2. Right-click on the canvas, or use the node search.
  3. Type SadTalker.

If the plugin loaded, you should see:

  • SadTalker
  • ShowAudio
  • LoadRefVideo
  • ShowVideo

and be able to add them to the graph. These node names are defined by this plugin and documented on comfyai.run’s node pages. (ComfyAI)

If you can add them manually, loading the workflow will no longer show the “Some Nodes Are Missing” dialog.


8. Optional: isolate with “only one node pack active”

If you ever suspect that a different custom node is interfering (rare, but possible):

  1. Temporarily rename other custom node folders:

    cd ~/comfy/sadtalker/ComfyUI/custom_nodes
    mv ComfyUI-VideoHelperSuite _ComfyUI-VideoHelperSuite.off
    mv ComfyUI-Custom-Scripts  _ComfyUI-Custom-Scripts.off
    mv ComfyUI-Show-Text       _ComfyUI-Show-Text.off
    # leave only: Comfyui-SadTalker
    
  2. Restart ComfyUI and see if Comfyui-SadTalker still says (IMPORT FAILED).

This is the same “binary search on custom nodes” method the official docs recommend, just simplified to “test SadTalker alone first”. (ComfyUI)

If it loads fine alone, then some other pack is conflicting and you can re-enable them one by one.


9. Mental model summary (beginner-safe)

  • The popup cares about node classes, not model files.
    Until Comfyui-SadTalker imports successfully, SadTalker / ShowAudio / LoadRefVideo / ShowVideo do not exist at all.

  • Your checkpoints and GFPGAN weights are correct.
    Those matter later, when the node runs. They will not fix a missing-node error.

  • Isolation recipe:

    1. Run ComfyUI from terminal.
    2. Find (IMPORT FAILED): ...Comfyui-SadTalker.
    3. Read the error text directly above (e.g., No module named 'moviepy.editor').
    4. python -m pip install <that package> inside the ComfyUI venv.
    5. Restart and repeat until there is no (IMPORT FAILED) line.

Once that import line is clean, your SadTalker workflow should load without any “Some Nodes Are Missing” dialog.

Hello,
Please take a look at this:

(comfy-env) sadtalker@LocalAI:~/comfy/sadtalker/ComfyUI$ cd ~/comfy/sadtalker/ComfyUI/custom_nodes
(comfy-env) sadtalker@LocalAI:~/comfy/sadtalker/ComfyUI/custom_nodes$ ls
Comfyui-SadTalker
(comfy-env) sadtalker@LocalAI:~/comfy/sadtalker/ComfyUI/custom_nodes$ ls Comfyui-SadTalker
SadTalker

I went into the SadTalker directory and the checkpoints directory was there.

1 Like