question about the lora : body2img_V6_kisekaeichi_dim4_1e-3-000140.safetensors
#1 by shaoniana1997 - opened
Hi tori29umai0123, thank you for your great work on the Oneframe_kisekaeichi.json ComfyUI workflow. I'd like to ask a question about the training process of this LoRA.
Was this LoRA trained with the hv_train_network.py script or the fpack_train_network.py script from the musubi_tuner repository? To put it another way, is this LoRA a FramePack LoRA?
The training guide in the musubi_tuner repository says the dataset should be constructed as shown in the figure below. If I want to reproduce the output of this workflow, how should I set image_path, control_path_0, and control_path_1?
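To make the question concrete, here is the kind of metadata entry I have in mind. This is only a sketch: the paths and directory names are placeholders I made up, and the field names are the ones mentioned above; the exact schema should be checked against the musubi_tuner dataset documentation.

```json
{
  "image_path": "dataset/target/0001.png",
  "control_path_0": "dataset/control_a/0001.png",
  "control_path_1": "dataset/control_b/0001.png",
  "caption": "placeholder caption for this sample"
}
```

For example, should image_path be the final dressed-up result image, with control_path_0 and control_path_1 holding the character reference and the clothing reference?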