How could I convert a LoRA .safetensors or .ckpt file into the format that diffusers can process? #2363
Comments
I would pull down […]
@jndietz Thanks, but do you know whether convert_original_stable_diffusion_to_diffusers.py is the correct script for converting a LoRA file from *.safetensors to the diffusers format? I thought it could not be used to convert LoRA files, but since I cannot find any other related scripts, I gave it a try and failed.
@garyhxfang Based on this: https://huggingface.co/spaces/diffusers/sd-to-diffusers/blob/main/app.py It seems like, yes, […]
Looking at the […]
@jndietz Thanks, let me try again.
I still cannot convert the safetensors LoRA file. Could @patrickvonplaten or @patil-suraj take a look at this issue? I also found that we have no solution for converting/loading textual inversion embeddings in .pt format; I spent all day searching for one but didn't find any approach.
Hey @garyhxfang, note that the conversion script you used won't work, as it expects a full model checkpoint. Could you upload your LoRA embeddings somewhere so that we can write a new conversion script? How did you get the LoRA checkpoints? Which notebook did you use for training?
Thanks a lot for your response, @patrickvonplaten! I downloaded the LoRA file from civitai.com; you can check out the website too. It is one of the most important websites/communities where people share their Stable Diffusion and LoRA models, and I think many of them train their LoRA models with AUTOMATIC1111/stable-diffusion-webui. The LoRA models I would like to convert are […] Since many popular LoRA models are shared in .safetensors format, and end users love LoRA because it performs so well at generating great images, it is quite important for us developers that diffusers have a generic approach/script for converting .safetensors LoRA models into a format diffusers can load. That way, developers using diffusers could build on the great models already in the community to create better models or applications. Thanks very much.
BTW this seems related: #2326 |
Better yet, support […]
I ran into this too, trying to load or convert the LatentLabs360 safetensors LoRA file so I can use it with the new Panorama pipeline, but no matter what I tried in the convert script or when loading it as a pretrained model, I couldn't get it to work. The model is here if anyone wants to try: https://civitai.com/models/10753/latentlabs360. I now see that convert_lora_safetensor_to_diffusers.py is being worked on, so I guess I can wait for that to be merged.
Look at this: https://github.com/kohya-ss/sd-scripts/blob/main/networks/merge_lora_old.py
It seems like this will soon be merged into the diffusers library (#2448); in the meantime, this script may be useful: https://github.com/haofanwang/Lora-for-Diffusers/blob/main/format_convert.py
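The core of such conversion scripts is renaming tensors: kohya-ss/webui LoRA files use flat, underscore-joined names (e.g. `lora_unet_down_blocks_0_...`), while diffusers addresses modules with dotted paths. The sketch below is illustrative only, not the actual script: the function name and the fix-up list are a representative subset of what a real converter handles (it also pairs `lora_down`/`lora_up` weights, processes text-encoder keys, and applies alpha scaling).

```python
# Illustrative sketch of kohya-style -> diffusers key renaming.
# Real converters handle far more cases; this shows only the naming idea.

def kohya_to_diffusers(key: str) -> str:
    """Map e.g. 'lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight'
    to the diffusers module path 'down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q'."""
    name = key.split(".")[0]                  # drop the '.lora_down.weight' suffix
    if name.startswith("lora_unet_"):
        name = name[len("lora_unet_"):]
    name = name.replace("_", ".")             # underscores become attribute separators
    # Restore attribute names that legitimately contain underscores.
    for compound in ("down.blocks", "up.blocks", "mid.block", "transformer.blocks",
                     "proj.in", "proj.out", "to.q", "to.k", "to.v", "to.out"):
        name = name.replace(compound, compound.replace(".", "_"))
    return name
```

With the renamed keys in hand, a converter then merges each low-rank `lora_up @ lora_down` product into the matching base weight, which is why the scripts above need both the LoRA file and a base checkpoint.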
So is it currently possible to train a LoRA model for SD 2.1 and then convert it to a safetensors or ckpt/pt file? I see PR #2403 added a safetensors-to-diffusers script, but what about the reverse?
@jndietz @garyhxfang Hi, I am a real novice at coding and AIGC, and all I know about Python is its basic syntax. I can't understand what is meant by "pull down […]
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored. |
Hey @garyhxfang, we just added a […]
cc @sayakpaul Let's maybe add this for LoRA loading as well, no?
Quick summary […]
cc @sayakpaul
When I try […]
I'm currently using […]
Could you update safetensors to the latest stable version and retry?
@sayakpaul Getting the same error with the latest safetensors stable v0.3.0...
Alternatively, is there a Hugging Face Space that can perform this conversion to a .bin file?
Gentle ping here, @sayakpaul. This is also related to the other LoRA issues, no? E.g. #2326 (comment)
Maybe PR #3490 will fix this. We'll see. CC: @garyhxfang @athenawisdoms
Gentle ping on this issue to see whether it is still a problem, given all the discussion above.
Can someone summarize and point to the real solution (if implemented yet) as a bookend to this discussion?
See if this section of the documentation helps: […]
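For recent diffusers releases, the thread's eventual resolution is that A1111/kohya-format LoRA files can be loaded directly, without a conversion step. A minimal sketch, assuming diffusers >= 0.17, a CUDA machine, and a local civitai-style file (the file name below is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load an A1111/kohya-format LoRA directly; "my_lora.safetensors" is a placeholder.
pipe.load_lora_weights(".", weight_name="my_lora.safetensors")

# The LoRA strength can be adjusted via the cross-attention scale.
image = pipe(
    "a 360 panorama of a forest",
    cross_attention_kwargs={"scale": 0.8},
).images[0]
```

Note this loads the adapter at inference time rather than producing a converted checkpoint on disk, which covers the most common use case discussed in this thread.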
How fascinating! This is my first time seeing a new feature being requested and then added, from start to finish. Wow!
Describe the bug
I have some LoRA models in .safetensors format and tried to convert them into a format that diffusers can process,
but I could not find any documentation or scripts to achieve that.
So I tried to convert the files with the convert_original_stable_diffusion_to_diffusers.py script, but it didn't work.
Could somebody provide a guideline or script for how I should convert the LoRAs?
Reproduction
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path /xxx/yyy/zzz.safetensors --scheduler_type euler-ancestral --dump_path /aaa/bbb/ccc --from_safetensors
and I got the following error:
───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /root/imagine/notebook/convert.py:105 in │
│ │
│ 102 │ parser.add_argument("--device", type=str, help="Device to use (e.g │
│ 103 │ args = parser.parse_args() │
│ 104 │ │
│ ❱ 105 │ pipe = load_pipeline_from_original_stable_diffusion_ckpt( │
│ 106 │ │ checkpoint_path=args.checkpoint_path, │
│ 107 │ │ original_config_file=args.original_config_file, │
│ 108 │ │ image_size=args.image_size, │
│ │
│ /root/miniconda3/lib/python3.8/site-packages/diffusers/pipelines/stable_diff │
│ usion/convert_from_ckpt.py:945 in │
│ load_pipeline_from_original_stable_diffusion_ckpt │
│ │
│ 942 │ unet_config["upcast_attention"] = upcast_attention │
│ 943 │ unet = UNet2DConditionModel(**unet_config) │
│ 944 │ │
│ ❱ 945 │ converted_unet_checkpoint = convert_ldm_unet_checkpoint( │
│ 946 │ │ checkpoint, unet_config, path=checkpoint_path, extract_ema=ex │
│ 947 │ ) │
│ 948 │
│ │
│ /root/miniconda3/lib/python3.8/site-packages/diffusers/pipelines/stable_diff │
│ usion/convert_from_ckpt.py:335 in convert_ldm_unet_checkpoint │
│ │
│ 332 │ │
│ 333 │ new_checkpoint = {} │
│ 334 │ │
│ ❱ 335 │ new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dic │
│ 336 │ new_checkpoint["time_embedding.linear_1.bias"] = unet_state_dict[ │
│ 337 │ new_checkpoint["time_embedding.linear_2.weight"] = unet_state_dic │
│ 338 │ new_checkpoint["time_embedding.linear_2.bias"] = unet_state_dict[ │
╰──────────────────────────────────────────────────────────────────────────────╯
KeyError: 'time_embed.0.weight'
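The KeyError is the tell-tale sign of the mismatch discussed above: the conversion script indexes full-checkpoint tensors such as the UNet's time-embedding weights, which a LoRA file simply does not contain, since LoRA safetensors hold only low-rank adapter tensors with `lora_`-prefixed names. A hedged sketch of a quick sanity check (the helper names are mine, and the key names are representative examples, not an exhaustive list):

```python
# Distinguish a full SD checkpoint from a LoRA file by its tensor names.
# In practice you would get `keys` from safetensors.safe_open(...).keys().

def looks_like_full_checkpoint(keys) -> bool:
    """Full SD checkpoints carry the UNet's time-embedding weights."""
    return any("time_embed" in k for k in keys)

def looks_like_lora(keys) -> bool:
    """LoRA files carry only low-rank adapter tensors with lora_ prefixes."""
    return any(k.startswith(("lora_unet_", "lora_te_")) for k in keys)

# Representative key names from each file type (illustrative):
full_keys = ["model.diffusion_model.time_embed.0.weight"]
lora_keys = ["lora_unet_down_blocks_0_attentions_0_proj_in.lora_down.weight"]

assert looks_like_full_checkpoint(full_keys) and not looks_like_lora(full_keys)
assert looks_like_lora(lora_keys) and not looks_like_full_checkpoint(lora_keys)
```

Running such a check before invoking the conversion script would turn this opaque KeyError into a clear "this is a LoRA file, not a checkpoint" message.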
Logs
No response
System Info
diffusers 0.11.0, Python 3.8