Support LoRA for clip text encoder in diffusers #2479
Conversation
The documentation is not available anymore as the PR was closed or merged.
I have run
…ace PEFT library. Based on huggingface/diffusers#2479 from haofanwang, then modified to get working.
thanks @haofanwang ❤
@patrickvonplaten can you take a look at this PR? This support should be very useful for training a LoRA.
    lora_dropout=args.lora_text_encoder_dropout,
    bias=args.lora_text_encoder_bias,
)
text_encoder = LoraModel(config, text_encoder)
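For readers unfamiliar with what wrapping the text encoder in a LoRA adapter actually does: LoRA adds a low-rank update to a frozen weight, W' = W + (alpha / r) * B @ A. The following is a minimal, dependency-free sketch of that merge step; the helper names and toy dimensions are illustrative, not the PEFT API.

```python
# Sketch of the LoRA weight merge W' = W + (alpha / r) * B @ A.
# Plain Python lists stand in for tensors; names here are illustrative.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def lora_merge(w, a, b, alpha, r):
    """Return w + (alpha / r) * (b @ a), the merged adapted weight."""
    delta = matmul(b, a)
    scale = alpha / r
    return [[wij + scale * dij for wij, dij in zip(w_row, d_row)]
            for w_row, d_row in zip(w, delta)]

# 2x2 frozen weight with a rank-1 adapter (r = 1).
w = [[1.0, 0.0], [0.0, 1.0]]
a = [[1.0, 2.0]]            # A: r x in_features
b = [[0.5], [0.25]]         # B: out_features x r
merged = lora_merge(w, a, b, alpha=1.0, r=1)
print(merged)  # [[1.5, 1.0], [0.25, 1.5]]
```

Because only A and B are trained, the base text encoder weights stay frozen, which is what makes attaching LoRA to the CLIP text encoder cheap.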
The problem is that we cannot really load LoraModel later for inference currently, as it's created in a somewhat hacky way here: https://github.com/huggingface/peft/blob/8358b2744555e8c18262f7befd7ef040527a6f0f/src/peft/tuners/lora.py#L90

Could we maybe move everything to the research_folder project instead of adding it to the "easy" LoRA example script?
makes sense to me
Maybe cc @williamberman here as well
will move this to
Fixes #2469.
Made a PR in transformers first; @sgugger suggested that LoRA is already supported in PEFT. This PR leans on PEFT to add LoRA support to the text encoder.
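To see why adding LoRA to the text encoder is attractive, a back-of-envelope parameter count helps: a rank-r adapter on a d_in x d_out linear layer trains only r * (d_in + d_out) parameters instead of d_in * d_out. The sketch below uses 768 as an illustrative hidden size (it happens to match the CLIP ViT-L/14 text encoder used by Stable Diffusion, but nothing here depends on the real model).

```python
# Hedged back-of-envelope comparison: trainable parameters of a LoRA adapter
# versus full fine-tuning of one linear layer. Dimensions are illustrative.

def full_params(d_in, d_out):
    """Parameters trained when fine-tuning the whole weight matrix."""
    return d_in * d_out

def lora_params(d_in, d_out, r):
    """Parameters in a rank-r LoRA adapter: A is r x d_in, B is d_out x r."""
    return r * d_in + d_out * r

d = 768   # illustrative hidden size
r = 4     # a common small LoRA rank
print(full_params(d, d))     # 589824
print(lora_params(d, d, r))  # 6144
```

At rank 4 the adapter is roughly 1% of the layer's parameters, which is why training LoRA on the text encoder alongside the UNet adds little memory or compute.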