[LoRA] feat: support non-diffusers lumina2 LoRAs. #10909
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
For https://github.com/huggingface/diffusers/actions/runs/13544909753/job/37854074489?pr=10909, I have opened #10911. For the 2nd and 3rd failures, I pinged @DN6 internally. I don't think they are triggered by the changes in this PR. The fourth one seems Hub-related. Any objections to going ahead with merging?
Awesome, thank you @sayakpaul. Looking forward to getting it merged.
@DN6 the failing tests seem to be unrelated to this PR?
```python
# conversion.
non_diffusers = any(k.startswith("diffusion_model.") for k in state_dict)
if non_diffusers:
    state_dict = _convert_non_diffusers_lumina2_lora_to_diffusers(state_dict)
```
Is this prefix specific to Lumina? Should we always just remove it?
It is not specific to Lumina2 but to external trainer libraries. In all past iterations where we have supported non-diffusers LoRA checkpoints, we have removed it during conversion because it's not in the diffusers-compatible format.
We are not removing the prefix on its own. We are using it to detect whether the state dict is non-diffusers. If so, we convert the state dict.
This is how the rest of the non-diffusers checkpoints across different models have been supported in diffusers.
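To make the detect-then-convert pattern discussed above concrete, here is a minimal sketch. The real `_convert_non_diffusers_lumina2_lora_to_diffusers` remaps many layer names; this illustrative version only swaps the external `diffusion_model.` prefix for a diffusers-style `transformer.` prefix, which is an assumption made purely for illustration, not the actual mapping.

```python
def convert_if_non_diffusers(state_dict):
    """Sketch of the detection-and-conversion flow (not the real converter)."""
    # Detect the non-diffusers layout via the trainer-library prefix.
    non_diffusers = any(k.startswith("diffusion_model.") for k in state_dict)
    if not non_diffusers:
        # Already in the diffusers-compatible format; nothing to do.
        return state_dict
    # Convert: replace the external prefix with a diffusers-style one
    # ("transformer." is assumed here for illustration only).
    return {
        k.replace("diffusion_model.", "transformer.", 1): v
        for k, v in state_dict.items()
    }

# Example with a dummy tensor value:
sd = {"diffusion_model.layers.0.attn.lora_A.weight": [0.0]}
converted = convert_if_non_diffusers(sd)
```

The point is that the prefix is never silently stripped in isolation; it is the signal that triggers a full state-dict conversion, and state dicts already in the diffusers format pass through untouched.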
What does this PR do?
Fixes:
#10866
Cc: @nitinmukesh
Results (under the same seed)
Code