feat(backend): add support for xlabs Flux LoRA format #8686
Merged
lstein merged 5 commits into invoke-ai:main on Dec 24, 2025
Conversation
Add support for loading Flux LoRA models in the xlabs format, which uses
keys like `double_blocks.X.processor.{qkv|proj}_lora{1|2}.{down|up}.weight`.
The xlabs format maps:
- lora1 -> img_attn (image attention stream)
- lora2 -> txt_attn (text attention stream)
- qkv -> query/key/value projection
- proj -> output projection
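The mapping above can be sketched as a key-rewriting function. This is a minimal illustration only: the real conversion lives in `flux_xlabs_lora_conversion_utils.py`, and the output key format shown here is an assumption for demonstration, not taken from the PR.

```python
import re

# Illustrative pattern for the xlabs key structure described above.
_XLABS_KEY_RE = re.compile(
    r"double_blocks\.(\d+)\.processor\.(qkv|proj)_lora(1|2)\.(down|up)\.weight"
)

def convert_xlabs_key(key: str) -> str:
    """Map an xlabs LoRA key to a double-block attention key (assumed target format)."""
    m = _XLABS_KEY_RE.fullmatch(key)
    if m is None:
        raise ValueError(f"Not an xlabs LoRA key: {key}")
    block, proj, stream, direction = m.groups()
    # lora1 targets the image attention stream, lora2 the text stream.
    attn = "img_attn" if stream == "1" else "txt_attn"
    # qkv is the fused query/key/value projection; proj is the output projection.
    return f"double_blocks.{block}.{attn}.{proj}.lora_{direction}.weight"
```

For example, `double_blocks.0.processor.qkv_lora1.down.weight` would map to the image-stream qkv down-projection of block 0 under this hypothetical target naming.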
Changes:
- Add FluxLoRAFormat.XLabs enum value
- Add flux_xlabs_lora_conversion_utils.py with detection and conversion
- Update formats.py to detect xlabs format
- Update lora.py loader to handle xlabs format
- Update model probe to accept recognized Flux LoRA formats
- Add unit tests for xlabs format detection and conversion
lstein (Collaborator) approved these changes on Dec 24, 2025, leaving this comment:
I tested four different XLabs-format Flux LoRAs downloaded from the HuggingFace XLabs AI site. All were correctly recognized as Flux LoRAs and had the expected influence on generated images. In addition, I compared an XLabs format LoRA (disney_lora) to one that XLabs preconverted to standard format (disney_lora_comfy_converted), and both generated exactly the same image, confirming that the PR's conversion code is working properly.
Summary
Add support for loading Flux LoRA models in the xlabs format.
The xlabs format uses a different key structure than other Flux LoRA formats:
`double_blocks.X.processor.{qkv|proj}_lora{1|2}.{down|up}.weight`
Where:
- lora1 → image attention stream (img_attn)
- lora2 → text attention stream (txt_attn)
- qkv → query/key/value projection
- proj → output projection
Changes
- Add `FluxLoRAFormat.XLabs` enum value to the format taxonomy
- Add `flux_xlabs_lora_conversion_utils.py` with format detection and conversion
- Update `formats.py` to include xlabs in the detection cascade
- Update the `lora.py` loader to handle the xlabs format
- Update `configs/lora.py` to accept recognized Flux LoRA formats (fixes installation of xlabs LoRAs)
Related Issues / Discussions
Adds support for xlabs-format Flux LoRAs which were previously rejected with "model does not match LyCORIS LoRA heuristics".
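The earlier rejection happened because the LyCORIS heuristics do not match xlabs keys. Format detection for a new state-dict layout like this can be sketched as checking every key against the xlabs pattern; the function and pattern names below are illustrative, and the all-keys-must-match heuristic is an assumption, not the PR's exact logic.

```python
import re

# Illustrative detection heuristic: a state dict is treated as xlabs-format
# only if it is non-empty and every key matches the xlabs key pattern.
_XLABS_PATTERN = re.compile(
    r"^double_blocks\.\d+\.processor\.(qkv|proj)_lora[12]\.(down|up)\.weight$"
)

def is_state_dict_xlabs_format(state_dict: dict) -> bool:
    """Return True if all keys look like xlabs-format Flux LoRA keys."""
    keys = list(state_dict.keys())
    return bool(keys) and all(_XLABS_PATTERN.match(k) for k in keys)
```

A strict all-keys check like this keeps the detection cascade unambiguous: a state dict that mixes xlabs keys with keys from another format falls through to the other detectors rather than being half-converted.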
Example LoRA using this format: Flux Realism LoRA
QA Instructions
- `flux-RealismLora.safetensors` (from XLabs-AI)
Merge Plan
Standard merge, no special considerations.
Checklist
- What's New copy (if doing a release after this PR)