[tests] cache non lora pipeline outputs. #12298
Conversation
This fixture will be executed once per test class and will populate
the cached_non_lora_outputs dictionary.
"""
for scheduler_cls in self.scheduler_classes:
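The caching pattern under discussion can be sketched as follows. This is an illustrative mock, not the actual diffusers test utilities: the class names, helper names, and string outputs are hypothetical stand-ins. The idea is to compute the non-LoRA baseline output once per scheduler class and reuse it across all tests in the class.

```python
# Hypothetical sketch of the per-class caching discussed in this PR.
# All names below are illustrative stand-ins, not the real diffusers API.
class LoRATestMixin:
    # stand-ins for the real scheduler classes (e.g. DDIMScheduler, LCMScheduler)
    scheduler_classes = ["DDIMScheduler", "LCMScheduler"]

    # class-level cache, populated once and shared by every test in the class
    cached_non_lora_outputs = {}

    calls = 0  # counts how often the expensive baseline actually runs

    @classmethod
    def _run_pipeline(cls, scheduler_cls):
        # stand-in for the expensive non-LoRA pipeline forward pass
        cls.calls += 1
        return f"output-for-{scheduler_cls}"

    @classmethod
    def get_non_lora_output(cls, scheduler_cls):
        # scheduler class name serves as the cache key
        if scheduler_cls not in cls.cached_non_lora_outputs:
            cls.cached_non_lora_outputs[scheduler_cls] = cls._run_pipeline(scheduler_cls)
        return cls.cached_non_lora_outputs[scheduler_cls]


# Each scheduler's baseline is computed once, then served from the cache.
for scheduler_cls in LoRATestMixin.scheduler_classes:
    first = LoRATestMixin.get_non_lora_output(scheduler_cls)
    second = LoRATestMixin.get_non_lora_output(scheduler_cls)
    assert first == second

# the expensive pipeline ran only once per scheduler
assert LoRATestMixin.calls == len(LoRATestMixin.scheduler_classes)
```

In the actual PR this would be a class-scoped fixture rather than an explicit helper, but the cache-hit behavior is the same.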
It seems like we mostly use a single scheduler for testing, with a few exceptions (CogVideoX).
The LoRA tests where the scheduler type matters are the LCM LoRA tests, and there the LCM scheduler appears to be set directly:
diffusers/tests/lora/test_lora_layers_sdxl.py, line 227 in 4067d6c:
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
So why not refactor this entirely to work with a single scheduler, and use the pipeline class name as the cache key?
I would prefer to keep the scope of this PR fixed to caching only.
For SD and SDXL, we need to test with both schedulers, which is why we default to:
Line 112 in 4345907:
scheduler_classes = [DDIMScheduler, LCMScheduler]
When utils.py was added, we didn't have many models with LoRA support, hence this was the default to test with. But that has changed. I don't think we have to test with both schedulers even for SD and SDXL, since we have:
diffusers/tests/lora/test_lora_layers_sdxl.py, line 223 in 4345907:
def test_sdxl_lcm_lora(self):
But two schedulers are also used for CogVideoX, as you mentioned. Do you propose having a separate test class for that, something like CogVideoXDPMSchedulerLoRATests?
I generally agree with this suggestion, but I would really prefer to do that in a separate PR.
For CogVideoX, does the choice of scheduler affect the LoRA features (loading, fusing, etc.) in any way? Doing it in a separate PR is fine.
It shouldn't. As I said, I generally agree with your suggestion.
What does this PR do?
As discussed.