Conversation

sayakpaul (Member):
What does this PR do?

As discussed.

@sayakpaul sayakpaul requested a review from DN6 September 8, 2025 06:16
Review comment on the following snippet:

        This fixture will be executed once per test class and will populate
        the cached_non_lora_outputs dictionary.
        """
        for scheduler_cls in self.scheduler_classes:
Collaborator:
It seems like we mostly use a single scheduler for testing, with a few exceptions (CogVideoX).

The LoRA tests where the scheduler type matters are the LCM LoRA tests, and in those the LCM scheduler is set directly:

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

So why not refactor this entirely to work with a single scheduler, and use the pipeline class name as the cache key?

sayakpaul (Member, author):

I would prefer to keep the scope of this PR limited to caching only.

sayakpaul (Member, author):

For SD and SDXL, we need to test with both schedulers, which is why we default to:

scheduler_classes = [DDIMScheduler, LCMScheduler]

When utils.py was added, we didn't have many models with LoRA support, hence this was the default to test with. But that has changed. I don't think we need to test both schedulers even for SD and SDXL, since we already have:

def test_sdxl_lcm_lora(self):

But two schedulers are also used for CogVideoX, as you mentioned. Do you propose having a separate test class for that, something like CogVideoXDPMSchedulerLoRATests?

I generally agree with this suggestion, but would really prefer to do it in a separate PR.
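The separate-test-class idea could be sketched as a subclass that overrides the scheduler list, leaving the base mixin on a single default scheduler. This is a hypothetical shape, not the actual test code; scheduler names are plain strings here so the sketch stays self-contained.

```python
# Hypothetical layout: the base LoRA test mixin pins one default scheduler,
# and a scheduler-specific subclass (name taken from the discussion above)
# overrides it for the extra CogVideoX scheduler.
class LoraTests:
    scheduler_classes = ["DDIMScheduler"]  # single default scheduler


class CogVideoXDPMSchedulerLoRATests(LoraTests):
    # Only this class exercises the second scheduler.
    scheduler_classes = ["CogVideoXDPMScheduler"]
```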

Collaborator:

For CogVideoX, does the choice of scheduler affect the LoRA features (loading, fusing, etc.) in any way? We can do it in a separate PR, that's fine.

sayakpaul (Member, author):

It shouldn't. As I said, I generally agree with your suggestion.

@sayakpaul sayakpaul requested a review from DN6 September 11, 2025 04:05