enable unidiffuser test cases on xpu #11444
Conversation
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
@a-r-r-o-w @DN6, pls help review, thx.
The failing cases seem unrelated to my changes.
```diff
  pipe.set_progress_bar_config(disable=None)
- pipe.enable_model_cpu_offload(device=torch_device)
+ pipe.enable_model_cpu_offload()
```
Was going to merge but just realized this was removed. The change looks good, but is there a particular reason for this to be removed in the XPU case? The enable_model_cpu_offload method correctly gets passed torch_device="xpu" here, no?
I changed it because the explicit torch_device argument to enable_model_cpu_offload was introduced in PR #9399 to try to make the test work on devices other than CUDA (XPU in that case). That alone doesn't work, though: diffusers' internal logic also calls enable_model_cpu_offload, which can't be fixed just by setting the device in application code. I fixed that issue in PR #11288, and that PR was merged.
So with the latest code, application/test code no longer needs to set the device explicitly, and I changed this test back to its original form. Another motivation for changing it back: this usage is the recommended way in the diffusers docs, so it aligns with the recommendation and tests the most commonly used pattern.
In short, in the current codebase both forms (with and without torch_device) work; I just changed it back to the way the original test code had it.
Hope that explains it, thx.
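For reference, a minimal sketch of the two call patterns discussed above; the device probe is illustrative and not taken from this PR (thu-ml/unidiffuser-v1 is the checkpoint used in the diffusers UniDiffuser docs):

```python
import torch
from diffusers import UniDiffuserPipeline

# Illustrative device probe; torch.xpu is only present on XPU-enabled builds.
device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cuda"

pipe = UniDiffuserPipeline.from_pretrained(
    "thu-ml/unidiffuser-v1", torch_dtype=torch.float16
)

# Recommended pattern: let diffusers pick the accelerator itself.
# After PR #11288 this resolves correctly on XPU as well as CUDA.
pipe.enable_model_cpu_offload()

# Explicit-device pattern introduced in PR #9399; still works, but is
# no longer necessary for non-CUDA accelerators.
# pipe.enable_model_cpu_offload(device=device)
```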
Thanks for the explanation. I've triggered the tests and will merge once they pass
@a-r-r-o-w, pls feel free to let me know if you need more clarification or have more comments, thx.
@a-r-r-o-w seems my rebase onto main interrupted your test run, sorry, pls help re-trigger.
w/ pytest -rA tests/pipelines/unidiffuser/test_unidiffuser.py, all cases passed except the 2 below; these 2 cases fail on both CUDA and XPU, tracked through issue #11443.