Fix SVD bug (shape of `time_context`) #7268
Conversation
@DN6 a gentle ping here.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@KimbingNg Sorry, but I'm not quite sure what this is solving? The tensor shapes do match? Are you referring to the ordering of the elements inside the tensor? |
@DN6 Yes, I am referring to the ordering issue. The data arrangements in this line and this line are inconsistent, which could cause data mismatch during the cross-attention operation in this line when |
|
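The ordering concern above can be illustrated with a minimal NumPy sketch (the tensor sizes here are made up for demonstration; the actual code uses einops patterns on torch tensors). Repeating the per-sample context with the batch axis outermost, `(b h w) n c`, keeps each sample's rows contiguous and aligned with the flattened hidden states, while batch-innermost repetition, `(h w b) n c`, interleaves samples:

```python
import numpy as np

# Hypothetical sizes: batch, height*width spatial positions, tokens, channels
b, hw, n, c = 2, 6, 4, 8
time_context = np.random.randn(b, n, c)

# (b h w) n c: batch axis outermost -- each sample's context occupies a
# contiguous block of hw rows, matching hidden states flattened as (b h w)
right = np.repeat(time_context, hw, axis=0)

# (h w b) n c: batch axis innermost -- samples are interleaved, so row i of
# the context no longer corresponds to row i of the hidden states
wrong = np.tile(time_context, (hw, 1, 1))

assert right.shape == wrong.shape == (b * hw, n, c)
# The two layouts diverge as soon as batch_size > 1 ...
assert not np.array_equal(wrong, right)
# ... but coincide when batch_size == 1, which is why the bug goes unnoticed
assert np.array_equal(
    np.repeat(time_context[:1], hw, axis=0),
    np.tile(time_context[:1], (hw, 1, 1)),
)
```

With `batch_size == 1` the two orderings are indistinguishable, which explains why the mismatch only surfaces for batched inference.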
Ah I see. Looks okay to merge to me. cc @patil-suraj for visibility. Could you run `make style && make quality`?
@DN6 I have run `make style && make quality`.
* Fix SVD bug (shape of `time_context`)
* Formatting code
* Formatting src/diffusers/models/transformers/transformer_temporal.py by `make style && make quality`

---------

Co-authored-by: kevinkhwu <kevinkhwu@tencent.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
What does this PR do?
Fixes issue
The data arrangements in this line and this line are inconsistent, which could cause a data mismatch during the cross-attention operation in this line when `batch_size > 1`. Thus, it should be `(b h w) n c` instead of `(h w b) n c`.

Before submitting
Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.