fix simple attention processor encoder hidden states dimension ordering#3014
Merged
williamberman merged 1 commit into huggingface:main on Apr 10, 2023
Conversation
The documentation is not available anymore as the PR was closed or merged.
patrickvonplaten approved these changes on Apr 9, 2023
Do we need to update any slow tests here or not really?
williamberman (Author): Nope, ran them manually to double check.
I accidentally flipped the sequence and hidden dimensions of the encoder hidden states in the text projection model for the unclip pipeline. This PR standardizes all attention processors to use (batch, seq_len, hidden_dim). The encoder_hidden_states.transpose(1, 2) in the added KV attention processor is extraneous. I ran a script against the hub and confirmed that the karlo pipelines are the only pipelines which use the simple attention blocks, so this is a safe change to make. I separately ran the unclip integration tests and confirmed it works 👍
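For context, here is a minimal sketch of the standardized dimension ordering described above. The function and parameter names are assumptions made for illustration, not the actual diffusers attention processor API: the point is simply that encoder hidden states arrive as (batch, seq_len, hidden_dim), so no transpose is needed before projecting and concatenating.

```python
import torch

# Hypothetical sketch of an added-KV style attention processor under the
# standardized (batch, seq_len, hidden_dim) convention.
def added_kv_attention_sketch(hidden_states, encoder_hidden_states,
                              to_k, to_v, to_k_added, to_v_added):
    # hidden_states:         (batch, seq_len, hidden_dim)
    # encoder_hidden_states: (batch, encoder_seq_len, hidden_dim)
    key = to_k(hidden_states)
    value = to_v(hidden_states)

    # Previously an extra encoder_hidden_states.transpose(1, 2) was applied
    # here; with the standardized ordering it is extraneous.
    encoder_key = to_k_added(encoder_hidden_states)
    encoder_value = to_v_added(encoder_hidden_states)

    # Concatenate the projected encoder states with the self-attention
    # keys/values along the sequence axis (dim=1).
    key = torch.cat([encoder_key, key], dim=1)
    value = torch.cat([encoder_value, value], dim=1)
    return key, value


# Toy usage with made-up shapes to check the resulting dimensions.
batch, seq_len, enc_seq_len, dim = 2, 16, 8, 64
to_k = torch.nn.Linear(dim, dim)
to_v = torch.nn.Linear(dim, dim)
to_k_added = torch.nn.Linear(dim, dim)
to_v_added = torch.nn.Linear(dim, dim)

key, value = added_kv_attention_sketch(
    torch.randn(batch, seq_len, dim),
    torch.randn(batch, enc_seq_len, dim),
    to_k, to_v, to_k_added, to_v_added,
)
assert key.shape == (batch, enc_seq_len + seq_len, dim)
```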