Possible Bug in DDIMScheduler when prediction_type = 'sample' #1486

@JianxinMa

Description

Describe the bug

https://github.com/huggingface/diffusers/blob/v0.9.0/src/diffusers/schedulers/scheduling_ddim.py#L303
https://github.com/huggingface/diffusers/blob/v0.9.0/src/diffusers/schedulers/scheduling_ddim.py#L278

It seems that the ``model_output'' at L303 should be the predicted epsilon. However, it is actually the predicted original sample when prediction_type == "sample".

A workaround is to pass `use_clipped_model_output=True`, which re-derives epsilon from the (clipped) predicted original sample. With `use_clipped_model_output=False`, the results are much worse, even when `clip_sample=False`.
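As a minimal sketch of the issue (not the actual diffusers code): in the DDIM step, the direction term needs epsilon, so when the model predicts x_0 directly, epsilon has to be recovered from it via the forward-process relation `x_t = sqrt(alpha_prod_t) * x_0 + sqrt(1 - alpha_prod_t) * eps`. The function below uses variable names modeled on `scheduling_ddim.py`, treats the alphas as plain scalars, and marks where v0.9.0 appears to use `model_output` (i.e. x_0) in place of epsilon.

```python
def ddim_step(model_output, sample, alpha_prod_t, alpha_prod_t_prev,
              prediction_type="epsilon", std_dev_t=0.0):
    """Simplified deterministic DDIM step (eta-related noise term omitted).

    Hypothetical illustration, not the library implementation.
    """
    beta_prod_t = 1.0 - alpha_prod_t

    if prediction_type == "epsilon":
        # Model predicts the noise; recover x_0 from it.
        pred_epsilon = model_output
        pred_original_sample = (
            sample - beta_prod_t ** 0.5 * pred_epsilon
        ) / alpha_prod_t ** 0.5
    elif prediction_type == "sample":
        # Model predicts x_0 directly. The v0.9.0 code seems to feed
        # model_output (= x_0) straight into the direction term below;
        # the fix is to recover epsilon from x_0 first:
        pred_original_sample = model_output
        pred_epsilon = (
            sample - alpha_prod_t ** 0.5 * pred_original_sample
        ) / beta_prod_t ** 0.5
    else:
        raise ValueError(f"unknown prediction_type: {prediction_type}")

    # "Direction pointing to x_t" — this term must use epsilon, not x_0.
    pred_sample_direction = (
        1.0 - alpha_prod_t_prev - std_dev_t ** 2
    ) ** 0.5 * pred_epsilon

    return alpha_prod_t_prev ** 0.5 * pred_original_sample + pred_sample_direction
```

With this correction, predicting epsilon and predicting the corresponding x_0 yield the same step, which is exactly the consistency that breaks when x_0 is substituted for epsilon.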

Reproduction

No response

Logs

No response

System Info

  • diffusers version: 0.9.0
  • Platform: macOS-12.6-x86_64-i386-64bit
  • Python version: 3.9.7
  • PyTorch version (GPU?): 1.10.0 (False)
  • Huggingface_hub version: 0.10.1
  • Transformers version: 4.23.1
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Labels: bug (Something isn't working), stale (Issues that haven't received updates)