
[Scheduler] fix: EDM schedulers when using the exp sigma schedule. #8385

Merged
yiyixuxu merged 8 commits into main from cosxl-fix on Jun 5, 2024
Conversation

@sayakpaul (Member)

What does this PR do?

Without this fix, the following would fail:

import torch
from huggingface_hub import hf_hub_download
from diffusers import StableDiffusionXLInstructPix2PixPipeline, EDMEulerScheduler
from diffusers.utils import load_image

edit_file = hf_hub_download(repo_id="stabilityai/cosxl", filename="cosxl_edit.safetensors")

pipe_edit = StableDiffusionXLInstructPix2PixPipeline.from_single_file(edit_file, num_in_channels=8, is_cosxl_edit=True)

pipe_edit.scheduler = EDMEulerScheduler(
    sigma_min=0.002, sigma_max=120.0, sigma_data=1.0, prediction_type="v_prediction", sigma_schedule="exponential"
)
pipe_edit.to("cuda")

prompt = "make it a cloudy day"
image = load_image("https://huggingface.co/datasets/multimodalart/genai-book-images/resolve/main/mountain.png")
pipe_edit(prompt=prompt, image=image).images[0]

@apolinario FYI.

@sayakpaul sayakpaul requested a review from yiyixuxu June 3, 2024 09:41
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

sigma_min = sigma_min or self.config.sigma_min
sigma_max = sigma_max or self.config.sigma_max
sigmas = torch.linspace(math.log(sigma_min), math.log(sigma_max), len(ramp)).exp().flip(0)
if not torch.is_tensor(ramp):
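For context, the exponential sigma schedule shown in the snippet above can be sketched as a self-contained function (the function name and defaults here are illustrative, not the exact diffusers implementation). Note that `ramp` is only used for its length, so the schedule itself works regardless of whether `ramp` is a NumPy array or a torch tensor:

```python
import math

import torch


def compute_exponential_sigmas(ramp, sigma_min=0.002, sigma_max=80.0):
    # Evenly spaced points in log-sigma space, exponentiated back to
    # sigma space, then flipped so sigmas run from sigma_max down to
    # sigma_min (the order the sampler steps through them).
    return torch.linspace(math.log(sigma_min), math.log(sigma_max), len(ramp)).exp().flip(0)


sigmas = compute_exponential_sigmas(torch.linspace(0, 1, 5), sigma_min=0.002, sigma_max=120.0)
# sigmas starts at ~120.0 and decreases monotonically to ~0.002
```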
Collaborator

we create the ramp variable here, and it is a NumPy array:

ramp = np.linspace(0, 1, self.num_inference_steps)

Member (Author)

Sorry, I didn't get your point.

We compute ramp using torch.linspace() here:

ramp = torch.linspace(0, 1, num_train_timesteps)

And then we compute it with np.linspace() here:

ramp = np.linspace(0, 1, self.num_inference_steps)

Collaborator

since we create ramp ourselves, we can decide what type it is, so there is no need for the if here :)

@sayakpaul sayakpaul requested a review from yiyixuxu June 4, 2024 06:56
@sayakpaul (Member, Author)

@yiyixuxu LMK if the recent changes work for you.

@yiyixuxu (Collaborator)

yiyixuxu commented Jun 4, 2024

the test still fails though

let's update it here to be a torch tensor and remove this line, so it is consistent between set_timesteps and __init__ (I think that's what is causing the error). Also, that way we don't need to convert it to NumPy and back to torch.
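The suggestion above can be sketched as follows (a minimal illustration of the consistency argument, with illustrative variable names, not the exact diffusers code):

```python
import torch

num_train_timesteps = 1000
num_inference_steps = 10

# __init__ already builds ramp as a torch tensor:
ramp_init = torch.linspace(0, 1, num_train_timesteps)

# The suggested fix: build ramp in set_timesteps the same way, instead
# of np.linspace, so downstream code can always assume a torch tensor
# and the `if not torch.is_tensor(ramp)` conversion can be dropped.
ramp_steps = torch.linspace(0, 1, num_inference_steps)

assert torch.is_tensor(ramp_init) and torch.is_tensor(ramp_steps)
```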

@yiyixuxu yiyixuxu merged commit 48207d6 into main Jun 5, 2024
@yiyixuxu yiyixuxu deleted the cosxl-fix branch June 5, 2024 05:31
sayakpaul added a commit that referenced this pull request Dec 23, 2024
…8385)

* fix: euledm when using the exp sigma schedule.

* fix-copies

* remove print.

* reduce friction

* yiyi's suggestioms
