Conversation
…ove it from configuration utils
The documentation is not available anymore as the PR was closed or merged.
class StableDiffusionSafetyChecker(PreTrainedModel):
    config_class = CLIPConfig

    _no_split_modules = ["CLIPEncoderLayer"]
Needed to allow using device_map="auto".
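The `_no_split_modules` hint tells accelerate which module classes must be kept whole on a single device when `device_map="auto"` computes a placement. Here is a toy sketch of that placement rule; `naive_device_map`, the module sizes, and the capacity are all made up for illustration and are not accelerate's actual algorithm:

```python
def naive_device_map(modules, no_split, capacity):
    """Assign each (name, size, kind) module to a numbered device.

    Modules whose kind is in `no_split` are never split: if one does not
    fit in the remaining space, it moves wholly to the next device.
    Splittable modules may spill over; the map records where they start.
    """
    device_map = {}
    device, free = 0, capacity
    for name, size, kind in modules:
        if kind in no_split and size > free:
            device, free = device + 1, capacity  # fresh device, no split
        device_map[name] = device
        free -= size
        while free < 0:  # splittable overflow spills onto later devices
            device, free = device + 1, capacity + free
    return device_map

modules = [
    ("embed", 2, "Embedding"),
    ("layer.0", 3, "CLIPEncoderLayer"),
    ("layer.1", 3, "CLIPEncoderLayer"),
    ("head", 1, "Linear"),
]
print(naive_device_map(modules, {"CLIPEncoderLayer"}, capacity=4))
# → {'embed': 0, 'layer.0': 1, 'layer.1': 2, 'head': 2}
```

Note how each `CLIPEncoderLayer` lands on one device even when it would have fit partially on the previous one; that is the guarantee the class attribute requests.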
This PR allows the following to be fully functional once these two are merged:
…sers into piEspositoMain
Looks like isort needs to be run to make the linter happy.
import numpy as np
import torch

import accelerate
Need to sort the imports to make the linter happy; running isort locally gives this order:
import os
import random
import tempfile
import tracemalloc
import unittest
import accelerate
import numpy as np
import PIL
import torch
import transformers
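The ordering above follows isort's grouping rule: standard-library imports first, third-party imports second, each group alphabetized case-insensitively (which is why `PIL` sorts between `numpy` and `torch`). A rough pure-Python sketch of that rule; the hardcoded `STDLIB` set and the `isort_like` helper are illustrative only, not isort's real classifier:

```python
# Standard-library modules that appear in this test file (hardcoded for
# the example; real isort detects these automatically).
STDLIB = {"os", "random", "tempfile", "tracemalloc", "unittest"}

def isort_like(modules):
    """Group modules stdlib-first, then third-party, each sorted
    case-insensitively, mimicking isort's default section order."""
    stdlib = sorted((m for m in modules if m in STDLIB), key=str.lower)
    third_party = sorted((m for m in modules if m not in STDLIB), key=str.lower)
    return stdlib + third_party

mods = ["torch", "os", "PIL", "unittest", "accelerate", "numpy",
        "tempfile", "random", "tracemalloc", "transformers"]
print(isort_like(mods))
# → ['os', 'random', 'tempfile', 'tracemalloc', 'unittest',
#    'accelerate', 'numpy', 'PIL', 'torch', 'transformers']
```

This reproduces exactly the order the review comment asks for.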
import accelerate
import PIL
import transformers
See the above comment for isort import grouping and ordering.
tests/test_pipelines.py (Outdated)
from diffusers.pipeline_utils import DiffusionPipeline
from diffusers.schedulers.scheduling_utils import SCHEDULER_CONFIG_NAME
from diffusers.utils import CONFIG_NAME, WEIGHTS_NAME, floats_tensor, load_image, slow, torch_device
from packaging import version
Might also need to check with isort; locally it makes changes to their original commit too (it turns the multi-item imports into a multiline tuple and sorts PIL to the top), so my isort config may differ from theirs.
Think we can merge here. @piEsposito, again very sorry about fiddling with your PR so much - ok if we merge this PR (you'll be a co-author of it)?
@patrickvonplaten no problem at all, things like that happen all the time. I really appreciate how you handled it and your kindness in keeping me as a co-author. IMO you can merge this.
* add accelerate to load models with smaller memory footprint
* remove low_cpu_mem_usage as it is redundant
* move accelerate init weights context to modelling utils
* add test to ensure results are the same when loading with accelerate
* add tests to ensure ram usage gets lower when using accelerate
* move accelerate logic to single snippet under modelling utils and remove it from configuration utils
* format code to pass quality check
* fix imports with isort
* add accelerate to test extra deps
* only import accelerate if device_map is set to auto
* move accelerate availability check to diffusers import utils
* format code
* add device map to pipeline abstraction
* lint it to pass PR quality check
* fix class check to use accelerate when using diffusers ModelMixin subclasses
* use low_cpu_mem_usage in transformers if device_map is not available
* NoModuleLayer
* comment out tests
* up
* uP
* finish
* Update src/diffusers/pipelines/stable_diffusion/safety_checker.py
* finish
* uP
* make style

Co-authored-by: Pi Esposito <piero.skywalker@gmail.com>
Co-Authored-By: Pi Esposito <piero.skywalker@gmail.com>
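The "only import accelerate if device_map is set to auto" commit in the log above can be sketched as a lazy, guarded import. This is a hedged illustration of the pattern, not the exact diffusers implementation: the `load_model` signature and the error message are made up, though `is_accelerate_available` matches the kind of helper the "move accelerate availability check to diffusers import utils" commit describes.

```python
import importlib.util

def is_accelerate_available():
    # Check for the package without importing it.
    return importlib.util.find_spec("accelerate") is not None

def load_model(device_map=None):
    """Hypothetical loader: accelerate is only imported on the
    device_map="auto" path, so it stays an optional dependency."""
    if device_map == "auto":
        if not is_accelerate_available():
            raise ImportError(
                'device_map="auto" requires `pip install accelerate`'
            )
        import accelerate  # imported only when actually needed
        # ... load weights under accelerate.init_empty_weights() ...
    # ... regular loading path for all other device_map values ...
    return device_map
```

Keeping the import inside the branch means users who never pass `device_map="auto"` pay no import cost and need not install accelerate at all.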