this post was submitted on 23 Jul 2023

Stable Diffusion

Welcome to the Stable Diffusion community, dedicated to the exploration and discussion of the open source deep learning model known as Stable Diffusion.

Introduced in 2022, Stable Diffusion uses a latent diffusion model to generate detailed images based on text descriptions and can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by text prompts. The model was developed by the startup Stability AI, in collaboration with a number of academic researchers and non-profit organizations, marking a significant shift from previous proprietary models that were accessible only via cloud services.

I was curious, do you run Stable Diffusion locally? On someone else's server? What kind of computer do you need to run SD locally?

[–] [email protected] 5 points 1 year ago (1 children)

I run it locally. I prefer having the most control I can over the install, what extensions I want to use, etc.

The most important thing for running it, in my opinion, is VRAM: the more the better, as much as you can get.

[–] [email protected] 2 points 1 year ago (2 children)

I run locally too. I have a 10 GB 3080.

I haven't had VRAM issues; could you elaborate on your statement?

I know with local LLaMA I have been limited to 13B models.

[–] [email protected] 2 points 1 year ago

Stable Diffusion loves VRAM. The larger and more complex the images you're trying to produce, the more it'll eat.

My line of thinking is that if you have a slower GPU it'll generate slower, sure, but if you run out of VRAM it'll straight up fail and shout at you.

I'm not an expert in this field though, so grain of salt, YMMV, all that.
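A rough way to see why image size hits VRAM so hard: the UNet's self-attention layers operate on latent tokens (the image downscaled by 8 in each dimension for SD 1.x), and naive attention memory grows with the square of the token count. A back-of-envelope sketch (the 8x downscale is standard for SD 1.x; exact memory depends heavily on the implementation):

```python
# Back-of-envelope: naive self-attention memory vs. output resolution.
# SD 1.x works in a latent space downscaled 8x per dimension,
# so an HxW image becomes (H/8)x(W/8) latent "tokens".

def latent_tokens(width: int, height: int, downscale: int = 8) -> int:
    """Number of latent positions the UNet's attention operates over."""
    return (width // downscale) * (height // downscale)

base = latent_tokens(512, 512)    # 4096 tokens
big = latent_tokens(1024, 768)    # 12288 tokens

# A naive attention matrix is tokens x tokens, so memory scales quadratically.
ratio = (big / base) ** 2
print(f"{big / base:.0f}x more tokens -> ~{ratio:.0f}x larger attention matrices")
```

Memory-efficient attention implementations soften this considerably, but the trend (bigger canvas, much more VRAM) is why generation fails outright rather than just slowing down.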

[–] [email protected] 1 points 1 year ago

> I know on local llama I have been limited to 13b models

You can run llama.cpp on the CPU at reasonable speeds, making full use of normal RAM, which lets you run much larger models.
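The arithmetic behind that: model weights dominate memory, at roughly parameter count times bytes per weight. A hedged sketch (real loaders like llama.cpp add some overhead for context and buffers on top of this):

```python
# Rough memory footprint of LLM weights alone (loader overhead not included).

def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of model weights in gigabytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"13B @ fp16 : ~{weight_gb(13, 16):.1f} GB")   # ~26.0 GB
print(f"13B @ 4-bit: ~{weight_gb(13, 4):.1f} GB")    # ~6.5 GB
print(f"33B @ 4-bit: ~{weight_gb(33, 4):.1f} GB")    # ~16.5 GB
```

A 4-bit 33B model fits in 32 GB of system RAM but not in a 10 GB GPU, which is why CPU inference opens up larger models than your VRAM alone would allow.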

As for 10 GB in SD, I run out of VRAM quite regularly when overdoing it; e.g. 1024x768 with multiple ControlNets and some other stuff is pretty much guaranteed to overflow it, so I have to reduce the resolution when making use of ControlNet. Dreambooth training didn't work at all for me due to lack of VRAM (it might be possible to work around, but at least the defaults weren't usable).

10GB is still very much usable with SD, but one has to be aware of the limitations. The new SDXL will also increase the VRAM requirements a good bit.
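To put a rough number on that last point: at fp16, weights take about 2 bytes per parameter, and SDXL's UNet is much larger than SD 1.5's (around 2.6B vs. around 0.86B parameters are the commonly cited figures; treat both as approximations):

```python
# Approximate fp16 weight footprint of the UNet alone
# (activations, text encoders, and the VAE come on top of this).

BYTES_FP16 = 2

def unet_weight_gb(params: float) -> float:
    """fp16 weight size in gigabytes for a given parameter count."""
    return params * BYTES_FP16 / 1e9

sd15_params = 0.86e9   # commonly cited for the SD 1.5 UNet
sdxl_params = 2.6e9    # commonly cited for the SDXL base UNet

print(f"SD 1.5 UNet: ~{unet_weight_gb(sd15_params):.1f} GB")
print(f"SDXL UNet  : ~{unet_weight_gb(sdxl_params):.1f} GB")
```

The roughly 3x jump in weight size alone, before any activations, is why SDXL leans harder on VRAM than SD 1.5 does.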