this post was submitted on 03 Jul 2023

Stable Diffusion


Discuss matters related to our favourite AI Art generation technology


I have been toying around with Stable Diffusion for some time now and have been able to get great images out of it.

However, as I dive deeper, I want images that match as closely as possible to what I imagine, and I'm kinda struggling to get there.

For now, I work with ControlNet and inpainting, which help a lot, but I have yet to produce images I'm really satisfied with.

What's your workflow when composing specific images? Do you complement it with Photoshop (or something similar)?
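For reference, the ControlNet + inpainting step mentioned above can also be scripted outside the webui. Below is a minimal sketch using the Hugging Face diffusers library; the model IDs, file paths, prompt, and parameter values are illustrative assumptions, not the exact setup described in this post.

```python
# Minimal sketch: ControlNet-guided inpainting with diffusers.
# Model IDs, file names, and the prompt are example assumptions.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

# A Canny-edge ControlNet keeps the repainted region aligned with existing edges.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example base model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("render.png").convert("RGB")          # current image
mask_image = Image.open("mask.png").convert("RGB")            # white = area to repaint
control_image = Image.open("canny_edges.png").convert("RGB")  # precomputed edge map

result = pipe(
    prompt="a red leather armchair, soft window light",
    image=init_image,
    mask_image=mask_image,
    control_image=control_image,
    num_inference_steps=30,
    controlnet_conditioning_scale=0.8,  # how strongly the edges constrain the result
).images[0]
result.save("inpainted.png")
```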

top 2 comments
[email protected] 2 points 1 year ago

You might find this useful; it allows for regional prompting: https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111

The main thing I do is just lots of inpainting passes. I sketch things out in another program first if I'm adding or removing something big.
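The same "many small inpainting passes" idea can be scripted too. Here is a rough sketch with the diffusers inpainting pipeline; the mask files, prompts, and model ID are hypothetical examples rather than this commenter's actual workflow.

```python
# Rough sketch of iterative inpainting passes (assumed diffusers setup;
# masks, prompts, and model ID are hypothetical examples).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("sketch_composite.png").convert("RGB").resize((512, 512))

# One (mask, prompt) pair per region to rework; each pass only repaints the
# white area of its mask and leaves the rest of the image untouched.
passes = [
    ("mask_face.png", "detailed portrait face, sharp focus"),
    ("mask_background.png", "misty pine forest at dawn"),
]

for mask_path, prompt in passes:
    mask = Image.open(mask_path).convert("RGB").resize((512, 512))
    image = pipe(
        prompt=prompt,
        image=image,
        mask_image=mask,
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]

image.save("final.png")
```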

[email protected] 1 point 1 year ago

For composition I use the Semantic Segmentation ControlNet: sketch loosely, then refine with inpainting or more ControlNet passes. Of course, I also use GIMP or any other image editor to fine-tune the image or to "force the model's hand" a little during inpainting.
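As a rough illustration of segmentation-conditioned composition, here is a diffusers-based sketch; the painted segmentation map, prompt, and model IDs are assumptions, not necessarily this commenter's exact setup.

```python
# Sketch: composing an image from a hand-painted segmentation map with the
# ADE20K segmentation ControlNet (assumed diffusers setup; paths are examples).
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example base model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A loosely painted map using ADE20K palette colours (sky, tree, building, ...)
# defines where each element should appear in the final image.
seg_map = Image.open("layout_seg.png").convert("RGB")

image = pipe(
    prompt="a stone cottage under a stormy sky, oil painting",
    image=seg_map,  # the conditioning image for ControlNet
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("composed.png")
```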