this post was submitted on 11 Jul 2023
26 points (100.0% liked)
Stable Diffusion
1487 readers
1 user here now
Welcome to the Stable Diffusion community, dedicated to the exploration and discussion of the open source deep learning model known as Stable Diffusion.
Introduced in 2022, Stable Diffusion uses a latent diffusion model to generate detailed images based on text descriptions and can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by text prompts. The model was developed by the startup Stability AI, in collaboration with a number of academic researchers and non-profit organizations, marking a significant shift from previous proprietary models that were accessible only via cloud services.
founded 1 year ago
you are viewing a single comment's thread
How much detail did you put into the prompt? I had a play around with simple (one sentence) prompts and the results looked impressive. The prompt database was really helpful too.
I think the most important "trick" was to loop back the refiner a couple of times. The refiner can both remove and add details, or reinforce a particular art style. Piping the latent output into another KSampler and repeating this 2-3 times would (for some prompts) consistently and greatly improve the images.
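A minimal sketch of the looping idea, assuming a `refine` callable that stands in for one refiner pass (the real setup chains ComfyUI KSampler nodes on the latent; all names here are illustrative, not an actual ComfyUI API):

```python
# Conceptual sketch: the latent output of one sampling pass is fed
# back into another refiner-style pass 2-3 times, so each pass can
# add or remove detail. `refine` is a stand-in for a real refiner
# step (e.g. a KSampler node running the SDXL refiner model).

def loop_refiner(latent, refine, iterations=3):
    """Repeatedly apply a refiner pass to a latent, mimicking
    chained KSampler nodes in a ComfyUI graph."""
    for _ in range(iterations):
        latent = refine(latent)
    return latent

# Toy usage: a dummy "refiner" that just tags the latent so the
# chaining is visible; a real refiner would denoise it further.
result = loop_refiner("latent", lambda x: x + "+refined", iterations=3)
print(result)  # latent+refined+refined+refined
```

The point is simply that the loop operates on the latent, not on a decoded image, so no quality is lost re-encoding between passes.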
I don't know how detailed other people's prompts are, but this one has about 20 or so descriptive and weighted terms. It is very consistent in quality and visual aesthetic, yet creative in the creature design. I'm absolutely amazed by SDXL.
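For anyone unfamiliar with weighted terms: in ComfyUI (and A1111-style) prompts, `(term:weight)` scales how strongly a term influences the image. The fragment below is a hypothetical illustration of that style only, not the actual prompt used in this thread:

```
(ancient forest creature:1.3), (bioluminescent markings:1.2),
detailed scales, (volumetric lighting:1.1), painterly style
```

Weights around 1.1-1.3 nudge a term's influence up; values below 1.0 tone it down.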
Example of repeated iterations with the refiner:
it looks like it gets a 3rd / 4th leg
Indeed. I usually mix down multiple iterations manually and pick the features I like.