I found this post on the Comfy GitHub about weights and such: https://github.com/comfyanonymous/ComfyUI/discussions/521
So it seems like I need to use AdvancedClipEncode to keep the same behavior as A1111. Will try later today.
After using this module and "CLIP Set Last Layer" (i.e. Clip skip) I was able to generate an image nearly identical to A1111's. I think "proper" interpretation of the prompts has helped immensely (since most people share their A1111 prompts). Sad to say, though, there are a few prompts I use that just look better with the GPU seed, and AFAIK I can't turn that on in Comfy.
I'm thinking one day I'll do everything in Comfy, writing nodes looks really fun, but for now I'll probably straddle the two.
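For anyone else matching settings between the two: as I understand it (not verified against either codebase), A1111's positive "Clip skip" value maps to the negative layer index that ComfyUI's "CLIP Set Last Layer" node expects, so a quick sketch of the conversion would be:

```python
def a1111_clip_skip_to_comfy(clip_skip: int) -> int:
    """Map A1111's "Clip skip" (counted from 1) to the negative
    stop-at-layer index used by ComfyUI's "CLIP Set Last Layer".
    Clip skip 1 = use the last layer (-1); 2 = stop one earlier (-2)."""
    return -clip_skip

print(a1111_clip_skip_to_comfy(2))  # -2, the common setting for SD1.x anime models
```

So if a shared A1111 prompt says "Clip skip: 2", setting the node to -2 should be the equivalent.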
Regarding different outputs for the same seed: have you changed the seed source to CPU in A1111? The noise you get that way is consistent across different hardware vendors, and different from the GPU-sourced noise.
I used A1111 just now to check and got very similar results to ComfyUI that way.
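If it helps to see why the CPU source is reproducible: a CPU-side torch generator seeded the same way produces bit-identical noise no matter what GPU you have, since the RNG never touches the GPU (a minimal sketch, assuming the same torch version on both machines):

```python
import torch

# Latent shape for a 512x512 SD1.x image (illustrative, not tied to either UI)
shape = (1, 4, 64, 64)

# Two CPU generators with the same seed yield bit-identical noise,
# regardless of hardware vendor, because sampling runs entirely on the CPU.
g1 = torch.Generator(device="cpu").manual_seed(42)
g2 = torch.Generator(device="cpu").manual_seed(42)
noise_a = torch.randn(shape, generator=g1)
noise_b = torch.randn(shape, generator=g2)
print(torch.equal(noise_a, noise_b))  # True
```

GPU-sourced noise, by contrast, can differ between vendors and even driver versions, which is why the same seed gives different images.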
I tried setting A1111 to CPU/GPU/NV generator sources and none were close to the Comfy outputs. You have a pretty vanilla A1111 install? Maybe I've tweaked too many things that I've forgotten about.
I am also using Python 3.11, which isn't officially supported by A1111, so maybe that could cause issues? I've never seen any problems with the newer version, though, and I'm using the recommended library versions.
Pretty much vanilla: only the --xformers and --medvram arguments, plus the CPU seed. In that case, someone with more A1111 experience should weigh in; mine is limited, since I mainly use Comfy nowadays and am probably not up to date with A1111 development.