Hey people of Perchance, and whoever developed this generator,
I know people keep saying, “The new model is better, just move on,” but I need to say something clearly and honestly: I loved the old model.
The old model was consistent.
If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt. When I used things like double brackets ((like this)), the model respected my input.
And when I asked for 200 images, the results looked like the same character across the whole batch. It was amazing for making characters, building stories, and exploring different poses or angles. The style was consistent. That mattered to me. That was freedom.
Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.
I get that the new model might be more advanced technically — smoother lines, better faces, fewer mistakes. But better in one way doesn’t mean better for everyone. Especially not for those of us who care about creative control and character accuracy. Sometimes the older tool fits the job better.
That’s why I’m asking for one thing, and I know I’m not alone here:
Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.
I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.
This isn’t about resisting change. This is about preserving what worked and giving users a real choice. You made a powerful tool. Let us keep using it the way we loved.
Thanks for reading this. I say it with full respect. Please bring the old model back — or at least give us a way to use it again.
please
Well, that double-bracket emphasis is Stable Diffusion (SD) syntax, while the new model is Flux. Flux requires different prompting; from what people have tested, it doesn't accept the same syntax. Some have had success reinforcing desired aspects with more adjectives, or even repeating specific parts of the prompt, as in the example below.
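For example (just an illustration of what people have reported works, not official guidance), an old SD-style prompt such as:
"a guy in a ((blue jumper)), red jeans, and purple hair"
might be rephrased for Flux as plain descriptive sentences, repeating the part that keeps getting lost:
"A guy wearing a bright blue jumper and red jeans. His hair is purple. The blue jumper is clearly visible."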
As I explained in another thread, you can use the seed system to preserve some details of the image while changing others: https://lemmy.world/post/30084425/17214873
With a fixed seed, notice that the pose and general details remain. In those examples, one of the results had glasses on while others were clean-shaven, but the prompt wasn't very descriptive about the face.
If I keep the same seed but change a detail in the prompt, it preserves a lot of what was there before:
Even then, the result will try to match what you describe. You can be as detailed as you want with the face; on that thread I showed that you can still get similar faces if you describe them.
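If you ever run a Flux model yourself rather than through the site, the same fixed-seed trick looks roughly like the sketch below. This is only a minimal illustration using the open-source diffusers library and the public FLUX.1-schnell checkpoint; Perchance doesn't publish its backend, so the model name, step count, and settings here are assumptions, not how the generator actually works internally.

```python
# Minimal sketch only: assumes the diffusers library and the public
# FLUX.1-schnell checkpoint, not Perchance's actual (non-public) backend.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

seed = 1234  # keep this fixed so every image starts from the same noise
base = "A guy in a bright blue jumper and red jeans, with purple hair, full body"

for i, detail in enumerate(["", ", wearing dark sunglasses"]):
    # Re-create the generator each time so the starting noise is identical.
    generator = torch.Generator("cpu").manual_seed(seed)
    image = pipe(
        base + detail,
        generator=generator,
        num_inference_steps=4,  # schnell is tuned for a handful of steps
        guidance_scale=0.0,     # schnell does not use classifier-free guidance
    ).images[0]
    image.save(f"character_{i}.png")
```

Keeping the seed constant while editing only the trailing detail is what preserves the pose and overall composition between images.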
Keeping two models hosted at once would very likely involve additional costs, so while it might be possible, it seems unlikely for that reason.
On the Discord server, I've seen people create all of these. A lot of it is a matter of prompting. People on the Discord are very helpful and quite active in experimenting with styles, seeds, and prompts, and I've had a lot of help getting good results there.
With the new model, everyone started on the same footing. We don't yet know the best prompting practices, but people are experimenting, and many have managed to recreate images they made before.
I understand what you're saying, but that’s not the point. Let me explain properly.
Yes, if I write something like “a guy in a blue jumper, red jeans, and purple hair wearing dark sunglasses,” I get that the new model will try to follow that. That’s not the issue.
The issue isn’t about what the prompt says — it’s about how the characters come out.
With the old model, when I created characters using the same prompt across multiple generations, I got images that looked like the same character every time — same face, same style, same feeling, with only small variations. That’s what I loved. That consistency mattered. I could trust it. It made character creation easy, fun, and powerful for storytelling.
Now with the new model, I use the exact same prompts, same settings, and even the same seed structure — and yet the results look completely different. The style shifts, the faces change, and it feels like I’m getting a new person each time. Even the framing is inconsistent — for example, the old model would show the full torso, while the new one sometimes crops too close, like it’s focusing only on the top half.
Sure, I’ll admit: the new model is prettier. It’s technically cleaner, with sharper rendering and fewer artifacts. But that doesn’t mean it’s better for everyone. For me, the old model’s simplicity and reliability made it far more useful.
I’m not saying throw out the new model. I’m saying: give us the option to choose. Let those of us who found value in the old system keep using what worked for us.
This isn’t about resisting change. It’s about not losing something that genuinely helped creative people get consistent, dependable results — especially for things like comics, visual novels, or animation projects.
Please don’t dismiss this as just a prompting issue. It’s a model behavior issue. And I really hope the devs take this feedback seriously.
Please demonstrate this. What prompts and seeds are you using here? What results were you expecting, and what results did you get? I posted examples previously.
I answered this before. To make this request more likely to be granted, you need to show that what you got before, or what you want now, isn't reasonably achievable with the new model.
For this to be taken as a model behavior issue, you need to provide information: what are the prompts, seeds, and results you are getting? You are only talking in abstract terms. Please provide some actual examples.
Let me explain where I’m coming from.
When it comes to the old model, I liked the anime style it gave. Not just the general "anime" look — I mean that clean, consistent, almost retro-modern feel it had. Yeah, the new model still looks anime, but it’s way more detailed and painterly. That’s not bad — it’s actually gorgeous — but it doesn’t fit the style I’ve been using for a long time to make my characters.
Here are the two big problems:
1. The style shift. Imagine being partway through a project in one anime style and someone says, “Let’s switch to Fate/Zero style now.” Sure, both are anime, but they are totally different in tone, shading, energy, and presentation. You just don’t do that mid-project. That’s what the shift to the new model feels like: jarring.
2. Losing the variant-picking workflow. With the old model I could scroll through a batch thinking, “Hmm... not quite... not quite... ooh, that one’s perfect.” Each one felt like a variant of the same person, and that made it easy and fun to find the right frame, pose, or mood.
But with the new model? Forget it. Every image feels like a completely different character. It’s like I’m suddenly in a different anime entirely. That makes it impossible to build a scene, comic, or reference set like I used to.
So yeah — I’m not bashing the new model. It’s beautiful. But it’s like being forced to paint with oil when I just want to use clean inks. All I’m asking is: Give us the option to choose the model that fits the style we built everything around.
That’s all.
Took me a bit to reply to this. Anyway, if you're not willing to show examples of what you're trying to achieve, there's nothing to see here. You are just being abstract, and that doesn't help prove to anybody that what you want isn't achievable with this model.
I have already shown you examples of how to use seeds to achieve consistency, and yet we still don't know anything about what you've tried. There isn't much to see here as constructive criticism if you're not providing examples of what you attempted.