Alright, I need to get something off my chest because I’ve been testing, experimenting, and re-trying the same thing over and over again, and no matter what I do, it just doesn’t hit like it used to.
Let me be clear from the start:
I used the exact same method to describe what I wanted.
Same wording, same structure, same prompts — nothing changed in how I approached it.
But the results I’m getting now are completely different. And not in a good way.
Let’s Talk About These Two Batches of Images
The two image batches I posted above were all meant to show the same character.
Same outfit: black tank top, tight jeans
Same setting: outdoors, in motion
Same identity: blonde woman, serious or determined expression
But that’s not what I got.
What I got were completely inconsistent renderings:
Faces that don’t match at all from one image to the next
Body types and proportions jumping all over the place
Lighting and tone shifting randomly
Styles flipping between semi-realistic and full-on plastic
It doesn’t feel like one woman doing different things.
It feels like ten different women who vaguely meet the same description.
And if you’re trying to build a character-based scene, this is a dealbreaker.
It’s not useful. It destroys continuity, and it makes actual storytelling almost impossible.
Now Look at This One Image That Worked
That one image I keep pointing out — the one I actually like — that’s how it’s supposed to be.
Same hair
Same style
Same expression tone
And most importantly, she feels like a real character, not just a generated output.
That image came from the older model.
The model that actually understood what I was trying to do.
It didn’t try to give me endless “fresh” variations.
It gave me a consistent look. It gave me her.
And This Isn’t Just About One Character
I’ve got more characters like her. Full sets.
Different faces, different personalities, different stories — but all generated using the same approach I used for that one character.
The old model handled them all just fine.
It let me build a cast of recognizable characters that stayed visually consistent across scenes.
I wasn’t making random image prompts.
I was building a story panel by panel, and I needed those characters to hold their identities from frame to frame.
The older model made that possible.
The current one does not.
The New Model Doesn’t Respect Identity
This new model gives me something like:
“Here’s a blonde woman. That’s close enough, right?”
No. That’s not close enough.
Because I’m not asking for random beauty renders.
I’m asking for a character.
The one I designed.
The one I wrote for.
The one I want to appear in multiple images in a consistent, believable way.
And the new system doesn’t treat characters as something to preserve — it treats every prompt like a brand new concept.
That completely breaks the flow for anyone trying to build a visual narrative.
Want an Example Everyone Understands? Look at Ben 10.
Look at Ben 10 — the original series. Sharp art, clear lines, a consistent design that made the characters feel real and memorable.
Then look at the new Ben 10 reboot. Same names, same basic characters — but the style is all over the place.
Watered down, simplified, cartoonish to the point of being unrecognizable.
That’s exactly what this new image model feels like.
It’s not that it’s completely broken — it’s that it lost the essence that made the old one powerful.
It forgot that consistency matters just as much as creativity.
What I Actually Want
I want the ability to:
Lock in a character’s look
Control how she acts and moves
Adjust her facial expression
Place her in a sequence of scenes
Keep everything in the same tone and style
I don’t want to re-roll twenty images and pray one of them looks close enough.
I want to build scenes, not hope for lucky pulls.
This isn’t about complexity — it’s about continuity.
Are There Slight Differences? Of Course.
Nothing’s perfect — I get that.
Even the old model had quirks.
But at least with it, I could generate 200 images in a session, scroll through them, and actually find what I wanted.
I'd go:
“…ummm nope… nah… mmmmm maybe… huh… oh that one… and… oh, I like that one… that one… ummm no…”
And that process worked — because they were all still in the same style, tone, and form.
There was cohesion.
Now? Everything’s scattered.
Side by side:
Two batches of failed generations
Five images that got it right (the singles)
Same prompt. Same intent. Same user.
This isn’t a user problem — it’s a generation model problem.
And if the new system can’t deliver character consistency anymore, then someone needs to step up and either bring back the old capability or build something better.
Because this isn’t a nitpick.
This is the difference between random images and real storytelling.
Some of us came here to tell stories — not settle for scrambled visual noise.
For the love of god, give us back the old model.
And just to be clear — it’s not just this one character.
I have other characters I created using the old model, and if I tried to describe them the exact same way now, I'd get the same broken, inconsistent results you see in those batches.
The old model helped me a lot with my storytelling.
It gave me characters I could build with — characters I could rely on to stay consistent.
So please, with all due respect:
Bring the old one back.
I don’t like this new model.
It’s lost what made the old one so effective.
Bring back the version that worked — the one that understood character design, visual continuity, and story-driven consistency.
Bring back the ((())).
PS: The one at the bottom is supposed to be at the top with the others.