14
submitted 2 weeks ago by [email protected] to c/[email protected]

Hey, people of Perchance, and whoever developed this generator,

I know people keep saying, “The new model is better, just move on,” but I need to say something clearly and honestly: I loved the old model.

The old model was consistent.

If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt. When I used things like double brackets ((like this)), the model respected my input.
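
For example, an emphasized version of that same character prompt looked something like this (illustrative only, using the ((double bracket)) weighting mentioned above):

```
a guy in a ((blue jumper)), ((red jeans)), ((purple hair))
```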

And when I asked for 200 images, the results looked like the same character across the whole batch. It was amazing for making characters, building stories, and exploring different poses or angles. The style was consistent. That mattered to me. That was freedom.

Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.

I get that the new model might be more advanced technically — smoother lines, better faces, fewer mistakes. But better in one way doesn’t mean better for everyone. Especially not for those of us who care about creative control and character accuracy. Sometimes the older tool fits the job better.

That’s why I’m asking for one thing, and I know I’m not alone here:

Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.

I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.

This isn’t about resisting change. This is about preserving what worked and giving users a real choice. You made a powerful tool. Let us keep using it the way we loved.

Thanks for reading this. I say it with full respect. Please bring the old model back — or at least give us a way to use it again.

please

[-] [email protected] 2 points 1 week ago

Let me explain where I’m coming from.

When it comes to the old model, I liked the anime style it gave. Not just the general "anime" look — I mean that clean, consistent, almost retro-modern feel it had. Yeah, the new model still looks anime, but it’s way more detailed and painterly. That’s not bad — it’s actually gorgeous — but it doesn’t fit the style I’ve been using for a long time to make my characters.

Here are the two big problems:

  1. The new style doesn’t fit my flow. It’s like if you were animating a whole show in the Kill la Kill style and suddenly halfway through someone said,

“Let’s switch to Fate/Zero style now.” Sure, both are anime. But they are totally different in tone, shading, energy, and presentation. You just don’t do that mid-project. That’s what the shift to the new model feels like — jarring.

  2. The consistency is gone. With the old model, I could generate 200 images, and while they weren’t identical, they were consistent enough that I could go,

“Hmm... not quite... not quite... ooh, that one’s perfect.” Each one felt like a variant of the same person, and that made it easy and fun to find the right frame, pose, or mood.

But with the new model? Forget it. Every image feels like a completely different character. It’s like I’m suddenly in a different anime entirely. That makes it impossible to build a scene, comic, or reference set like I used to.

So yeah — I’m not bashing the new model. It’s beautiful. But it’s like being forced to paint with oil when I just want to use clean inks. All I’m asking is: Give us the option to choose the model that fits the style we built everything around.

That’s all.

[-] [email protected] 1 points 1 week ago

Took me a bit to reply to this. Anyway, if you're not willing to show examples of what you're trying to achieve, there's nothing to discuss. You're just being abstract, and that doesn't help prove to anybody that what you want isn't achievable on this model.

I have already shown you examples of how to use seeds to achieve consistency, and yet we still don't know anything about what you're trying. There isn't much constructive criticism to engage with if you don't provide examples of what you tried.
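
To restate the seed idea in rough code (a sketch only; `generateImage` here is a hypothetical stand-in for the actual plugin call, so check the text-to-image-plugin docs for the real option names):

```javascript
// Sketch: `generateImage` is a hypothetical helper, not a confirmed
// Perchance API. The point is the pattern, not the exact call.

const character = "blonde woman, black tank top, tight jeans, outdoors";
const seed = 1234; // any fixed number, reused for every render

// With the same prompt and the same seed the output is reproducible,
// so you vary exactly one thing (here, the action) at a time.
["running", "jumping", "looking back over her shoulder"].forEach(action => {
  generateImage({ prompt: `${character}, ${action}`, seed });
});
```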

[-] [email protected] 1 points 8 hours ago* (last edited 8 hours ago)

“I’m Not Asking for Magic — Just Consistency, Like Before”

Alright, I need to get something off my chest because I’ve been testing, experimenting, and re-trying the same thing over and over again, and no matter what I do, it just doesn’t hit like it used to.

Let me be clear from the start: I used the exact same method to describe what I wanted. Same wording, same structure, same prompts — nothing changed in how I approached it. But the results I’m getting now are completely different. And not in a good way.

Let’s Talk About These Two Batches of Images

The two image batches I posted above were all meant to show the same character. Same outfit: black tank top, tight jeans. Same setting: outdoors, in motion. Same identity: blonde woman, serious or determined expression.

But that’s not what I got.

What I got were completely inconsistent renderings:

  • Faces that don’t match at all from one image to the next
  • Body types and proportions jumping all over the place
  • Lighting and tone shifting randomly
  • Styles flipping between semi-realistic and full-on plastic

It doesn’t feel like one woman doing different things. It feels like ten different women who vaguely meet the same description.

If you're trying to build a character-based scene, this is a dealbreaker. It’s not useful. It destroys continuity, and it makes actual storytelling almost impossible.

Now Look at This One Image That Worked

That one image I keep pointing out — the one I actually like — that’s how it’s supposed to be.

Same hair. Same style. Same expression tone. And most importantly, she feels like a real character, not just a generated output.

That one image came from the older model. The model that actually understood what I was trying to do. It didn’t try to give me endless “fresh” variations. It gave me a consistent look. It gave me her.

And This Isn’t Just About One Character

I’ve got more characters like her. Full sets. Different faces, different personalities, different stories — but all generated using the same approach I used for that one character. The old model handled them all just fine. It let me build a cast of recognizable characters that stayed visually consistent across scenes.

I wasn’t making random image prompts. I was building a story panel by panel, and I needed those characters to hold their identities from frame to frame.

The older model made that possible. The current one does not.

The New Model Doesn’t Respect Identity

This new model gives me something like: “Here’s a blonde woman. That’s close enough, right?”

No. That’s not close enough.

Because I’m not asking for random beauty renders. I’m asking for a character. The one I designed. The one I wrote for. The one I want to appear in multiple images in a consistent, believable way.

And the new system doesn’t treat characters as something to preserve — it treats every prompt like a brand new concept. That completely breaks the flow for anyone trying to build a visual narrative.

Want an Example Everyone Understands? Look at Ben 10.

Look at Ben 10 — the original series. Sharp art, clear lines, a consistent design that made the characters feel real and memorable. Then look at the new Ben 10 reboot. Same names, same basic characters — but the style is all over the place. Watered down, simplified, cartoonish to the point of being unrecognizable.

That’s exactly what this new image model feels like. It’s not that it’s completely broken — it’s that it lost the essence that made the old one powerful. It forgot that consistency matters just as much as creativity.

What I Actually Want

I want the ability to:

  • Lock in a character’s look
  • Control how she acts and moves
  • Adjust her facial expression
  • Place her in a sequence of scenes
  • Keep everything in the same tone and style

I don’t want to re-roll twenty images and pray one of them looks close enough. I want to build scenes, not hope for lucky pulls. This isn’t about complexity — it’s about continuity.
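
In rough pseudocode, the whole workflow I'm after is just this (a sketch under the assumption of a hypothetical `generateImage` helper, not a real Perchance call):

```javascript
// Sketch only: `generateImage` is a hypothetical stand-in.
// The character sheet is defined once and reused verbatim, so only
// the scene-specific parts of the prompt ever change.

const heroine = {
  look: "blonde woman, black tank top, tight jeans", // locked appearance
  seed: 98765,                                       // locked seed
};

const scenes = [
  { setting: "city street at dusk", action: "running", expression: "determined" },
  { setting: "rooftop at night", action: "standing still", expression: "serious" },
];

for (const s of scenes) {
  generateImage({
    prompt: `${heroine.look}, ${s.expression} expression, ${s.action}, ${s.setting}`,
    seed: heroine.seed, // identical seed across the whole sequence
  });
}
```

That is the level of control I mean: nothing exotic, just a stable identity carried from frame to frame.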

Side by side.

Two batches of failed generations. One image that got it right. Same prompt. Same intent. Same user.

This isn’t a user problem — it’s a generation model problem. And if the new system can’t deliver character consistency anymore, then someone needs to step up and either bring back the old capability or build something better.

Because this isn’t a nitpick. This is the difference between random images and real storytelling.

Some of us came here to tell stories — not settle for scrambled visual noise.
