14 points (81.8% liked) · submitted 24 May 2025 (2 weeks ago) by [email protected] to c/[email protected]

Hey, people of Perchance, and whoever developed this generator,

I know people keep saying, “The new model is better, just move on,” but I need to say something clearly and honestly: I loved the old model.

The old model was consistent.

If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt. When I used things like double brackets ((like this)), the model respected my input.

And when I asked for 200 images, the results looked like the same character across the whole batch. It was amazing for making characters, building stories, and exploring different poses or angles. The style was consistent. That mattered to me. That was freedom.

Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.

I get that the new model might be more advanced technically — smoother lines, better faces, fewer mistakes. But better in one way doesn’t mean better for everyone. Especially not for those of us who care about creative control and character accuracy. Sometimes the older tool fits the job better.

That’s why I’m asking for one thing, and I know I’m not alone here:

Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.

I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.

This isn’t about resisting change. This is about preserving what worked and giving users a real choice. You made a powerful tool. Let us keep using it the way we loved.

Thanks for reading this. I say it with full respect. Please bring the old model back — or at least give us a way to use it again.

please

[-] [email protected] 3 points 2 weeks ago

> The old model was consistent.

> If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt.

[image: Prompt / Result]

> When I used things like double brackets ((like this)), the model respected my input.

Well, that was Stable Diffusion (SD) syntax, while the new model is Flux. It requires different prompting and doesn't accept the same syntax, from what people have tested. Some have had success reinforcing desired aspects with more adjectives, or even by repeating specific parts of the prompt.
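
To give a rough idea of the difference (these two prompts are only illustrative; I haven't benchmarked them on the generator), the old emphasis style versus the new wording style looks something like this:

Old (SD-style emphasis): a guy in a blue jumper, ((red jeans)), ((purple hair))

New (Flux-style wording): a guy in a blue jumper wearing bright red jeans, his hair is a vivid purple, the jeans are clearly red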

> Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.

As I explained in another thread, you can use the seed system to preserve some details of the image while changing others: https://lemmy.world/post/30084425/17214873

With a seed, notice that the pose and general details remain. One of them had glasses on, while others were clean shaven, but the prompt wasn't very descriptive about the face.

[image: Seed1]

If I keep the same seed, but change a detail in the prompt, it preserves a lot of what was there before:

a guy in a blue jumper, red jeans, and purple hair, he is wearing dark sunglasses (seed:::1067698885)

[image: Seed2]

Even then, the result will still try to match what you describe. You can be as detailed as you want with the face; in that thread I showed that you can still get similar faces if you describe them.
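
For example (the face details here are hypothetical, just tacked onto the same seed as above), a more descriptive version could look like:

a guy in a blue jumper, red jeans, and purple hair, sharp jawline, green eyes, clean shaven, short messy purple hair (seed:::1067698885)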

> Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.

Keeping two models hosted at once would very likely involve additional costs. While it might be possible, it seems unlikely for that reason.

> I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.

On the Discord server, I've seen people create all of these. A lot of it is a matter of prompting. People on the Discord are very helpful and quite active in experimenting with styles, seeds, and prompts, and I've had a lot of help getting good results there.

With the new model, everyone started on the same footing. We don't yet know the best prompting practices, but people are experimenting, and many have managed to recreate images they made before.

[-] [email protected] 5 points 2 weeks ago

I understand what you're saying, but that’s not the point. Let me explain properly.

Yes, if I write something like “a guy in a blue jumper, red jeans, and purple hair wearing dark sunglasses,” I get that the new model will try to follow that. That’s not the issue.

The issue isn’t about what the prompt says — it’s about how the characters come out.

With the old model, when I created characters using the same prompt across multiple generations, I got images that looked like the same character every time — same face, same style, same feeling, with only small variations. That’s what I loved. That consistency mattered. I could trust it. It made character creation easy, fun, and powerful for storytelling.

Now with the new model, I use the exact same prompts, same settings, and even the same seed structure — and yet the results look completely different. The style shifts, the faces change, and it feels like I’m getting a new person each time. Even the framing is inconsistent — for example, the old model would show the full torso, while the new one sometimes crops too close, like it’s focusing only on the top half.

Sure, I’ll admit: the new model is prettier. It’s technically cleaner, with sharper rendering and fewer artifacts. But that doesn’t mean it’s better for everyone. For me, the old model’s simplicity and reliability made it far more useful.

I’m not saying throw out the new model. I’m saying: give us the option to choose. Let those of us who found value in the old system keep using what worked for us.

This isn’t about resisting change. It’s about not losing something that genuinely helped creative people get consistent, dependable results — especially for things like comics, visual novels, or animation projects.

Please don’t dismiss this as just a prompting issue. It’s a model behavior issue. And I really hope the devs take this feedback seriously.

[-] [email protected] 3 points 2 weeks ago

> With the old model, when I created characters using the same prompt across multiple generations, I got images that looked like the same character every time — same face, same style, same feeling, with only small variations. That’s what I loved. That consistency mattered. I could trust it. It made character creation easy, fun, and powerful for storytelling.

> Now with the new model, I use the exact same prompts, same settings, and even the same seed structure — and yet the results look completely different. The style shifts, the faces change, and it feels like I’m getting a new person each time. Even the framing is inconsistent — for example, the old model would show the full torso, while the new one sometimes crops too close, like it’s focusing only on the top half.

Please demonstrate this. What prompts and seeds are you using? What results were you expecting, and what results did you get? I posted examples previously.

> I’m not saying throw out the new model. I’m saying: give us the option to choose. Let those of us who found value in the old system keep using what worked for us.

I answered this before. To make this request more likely to be considered, you need to show that what you got before, or what you want now, isn't reasonably achievable with the new model.

> Please don’t dismiss this as just a prompting issue. It’s a model behavior issue. And I really hope the devs take this feedback seriously.

For this to be taken as a model behavior issue, you need to provide information: which prompts and seeds you used, and what results you are getting. Right now you are only talking in abstract terms. Please provide some actual examples.
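
For instance, a useful report could look something like this (the details below are made up, purely to show the shape of it):

Prompt: a guy in a blue jumper, red jeans, and purple hair (seed:::1067698885)
Expected: the same face and full-torso framing as my old batches
Got: a different face each time, cropped at the chest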

[-] [email protected] 2 points 1 week ago

Let me explain where I’m coming from.

When it comes to the old model, I liked the anime style it gave. Not just the general "anime" look — I mean that clean, consistent, almost retro-modern feel it had. Yeah, the new model still looks anime, but it’s way more detailed and painterly. That’s not bad — it’s actually gorgeous — but it doesn’t fit the style I’ve been using for a long time to make my characters.

Here are the two big problems:

  1. The new style doesn’t fit my flow. It’s like if you were animating a whole show in the Kill la Kill style and suddenly halfway through someone said,

“Let’s switch to Fate/Zero style now.” Sure, both are anime. But they are totally different in tone, shading, energy, and presentation. You just don’t do that mid-project. That’s what the shift to the new model feels like — jarring.

  2. The consistency is gone. With the old model, I could generate 200 images, and while they weren’t identical, they were consistent enough that I could go,

“Hmm... not quite... not quite... ooh, that one’s perfect.” Each one felt like a variant of the same person, and that made it easy and fun to find the right frame, pose, or mood.

But with the new model? Forget it. Every image feels like a completely different character. It’s like I’m suddenly in a different anime entirely. That makes it impossible to build a scene, comic, or reference set like I used to.

So yeah — I’m not bashing the new model. It’s beautiful. But it’s like being forced to paint with oil when I just want to use clean inks. All I’m asking is: Give us the option to choose the model that fits the style we built everything around.

That’s all.

[-] [email protected] 1 point 1 week ago

Took me a bit to reply to this. Anyway, if you're not willing to show examples of what you're trying to achieve, there's nothing to see here. You are just being abstract, and that doesn't help prove to anybody that what you want is not achievable with this model.

I have already shown you examples of how to use seeds to achieve consistency, and yet we still don't know anything about what you're trying. There isn't much to work with as constructive criticism if you're not providing examples of what you tried.
