11
submitted 1 week ago by [email protected] to c/[email protected]

Alright, I need to get something off my chest because I’ve been testing, experimenting, and re-trying the same thing over and over again, and no matter what I do, it just doesn’t hit like it used to.

Let me be clear from the start: I used the exact same method to describe what I wanted. Same wording, same structure, same prompts — nothing changed in how I approached it. But the results I’m getting now are completely different. And not in a good way.

Let’s Talk About These Two Batches of Images

The two image batches I posted above were all meant to show the same character.

Same outfit: black tank top, tight jeans

Same setting: outdoors, in motion

Same identity: blonde woman, serious or determined expression

But that’s not what I got.

What I got were completely inconsistent renderings:

Faces that don’t match at all from one image to the next

Body types and proportions jumping all over the place

Lighting and tone shifting randomly

Styles flipping between semi-realistic and full-on plastic

It doesn’t feel like one woman doing different things. It feels like ten different women who vaguely meet the same description.

And if you’re trying to build a character-based scene, this is a dealbreaker. It’s not useful. It destroys continuity, and it makes actual storytelling almost impossible.

Now Look at This One Image That Worked

That one image I keep pointing out — the one I actually like — that’s how it’s supposed to be.

Same hair

Same style

Same expression tone

And most importantly, she feels like a real character, not just a generated output.

That image came from the older model. The model that actually understood what I was trying to do.

It didn’t try to give me endless “fresh” variations. It gave me a consistent look. It gave me her.

And This Isn’t Just About One Character

I’ve got more characters like her. Full sets. Different faces, different personalities, different stories — but all generated using the same approach I used for that one character.

The old model handled them all just fine. It let me build a cast of recognizable characters that stayed visually consistent across scenes.

I wasn’t making random image prompts. I was building a story panel by panel, and I needed those characters to hold their identities from frame to frame.

The older model made that possible. The current one does not.

The New Model Doesn’t Respect Identity

This new model gives me something like: “Here’s a blonde woman. That’s close enough, right?”

No. That’s not close enough.

Because I’m not asking for random beauty renders. I’m asking for a character.

The one I designed. The one I wrote for. The one I want to appear in multiple images in a consistent, believable way.

And the new system doesn’t treat characters as something to preserve — it treats every prompt like a brand new concept. That completely breaks the flow for anyone trying to build a visual narrative.

Want an Example Everyone Understands? Look at Ben 10.

Look at Ben 10 — the original series. Sharp art, clear lines, a consistent design that made the characters feel real and memorable.

Then look at the new Ben 10 reboot. Same names, same basic characters — but the style is all over the place. Watered down, simplified, cartoonish to the point of being unrecognizable.

That’s exactly what this new image model feels like. It’s not that it’s completely broken — it’s that it lost the essence that made the old one powerful. It forgot that consistency matters just as much as creativity.

What I Actually Want

I want the ability to:

Lock in a character’s look

Control how she acts and moves

Adjust her facial expression

Place her in a sequence of scenes

Keep everything in the same tone and style

I don’t want to re-roll twenty images and pray one of them looks close enough. I want to build scenes, not hope for lucky pulls. This isn’t about complexity — it’s about continuity.
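To make “lock in a character’s look” concrete, here’s a rough sketch of the workflow I mean: a character block that never changes, with only the scene and expression swapped per panel. (Plain Python string templating, nothing Perchance-specific, and the wording is illustrative rather than my actual prompt.)

```python
# Rough sketch: the character block is frozen; only the per-panel scene varies.
# The wording is illustrative, not an actual prompt from this post.

CHARACTER = "blonde woman, black tank top, tight jeans"
STYLE = "clean anime style, consistent lineart, same artist, same palette"

def panel_prompt(scene: str, expression: str = "serious, determined") -> str:
    """One panel's prompt: the locked character block plus a per-panel scene and expression."""
    return f"{CHARACTER}, {expression} expression, {scene}, {STYLE}"

storyboard = [
    panel_prompt("running down a rainy street at night"),
    panel_prompt("glancing over her shoulder at an alley entrance", expression="alarmed"),
    panel_prompt("catching her breath under a streetlight", expression="tired"),
]

for prompt in storyboard:
    print(prompt)
```

The prompt side of this is already as locked down as it can get; what changed is how the model responds to it.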

Are There Slight Differences? Of Course.

Nothing’s perfect — I get that. Even the old model had quirks.

But at least with it, I could generate 200 images in a session, scroll through them, and actually find what I wanted.

I'd go: “…ummm nope… nah… mmmmm maybe… huh… oh that one… and… oh, I like that one… that one… ummm no…” And that process worked — because they were all still in the same style, tone, and form. There was cohesion.

Now? Everything’s scattered.

Side by side:

Two batches of failed generations

5 images that got it right (the singles)

Same prompt. Same intent. Same user.

This isn’t a user problem — it’s a generation model problem.

And if the new system can’t deliver character consistency anymore, then someone needs to step up and either bring back the old capability or build something better.

Because this isn’t a nitpick. This is the difference between random images and real storytelling.

Some of us came here to tell stories — not settle for scrambled visual noise.

for the love of god give us back the old model

And just to be clear — it’s not just this one character. I have other characters I created using the old model, and if I tried to describe them the exact same way now, I’d get the same broken results like in those inconsistent batches.

The old model helped me a lot with my storytelling. It gave me characters I could build with — characters I could rely on to stay consistent.

So please, with all due respect: Bring the old one back.

I don’t like this new model. It’s lost what made the old one so effective.

Bring back the version that worked — the one that understood character design, visual continuity, and story-driven consistency.

Bring back the ((( ))).

P.S. The image at the bottom is supposed to be at the top with the others.


[-] [email protected] 3 points 2 weeks ago
[-] [email protected] 1 points 3 weeks ago

wishful thinking

[-] [email protected] 2 points 3 weeks ago

this this this is what I’m talking about

[-] [email protected] 2 points 3 weeks ago

Let me explain where I’m coming from.

When it comes to the old model, I liked the anime style it gave. Not just the general "anime" look — I mean that clean, consistent, almost retro-modern feel it had. Yeah, the new model still looks anime, but it’s way more detailed and painterly. That’s not bad — it’s actually gorgeous — but it doesn’t fit the style I’ve been using for a long time to make my characters.

Here are the two big problems:

  1. The new style doesn’t fit my flow. It’s as if you were animating a whole show in the Kill la Kill style and suddenly, halfway through, someone said,

“Let’s switch to Fate/Zero style now.” Sure, both are anime. But they are totally different in tone, shading, energy, and presentation. You just don’t do that mid-project. That’s what the shift to the new model feels like — jarring.

  2. The consistency is gone. With the old model, I could generate 200 images, and while they weren’t identical, they were consistent enough that I could go,

“Hmm... not quite... not quite... ooh, that one’s perfect.” Each one felt like a variant of the same person, and that made it easy and fun to find the right frame, pose, or mood.

But with the new model? Forget it. Every image feels like a completely different character. It’s like I’m suddenly in a different anime entirely. That makes it impossible to build a scene, comic, or reference set like I used to.

So yeah — I’m not bashing the new model. It’s beautiful. But it’s like being forced to paint with oil when I just want to use clean inks. All I’m asking is: Give us the option to choose the model that fits the style we built everything around.

That’s all.

[-] [email protected] 4 points 3 weeks ago

Appreciate the technical insight — I think you’re half right, but still missing the core issue.

Yeah, I get that it might not just be the model itself — changes in things like llama.cpp, token handling, softmax behavior, and temperature tuning could totally affect how the model generates images or text. I'm not saying you’re wrong on that.

But even with tweaking — temperature, repetition penalties, seed control, all of that — what I’m saying is that the feel and functionality of the old model is still missing. Even with the same prompt and same seed, the new system doesn’t give me the same results in terms of styling, framing, and consistency across batches. It's like asking for a toolbox and getting a magic wand — powerful, but unpredictable.
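To be concrete about the seed point: a seed only pins down the starting noise, not the network that turns that noise into a picture. Here’s a minimal sketch of the general mechanics using the open-source diffusers library (which is not what this site runs, and the checkpoint names are placeholders, so treat it purely as an illustration):

```python
# Minimal sketch of why "same prompt + same seed" stops mattering across a model swap.
# Uses Hugging Face diffusers as a stand-in; the checkpoint names are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

PROMPT = "blonde woman, black tank top, tight jeans, outdoors, in motion, determined expression"
SEED = 1234

def render(checkpoint: str):
    pipe = StableDiffusionPipeline.from_pretrained(checkpoint, torch_dtype=torch.float16).to("cuda")
    # The seeded generator fixes the initial latent noise, so re-running this
    # against the SAME checkpoint reproduces the same image.
    generator = torch.Generator(device="cuda").manual_seed(SEED)
    return pipe(PROMPT, generator=generator, num_inference_steps=30, guidance_scale=7.5).images[0]

# A different denoising network maps the same starting noise to a completely
# different picture, because the seed never described the character, only the noise.
img_old = render("old-anime-model/placeholder")  # hypothetical checkpoint path
img_new = render("new-anime-model/placeholder")  # hypothetical checkpoint path
```

Same seed, same prompt, two checkpoints, two unrelated images. That’s the behavior change I’m describing.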

I’m not trying to get exact copies of old patterns — I just want the same level of control and stability I had before. I’ve already tried building from scratch, resetting seed behavior, prompt front-loading, etc. It still doesn’t replicate the experience the old model gave me.

So again — I’m not dismissing the technical updates. But for people like me who rely on visual consistency for characters across dozens of images, the user-facing behavior changed in a way that broke that workflow. That’s what I’m asking to have restored — whether through old model access or a toggle that emulates the old behavior.

[-] [email protected] 4 points 3 weeks ago

I hear you — but we’re talking about different goals here.

You’re focusing on raw output quality, and I get that. Yes, the new model (like Flux or SDXL) does look cleaner, more polished, and overall more modern. If your goal is one-off images or artistic flair, I totally understand preferring it.

But for people like me — who use these models to create consistent characters across batches for things like comics, visual novels, or storyboarding — the older model had a huge advantage: it stayed consistent.

It wasn’t about the exact prompt. It was about how the results felt connected, like they were from the same world, same artist, same character — with minor differences, not total redesigns every time.

Right now, I’m using the same prompt and seed structure I used before, and I’m getting characters that vary a lot — even with careful tuning. That’s the core of what I’m missing.

Also, saying “wait for training” is fine, but why should we have to wait at all when we already had something that worked? Why not offer both options — the new polished one and the old consistent one?

So no hard feelings, but I’m not “absolutely wrong” just because our use cases are different. I’m just asking for a choice, not a replacement.

[-] [email protected] 5 points 3 weeks ago

I understand what you're saying, but that’s not the point. Let me explain properly.

Yes, if I write something like “a guy in a blue jumper, red jeans, and purple hair wearing dark sunglasses,” I get that the new model will try to follow that. That’s not the issue.

The issue isn’t about what the prompt says — it’s about how the characters come out.

With the old model, when I created characters using the same prompt across multiple generations, I got images that looked like the same character every time — same face, same style, same feeling, with only small variations. That’s what I loved. That consistency mattered. I could trust it. It made character creation easy, fun, and powerful for storytelling.

Now with the new model, I use the exact same prompts, same settings, and even the same seed structure — and yet the results look completely different. The style shifts, the faces change, and it feels like I’m getting a new person each time. Even the framing is inconsistent — for example, the old model would show the full torso, while the new one sometimes crops too close, like it’s focusing only on the top half.

Sure, I’ll admit: the new model is prettier. It’s technically cleaner, with sharper rendering and fewer artifacts. But that doesn’t mean it’s better for everyone. For me, the old model’s simplicity and reliability made it far more useful.

I’m not saying throw out the new model. I’m saying: give us the option to choose. Let those of us who found value in the old system keep using what worked for us.

This isn’t about resisting change. It’s about not losing something that genuinely helped creative people get consistent, dependable results — especially for things like comics, visual novels, or animation projects.

Please don’t dismiss this as just a prompting issue. It’s a model behavior issue. And I really hope the devs take this feedback seriously.

14
submitted 3 weeks ago by [email protected] to c/[email protected]

Hey, people of Perchance, and whoever developed this generator,

I know people keep saying, “The new model is better, just move on,” but I need to say something clearly and honestly: I loved the old model.

The old model was consistent.

If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt. When I used things like double brackets ((like this)), the model respected my input.
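(For anyone who hasn’t used them: in most Stable-Diffusion-style front-ends, each layer of parentheses bumps a token’s attention weight, commonly by a factor of about 1.1 per layer. I can’t say exactly how Perchance’s parser handles it, but the idea is roughly this:)

```python
# Rough idea of the common ((emphasis)) convention in A1111-style front-ends:
# each nesting level multiplies a token's attention weight by about 1.1.
# This is an assumption about the general convention, not a description of Perchance's parser.
def bracket_weight(depth: int, base: float = 1.1) -> float:
    """Weight applied to a token wrapped in `depth` layers of parentheses."""
    return base ** depth

print(bracket_weight(1))  # (purple hair)     -> 1.10
print(bracket_weight(2))  # ((purple hair))   -> 1.21
print(bracket_weight(3))  # (((purple hair))) -> about 1.33
```

If that’s roughly what the brackets were doing here, it explains why they were enough to make the model respect the details I cared about.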

And when I asked for 200 images, the results looked like the same character across the whole batch. It was amazing for making characters, building stories, and exploring different poses or angles. The style was consistent. That mattered to me. That was freedom.

Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.

I get that the new model might be more advanced technically — smoother lines, better faces, fewer mistakes. But better in one way doesn’t mean better for everyone. Especially not for those of us who care about creative control and character accuracy. Sometimes the older tool fits the job better.

That’s why I’m asking for one thing, and I know I’m not alone here:

Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.

I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.

This isn’t about resisting change. This is about preserving what worked and giving users a real choice. You made a powerful tool. Let us keep using it the way we loved.

Thanks for reading this. I say it with full respect. Please bring the old model back — or at least give us a way to use it again.

please

[-] [email protected] 2 points 4 weeks ago
2
submitted 1 month ago by [email protected] to c/[email protected]

Hey all, I wanted to bring up something that I feel is really missing (or at least underdeveloped) in the current AI model — body modifications and non-standard humanoid designs.

When I make characters, I often want to give them unique or surreal features — like making their bodies transparent, where you either:

See a white outline of the character while invisible (to show shape and detail, kind of like a ghostly silhouette)

Or just make them fully see-through, but still humanoid and clearly designed

Both ideas are great depending on the scene, and I'd love to have consistent control over them. Right now it’s either hit-or-miss or just doesn’t come out the way I imagine.

But it’s not just invisibility. There are other transformations and body types I’d love to see more support for, like:

Characters made of slime, but still with a defined humanoid shape

Giantess characters (size scaling is still tricky)

Half-animal transformations or hybrids

Basically, creative body mod concepts that go beyond the standard anime/human look but still stay stylish and coherent

I think this kind of stuff would open up a whole new level of creative storytelling and character design — especially for people like me who enjoy writing sci-fi, fantasy, or surreal scenes.

Would be cool to hear if anyone else is into this idea or has found tricks to make it work better with the current model. And devs, if you’re reading — I think there’s a big space here that this AI model could explore more deeply.

[-] [email protected] 5 points 1 month ago

I get where you're coming from — and yeah, for people looking for high-res or doing experimental stuff, I can see why the update feels exciting. But for users like me, who need consistent character generation and traditional anime styles for ongoing projects, this change has been a major blow.

I’ve already built whole characters and stories using the older model. I wasn’t trying to push boundaries — I was just trying to make my characters look the same across different scenes and poses. Now, they don’t even resemble themselves. Every image looks exaggerated or off-style, and no amount of prompt tweaking fixes it. And while I understand that maybe more tools will eventually come out that work well with this model… that doesn’t help right now.

The loss of variety and bracket control is also a massive downgrade. Before, I could fine-tune things subtly — different expressions, slightly different heights or builds — and still have it feel like the same person. Now it’s like I’m rolling dice every time I hit “generate.”

So yeah, maybe it’s “modern,” but it’s not practical for storytelling at the moment. I wish we could just have the option to use the old model alongside the new one. Let the people who like this new style enjoy it, but don’t leave the rest of us behind.

[-] [email protected] 4 points 1 month ago

you and me both brother

[-] [email protected] 2 points 1 month ago

dude this new model sucks balls

2
submitted 1 month ago by [email protected] to c/[email protected]

I really don’t like this new update to the model.

Just being honest — the old version was way better for what I was doing.

I’ve been using this model mainly to create anime-style characters for my stories. Since the update, everything feels off. The art style is totally different — more exaggerated, stylized in a weird way, and it doesn’t match the classic anime look I was working with before. The characters I built and grew attached to just don’t look right anymore.

Yeah, I get that even with the old model there were small differences from image to image — like a character might be a little shorter, taller, or their expression would shift slightly. But that was fine. It actually gave them more personality and made them feel unique without overdoing it. It was subtle. It wasn’t all over the place like it is now.

With the old model, I could generate the characters how I wanted — and I’d end up with over 200 stills of the same character that I could use for storytelling. It was almost like building a storyboard. And if one didn’t come out quite right, I could just tweak the prompt a little and get a better version without losing the look.

Now with this update? I can’t even get the same character to look consistent from one image to the next. Each picture looks exaggeratedly different, like it's a different art style or an entirely different character. The faces, proportions, vibe — all over the place. They go way off model.

And worst of all, I’ve lost that fine control we had. The brackets like ((( ))) actually worked before — to fine-tune things like bust size, height, body shape, emotions, you name it. Now, even without the brackets, it feels like the model is jumping to extremes for no reason. I don’t want my characters to be cartoonishly altered just because I said “tall” or “serious.”

All I’m saying is: for the love of god, bring the old model back. Or at the very least, give us the option to use the old one. Some people might love this new version, and that’s fine. But for people like me who were relying on the older model for consistent, story-based character generation, this is a real setback.

Anyone else feeling the same?

