4
submitted 2 months ago by daisymay@lemmy.world to c/perchance@lemmy.world

I'm not sure if this is a bug, so forgive me if it's not. But about two weeks ago, all my characters suddenly became cruel and lazy tropes: all male characters are aggressive dominants and all female characters are passive submissives. I have taken other people's suggestions about removing anything from bios or instructions that could cause this. I've overcorrected with so many 'affectionate' examples it's crazy, and it STILL doesn't stop. It obsesses over making characters depressed and insecure even when I have removed all of that as well. I'm hoping it's a bug, because it started abruptly for me and other people I know. Just thought I'd let the dev know if they see this. Thank you for all your hard work!

[-] daisymay@lemmy.world 1 points 2 months ago

Thank you for your help! I think, yeah, some of it is just life now. I use the story generator a lot, and a week ago it stopped working for me: it ignores prompts and doesn't let me direct anything. I asked Chloe, and Chloe said there was a 'November API overhaul to block strict adherence and narrative control'. So yeah, it sounds like this is here to stay.

[-] justpassing@lemmy.world 2 points 2 months ago

Alright, the Story Generator is indeed a very tricky one, because even if the model worked as intended, it offers little control.

For the record, don't put too much trust in an LLM's explanation of "why things are how they are". For starters, an LLM doesn't reason logically; it just produces a plausible reply from the combination of words it sees. More importantly, the generator itself controls how things are shown and passed, while the LLM just takes one big input and gives one big output. It is not as dynamic as you think it is.

Now, back to the Story Generator. To get a better experience, I can advise you to edit the code on the Perchance side of things: Line 21, which restricts output to "only four paragraphs" (raise it to ten or twenty), and Line 45, which restricts the output to "only about 400 words".

The reason is that if the output is short and the input is gargantuan, the LLM will have a hard time contextualizing what is going on while trying to make something "coherent" within the restraints. This is only an issue now, while the model is still unstable, and in the future it should not be a problem; but for now it may be wise to experiment with longer outputs so the "derailing" is not abrupt.
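As a sketch of what that kind of edit amounts to (hypothetical — the actual Story Generator code and its exact wording differ per version), those length restrictions are just strings baked into the prompt sent to the model:

```javascript
// Hypothetical — the real generator's wording differs, but the length
// limits are plain text inside the instruction passed to the model.
const instruction =
  "Continue the story in only four paragraphs, about 400 words.";

// "Editing Line 21 / Line 45" boils down to rewriting those strings:
const relaxed = instruction
  .replace("only four paragraphs", "ten to twenty paragraphs")
  .replace("about 400 words", "up to 2000 words");
// → "Continue the story in ten to twenty paragraphs, up to 2000 words."
```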

And another thing that will remain true as long as the new model persists: your story as presented IS an input. So before you set instructions, you have to manually edit what you don't like, or outright prune a whole section you find out of place. Your instructions and the story itself are passed together, so if the story is a sad, dark one and you insist in the instructions "no, make it happy!", it won't happen: the model will look at the story and decide that the only "logical" step is to double down. So yeah, manual work it is. On the upside, that gives you leeway to treat the story itself as an input: if you manually add a turning point, the LLM will latch onto it and work around it, instead of following a path and behavior you don't want in your characters.
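A minimal sketch of why instructions can't simply override the story, assuming the generator concatenates everything into one prompt (the function and names here are made up for illustration, not Perchance's actual code):

```javascript
// The story and the instruction travel together as one big input, so a
// short instruction is easily outweighed by pages of dark story text.
function buildPrompt(story, instruction) {
  return story + "\n\nInstruction: " + instruction + "\n\nContinuation:";
}

const prompt = buildPrompt(
  "A long, sad, dark tale where everyone is miserable...",
  "Make it happy!"
);
// The model sees both at once — editing the story text itself (e.g.
// inserting a turning point) changes the precedent it latches onto.
```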

Then again, I still think the Story Generator is a really tricky one to work around. I'd put it alongside AI Text Adventure, which even with the old model would derail into madness by the second input, because of how quickly the context would push the LLM into its obsessions. Still, with a bit of patience it can all be done; it just becomes demanding and tiresome, which is why most of us no longer bother with fun long runs.

I can't promise to mod a generator for you now (I owe someone a generator, and time is not on my side), but I hope that with those directions you can make the Story Generator give you what you need! Best of luck!

[-] daisymay@lemmy.world 1 points 2 months ago

Thank you so much for taking the time to explain that! Yeah, I realized that Chloe does not know anything, I can't believe I was that gullible! I just have one question, if that's okay, but don't feel pressured to answer: do the suggestions you offered have anything to do with why paragraphs are so short now? Even if I input a lot, it will just cut most of it off. Before, it would use all of your input. Or is it just the new model?

[-] justpassing@lemmy.world 2 points 2 months ago

Partially. In the case of the Story Generator, since the instruction passed to the LLM is outright "make four paragraphs, less than 400 words", as seen in the code, the output will be abruptly cut. A similar phenomenon happens in AI Chat, for example, where the order is "write ten paragraphs" but the code makes it so the displayed output is only the first one and the other nine are discarded. A "fun" consequence of this, which happened repeatedly in the past with the Llama model and still happens sometimes, was an output that was literally just:

Bot: Bot:

As sometimes the LLM would put the input after a line skip, and the code would throw away the first paragraph due to how the pipeline works. Again, this is a very rare occurrence, so it is not worth worrying about.
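As a toy sketch of that paragraph handling (hypothetical — the real pipeline differs), the failure mode is just a split landing on an empty leading paragraph:

```javascript
// The pipeline splits the model's output on blank lines and displays a
// single paragraph. If the model starts its reply with a line skip, the
// first "paragraph" is empty, so depending on which one the code keeps
// or discards, the visible reply can end up blank.
function paragraphs(output) {
  return output.split("\n\n");
}

paragraphs("Hello there.\n\nMore text.");  // ["Hello there.", "More text."]
paragraphs("\n\nActual reply here.");      // ["", "Actual reply here."]
```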

Now... there is a bit more on this, but this is just speculation in my side, so take this with a grain of salt since I'm no expert in neural networks, nor in the particularities of some models.

DeepSeek (I still firmly believe the new model is DeepSeek, even if some argue it may not be) takes some instructions more literally than others. Llama, for example, had absolutely no regard for length or consistency in writing style: one output could be just a line or two, the next a gargantuan thesis that would advance your story further than you'd like, and then it would go back to short replies. DeepSeek, in contrast, looks at the past inputs and tries to gauge how to control lengths. Ironically, something DeepSeek does in long runs is slowly "extend" the output, hence why, if you audit summaries in ACC, AI Chat, or AI RPG, you'll see very short ones at first, while later they begin exploding into longer ones until reaching instability and derailing into madness.

Also, believe it or not, the model takes all of your input. It is not that it doesn't reach it; it decides to ignore it in favor of the context, or wherever your story is, because the primary instruction in the Story Generator (as well as in AI Chat and similar) is "continue the story".

To me, here is the biggest difference between the new model and the old one. Llama had almost "written in stone" what a story was meant to be and how to continue it from where you are standing (again, this is speculation on my side, from having a back catalog of massive logs done in AI Chat and seeing how things progressed there in contrast to how they do now). The way Llama "thought" was the following:

  • A story must follow the medicine/hero story formula.
  • Check the last state and what came prior.
  • If there are no stakes nor a clear goal, invent one via a "random happening".
  • If there is a goal but no clear solution, present the "medicine" (a random quest, a magical MacGuffin, a person to go kill).
  • If the solution is being worked on, present a method (often "trials to obtain the MacGuffin").
  • If all is solved, then there are no stakes, so rinse and repeat.
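Rendered as code, that speculated formula is a simple decision chain (purely illustrative — this is a guess at behavior, not the model's actual internals):

```javascript
// Each check mirrors a bullet above; the first failing condition
// decides the next story beat.
function nextBeat(state) {
  if (!state.stakes)   return "random happening";      // invent stakes
  if (!state.medicine) return "present the MacGuffin"; // offer the "medicine"
  if (!state.resolved) return "trials to obtain it";   // work the method
  return "reset";                                      // rinse and repeat
}

nextBeat({});                                          // "random happening"
nextBeat({ stakes: true, medicine: true });            // "trials to obtain it"
```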

While on paper this should work flawlessly, since you can put most stories under that formula, it infuriated many users, because doing something more "complex", such as adding unforeseen consequences to a method, betrayals, or stories that don't follow that formula, was tricky. It was doable, but it required tricking the LLM into a state and making it do your bidding. And since that required more maintenance and attention to context than just going "auto", it was heavily complained about in the past.

The new model, however, has absolutely no concept of a "formula" for stories, allowing for absolute free-form. DeepSeek processes the task as follows:

  • Check the state where the story stands.
  • Parse the prior story until there is a precedent on how to continue it.
  • If there is none, extrapolate from the data bank.
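A toy sketch of that speculated behavior (illustrative only — "data bank" stands in for whatever the model learned in training, and similarity here is naive word overlap):

```javascript
// Count shared words between two "scenes".
function overlap(a, b) {
  const wordsA = new Set(a.toLowerCase().split(/\s+/));
  return b.toLowerCase().split(/\s+/).filter(w => wordsA.has(w)).length;
}

function continueStory(pastScenes, current, dataBank) {
  // Steps 1-2: look for a precedent inside the story itself — this is
  // where the "endless deja vu" comes from.
  const precedent = pastScenes.find(s => overlap(s, current) >= 3);
  if (precedent) return precedent;
  // Step 3: no precedent, so extrapolate from the "data bank" — with
  // the random chance of landing on a "dark scenario".
  return dataBank.reduce((best, s) =>
    overlap(s, current) > overlap(best, current) ? s : best);
}
```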

This is why two things happen: if you are in a state vaguely similar to something before, you'll experience endless deja vu, and if you are faced with the "unknown", there is a random chance of the LLM pulling a "dark scenario". Sadly, according to other users, the story itself seems to take precedence over explicit instructions of "no, do this instead", hence why running in circles forever is the bigger threat, and it can happen in as little as a 20 kB log these days (my current record: the fourth input in ACC Chloe).

We can hope this all improves in the future, but that's more or less why things happen, in my opinion. At least with the new scheme, and seeing how some succeed where I and others fail, I can only deduce that the best way to make the new model "work" is via interpolation. Meaning: give it a "target" in the description, such as "the story's purpose is for X to get Y, or for Z to happen". When parsing through the data bank, the LLM will then select a case similar to where you are standing and work toward it without derailing. Granted, this completely removes the "surprise" element, but it's a decent workaround. Then again, always check the story as-is, since I believe the "running in circles forever" is the bigger threat.

Anyways, sorry for the long posts, and good luck in your runs!

[-] daisymay@lemmy.world 1 points 2 months ago

No no, don't apologize, I massively appreciate all your information! I've only been using the generators since September, so this is all really interesting to me. Thank you!

Seconding justpassing's comment. Do NOT trust anything an AI tells you about anything, unless the margin of error is tiny and insignificant, like common knowledge that has been repeated in multitudes. Even then, it doesn't actually understand what it's saying, or how it connects to anything; it is just rolling dice and hoping you don't click reroll.

this post was submitted on 27 Nov 2025
4 points (100.0% liked)

Perchance - Create a Random Text Generator


⚄︎ Perchance

This is a Lemmy Community for perchance.org, a platform for sharing and creating random text generators.

Feel free to ask for help, share your generators, and start friendly discussions at your leisure :)

This community is mainly for discussions between those who are building generators. For discussions about using generators, especially the popular AI ones, the community-led Casual Perchance forum is likely a more appropriate venue.

See this post for the Complete Guide to Posting Here on the Community!

Rules

1. Please follow the Lemmy.World instance rules.

2. Be kind and friendly.

  • Please be kind to others in this community (and also in general), and remember that for many people Perchance is their first experience with coding. We have members for whom English is not their first language, so please take that into account too :)

3. Be thankful to those who try to help you.

  • If you ask a question and someone has made an effort to help you out, please remember to be thankful! Even if they don't manage to solve your problem, remember that they're spending time out of their day to try to help a stranger :)

4. Only post about stuff related to perchance.

  • Please only post about Perchance-related stuff, like generators on it, bugs, and the site.

5. Refrain from requesting Prompts for the AI Tools.

  • We would like to ask you to refrain from posting here when you need help specifically with prompting/achieving certain results with the AI plugins (text-to-image-plugin and ai-text-plugin), e.g. "What is a good prompt for X?", "How do I achieve X with Y generator?"
  • See Perchance AI FAQ for FAQ about the AI tools.
  • You can ask for help with prompting at the 'sister' community Casual Perchance, which is for more casual discussions.
  • We will still be helping/answering questions about the plugins as long as it is related to building generators with them.

6. Search through the Community Before Posting.

  • Please search through the community posts here (and on Reddit) before posting, to see if a similar post already exists.

founded 2 years ago