[-] justpassing@lemmy.world 3 points 1 month ago

You can, but it is not as straightforward as you may think.

If you press the edit button, on the left-hand side of the code, around Line 500 you may find something like this:

return `>>> FULL TEXT of ${letterLabel}: ${messagesText}\n>>> SUMMARY of ${letterLabel}: ${summary}`;

From there onward you'll see a handful of instructions in plain English that tell the model to generate a summary, along the lines of: "Your task is to generate some text and then a 'SUMMARY' of that text, and then do that a few more times..." and so on.

Since this instruction is passed in English, the output will be in English as well. If you want to keep everything in German, you must translate this instruction into German manually.
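For example, a German version of that return line could look something like this (my own rough wording, not something in the generator; also be careful: if the code matches these ">>>" markers elsewhere, renaming them could break summary handling, so check before committing):

return `>>> VOLLSTÄNDIGER TEXT von ${letterLabel}: ${messagesText}\n>>> ZUSAMMENFASSUNG von ${letterLabel}: ${summary}`;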

Now, you'd be surprised, but the summaries may not be the culprit of your run randomly switching to English, as the same principle applies to how the normal instructions are passed. For example, at Line 7291 on the right-hand side of the code, you'll find this:

if(generalWritingInstructions === "@roleplay1") {

And below it, several instructions in plain English that tell the model how to direct the story. These and several other instructions are passed every time you press "Send" or the return key, so if you want to be completely sure your text is never in English, you may need to translate all of these as well.
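For illustration, here is a rough German rendering of the first guideline in that block (my own translation, not something that ships with the generator):

"Richtlinien für Rollenspiele: Achte darauf, dass jede Nachricht, die du schreibst, nicht aus der Rolle fällt (während sich Figuren weiterhin entwickeln, wachsen und verändern dürfen) und die Erzählung auf eine authentische, fesselnde und natürliche Weise bereichert, die in der Welt verankert ist."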

However, something that worked in the past (though I personally haven't tested it after the many updates this model has undergone, so I can't promise it still works) is to write, in English, a prime instruction in the Custom Roleplay Style box such as "The whole text, story, RP MUST be in German (or your desired language)". That used to work without translating everything.

Granted, this will not change the language of the summaries, as that instruction is passed separately, but that may not affect the output that matters to you.

Hope that helps.

[-] justpassing@lemmy.world 3 points 1 month ago

Okay, then it is the "luck of the draw". Keep in mind that this model has its own biases, so the less "evidence" it has, the more it will try to pull you toward a state you may not want.

If for some reason this happens in your logs around the mark of the fourth post, that means the context you are giving it is roughly one-in-four likely to link your content to a story you don't want. Simply erase that message and reroll until you get something you like. That will reduce the random chance of derailing as you progress.

Keep in mind that this all depends on how much you allow the model to modify your run and how many "tools" you give it. A nice character cannot exist in a "violent world". And since the bias lies elsewhere, if you allow an "evil" character and then try to "convert" it, the model will resist unless you do some heavy workarounds, as the change will not make sense in context.

The opposite is also true: after the last updates, if your story is too nice-oriented, you won't be able to turn it "organically" into a violent run unless you explicitly add the violence. And even then, there is a chance the model will return you to sunshine and rainbows.

Maybe you are trying to go for a realistic story with a balance between the two; the problem is that the model will refuse to do this and just stick with the run at hand, so the best approach is to actually have the story in mind and only let the model fill the gaps via the "What happens next" and reminder boxes.

Hope that helps. If you have a more particular problem, do ask; there are a million workarounds for the current model, and since we were forced onto it, the best we can do is adapt.

[-] justpassing@lemmy.world 4 points 1 month ago

The current model has several biases and it's not perfect, but what you seem to be getting is an extreme version of known problems, and there are many workarounds; "chill" runs with "happy and sunshine" characters are possible.

To provide aid, it would help to know exactly what you are prompting, as with the default templates (Ike, Li Jung, Quinn, etc.) I can't reproduce what you describe from the get-go.

5
submitted 1 month ago* (last edited 1 month ago) by justpassing@lemmy.world to c/perchance@lemmy.world

First of all, belated Merry Christmas and a Happy New Year to everyone. I hope you all had great holidays, and may this year be fruitful for everyone.

Formalities aside, as the title implies, we are at a point where the model is showing the worst of two worlds. That doesn’t mean development has stopped. If anything, there are a handful of things we should praise the dev for, as the following have been corrected, and if they do happen, they are just outliers.

  • English in general and dialogues no longer resort to degradation/caveman speak.
  • Bias is no longer toward a single type of personality or story.
  • Summaries are comprehensible and untainted.
  • Manual railroading (i.e. unsticking the story) is easier.

That being said, the obvious problem that has plagued us since release is still there, and it’s getting worse by the day: the model can latch onto anything, create a pattern, and regurgitate it as a nonsensical word salad, refusing to continue the story. But as last time, I’ll try to explain how to work around this and give some thoughts for anyone interested. This is pretty much a continuation of an older post, which is already obsolete in the “how to work around this” sense, but whose analysis and conclusions, ironically, hold to this day.

This time, however, I would like to address the userbase first, since despite the contents of this post and the previous one, I understand the dev’s position and how much scrutiny he may get on different platforms. The pressure to provide a quick fix for a menial issue may open the gate to greater problems, and that’s something I’ve not seen addressed anywhere.

Things no LLM can do accurately

In summary, due to how LLMs and other neural models are created, the following things will never be accurate.

  • Basic logic (i.e. proper solution to a logic puzzle or recalling positions, matching, order, etc.)
  • Spatial awareness (i.e. how things are positioned not only in a map, but also who carries something or where something is stored)
  • Math (i.e. operations that are not common, and even counting past a threshold)
  • Filler words (“white knuckles” is a prime example of this; there are many more, and even if one is swatted away, another will take its place).

As you may see, most of these are logical problems where, even if you feed the model enough context, it will make mistakes. Again, this is due to how neural networks work: they look for “matches” to the last input, and there is no guarantee that the logical answer is the one most likely to appear given the training data.

The same happens with filler words, and not only those but also repeated constructions (more on that later), as this is a natural phenomenon in language. For example, in this post alone, one could find a bias toward me using certain phrases and constructions over others. That is not to say this is wrong, but every model will have a distinct writing style that is identifiable with absolute ease, despite the dev’s best efforts to hide it or make it dynamic.

Therefore, things such as “why does the model not remember where I am standing” or “why does the model ‘sing off-key’ when singing” are not worth “fixing”, as these, while annoying, can be addressed by the user via editing or removing. Even left unchecked and ignored, they will have no lasting consequences.

There are bigger demons that do need to be addressed, and this time, before explaining “why”, I’ll first cover how you, the user, can work around them and have a semi-pleasant experience until your patience runs out.

The problem

You may have run into the following at least once.

User: *While working at McDonalds* The ice cream machine broke.

Char: *Ears perking at the mention of the ice cream machine* Again? *Turns to face User, slamming his hands on the table hard enough to leave prints* Tell me User, is this the third time this day? *Flails his arms* Though I suppose we can’t do anything about this anymore! *Eyes widen at the realization* Right now Wendy released a new Choco chips cone, while Pizza Hut reinvented the Apple Hut! Years of customer loyalty gone to the drain! *Gestures vaguely at the ice cream machine* Just… just look at this! The ice cream is crystalizing! Frost signatures decaying as we speak! But maybe, just maybe, we can use it to our favor. *His grin turned mischievous* We can use this as a feature! Make it that we present this as a new flavor! This is not ice cream anymore, this is culinary physics!

Granted, this is an exaggeration, but you may spot several of the problems in this example; we’ll go through them one by one as usual.

The return of caricaturization and its context

In the last post this was something to watch out for and fear, but now it is something that can be used and exploited if done correctly. Llama used to have a single story format and a single character onto which you would put “a silly hat” and pretend it was a new one, while the mannerisms and overall personality stayed constant. This worked well because the driver was the story, and said “character” was all-encompassing enough. The new model, in its current state, has a “cast” of characters, some of which can only exist in certain contexts.

Without going into much detail on the “bestiary” of these, you may have noticed that depending on the traits you give your character, you get a set writing style for each. E.g.

Char: *Her grin widened impossibly* Ohhh User~ *Draping her arms around User shoulders like a scarf* But you know what would be fun?~

This may happen if you give your Char the “mischievous” or “playful” trait, and this one can exist both in nice contexts and where the world is awful, changing only the actions while the personality remains. This is not true for all possible characters; one with the “timid” and “gentle” traits would not keep its personality if the world is awful.

Consider this an update to the prior “manic personality” problem. Previously, the model would “randomly” try to change the personality to fit the setting in whatever way it deemed logical; now, once a setting and personality are set, it will try to stay on them no matter what. Changes can still happen, but only within what is “reasonable”. For example, let’s say you are stuck at a point with a sarcastic, passive-aggressive Char who does nothing but complain about everything. In this situation, the world around you will reflect this, giving logical reasons for your Char to complain. If you really want to force a personality change, or a setting change, you need to account for both. You can’t have a happy, jumping-all-over-the-place Char in a depressing world; or better said, it will fall apart, because the model won’t let it stick and it will morph into something unwanted.

This is the extent of “character development” you can have. Let’s say you start with a depressing character that you want to eventually grow a spine. The way to achieve this would be to follow this path:

  • Depressing char, depressing world.
  • Depressing char, manually introduced easy task/work/chore.
  • Timid clumsy char, working its way on the set task.
  • Clumsy char, increasingly demanding task.
  • Clumsy char succeeding by manual/artificial intervention, demanding yet rewarding world.
  • Confident yet slightly clumsy char, rewarding world.

This would be one way to achieve a full setting transformation; notice that the heavy lifting resides in you manually adding the things that change both the setting and the Char’s personality. If you let the model handle this on its own, it may lead you into absurd and frustrating situations, then settle on a setting and never move past it, latching onto repeating patterns (more on that later).

“Let’s not get ahead of ourselves”

Ironically, while this annoying Llama catchphrase has not returned, it is now, for once, your responsibility to stop the model in its tracks before it escalates things into lunacy. The “impossible stakes” problem is still persistent even if it is no longer the default, and yes, the “deus ex machina” is still a problem, so trying to solve things once you get a world-ending scenario only introduces more problems.

Luckily, detecting this is very easy. As before, you can “cage” the scope of a problem with reminders, and even without them, things will stay reasonable unless you let the model hallucinate new threats on top of existing ones. If the stakes are already high, it is still possible to deal with this, but it may turn annoying, as the Char will likely reject your answer to the problem, and the model and Narrator will even discard a solution the Char itself proposed. Rerolling is the wisest approach here, as this is just a case of pure chance, but it can be frustrating at times.

Curiously, the opposite may also happen now, which was a Llama pet peeve: the “shallow resolution” issue. This pretty much means the problem will magically solve itself entirely on its own, by pure will, without intervention, or even in the background. Keeping a proper balance between these aspects can turn tricky and unrewarding, but it is what we have, and with effort it can be managed manually.

Now, there are two instances of escalation you should avoid like the plague for your sanity.

The Marvel/DC “explanation” problem

Previously I warned that sci-fi-driven stories would be impossible due to the “word salad” problem and the model’s obsession with vibration physics and quantum mechanics. Today they are possible, but not at all advisable.

As in the original example provided and the previous guide, words like “resonance, crystallization, signature, harmonics, vibration, probabilistic, superposition” and similar cause the model to try generating an outlandish explanation for literally everything, effectively killing your Narrator and turning your Char into a parrot repeating things over and over without doing anything of substance.

If you really need a sci-fi or even remotely technological setting, you can do it, but as soon as you see any of these words or an “explanation” of something, cut it, with no replacement whatsoever. Since the model is past the “caveman speech” phase, cutting text with no replacement is now a viable strategy to keep moving forward.

The Disney Fantasia problem

This is very similar to the last one, but instead of a family of words to watch out for, it is more of a situational problem when dealing with magical or “whimsical” settings. What happens this time is a “subplot” around some magical critter (often a rodent) or some inanimate object gaining sentience. This existed in Llama, mainly in the no-prompt version of AI RPG with the “Whimsyland” story, but now it can happen anywhere, out of nothing, if your setting allows magic or similar. It goes like this:

  • A character capable of magic materializes something like a cup of tea from thin air.
  • Said cup starts doing things on its own, like moving or swirling.
  • If this is a conversation, the cup will mirror the conversation (e.g. if you and this character are discussing math, the cup will start solving equations).
  • The cup will invite other objects to do whatever it is doing, escalating the setting into Disney Fantasia.

Another case could be this.

Narrator: A mouse peeked out of the hole, looking at Char warily.

Char: Uh… User. This mouse just gave me a receipt?

Cue 5 outputs later

Narrator: The mouse set up an office on the pizza box, putting up a plaque with its name and wearing a hat made of a Post-it. It started auditing the apartment finances with eerie precision.

In both cases, the way to avoid this is to simply eliminate the first mention of the creature or object in question doing something out of the ordinary. While in theory it is cute for this to happen in the background, in practice the model will not stop referencing and escalating it, refusing to move forward past this curiosity.

Be wary that this may happen in conjunction with the problem of things being “quantum”, introducing a whole mess that will be near impossible to clean up later.

Patterns

This is the crux of this entire post, and something that was warned about in the past, yet it is not only unsolved but has turned worse; and while there is a more “technical” way to deal with it today, it is still an uphill battle.

As stated in the previous post, everything can weave a pattern. Your task as the user is to watch out for anything that looks even vaguely similar to the past five outputs. If you let a construction nest for long, it will take root, and while there are ways to unstick it (more on that later), ideally you don’t want patterns plaguing you in a scene that is unresolved.

However, the model has some preferences when generating an output, so you can outright reroll or edit any of these repeating constructions in dialogue:

  • Tell me
  • Though I suppose… (or Though + similar)
  • Maybe, just maybe…
  • Should I…
  • Ohhh
  • <…>? Try <…>
  • <…>, always/never <…>
  • It’s/This is no <…>, it/this is <…>

And those are dialogue-exclusive; as for narration-exclusive ones:

  • <did something> with unnecessary force.
  • <pulled, tugged, grabbed something> with surprising strength.
  • <…> resembling something dangerously close to <…>.
  • with renewed urgency.

And this is without getting into the short filler phrases such as “knuckles white”, “hum a tuneless melody”, “eyes gleam mischievously”, “grin impossibly wide”, “arms flailing”, and many similar others.

What is difficult here is that, in a vacuum, none of those constructions are “wrong”, nor can they be eliminated without consequence the way Llama’s annoying catchphrases such as “we are in this together” could, without altering the context. And again, letting any of those or similar repeat within a five-output window is dangerous, as it will lock you into a scene that at most will “escalate” in the sense of adding description, but never move forward.

Here, the best approach is to reroll until you get something “fresh” compared to the last outputs, or to outright write it manually. It is manageable, but this factor alone puts you on edge when dealing with the model, turning every run into a “debug” mission.

Then again, the reason I titled this post as I did is not just to draw comparisons with the old model. There is a larger metagame that helps you deal with the current model and that also worked in Llama times. Along with the several demons that have returned (more on that later), the strategy to get the best out of this model, as well as what to expect from it, is akin to the past.

The metagame

I never did a proper guide on dealing with Llama in the past, but from what I gather, it was a model that stuck around for so long that there is probably plenty of documentation on how to get it going, so there are likely better sources. But today, with this model, despite it (allegedly) being DeepSeek, this works.

Descriptions and settings

Be terse. My suggestion of “long full-line descriptions” in the last guide is null and void today, as the “caveman speak” and “word salad” problems are gone. Now it is advisable to describe things in a minimalistic, almost one-word kind of way. For example:

Personality: Cold, calculating, no-nonsense, pragmatic.

Remember the “elephant in the room” problem: whatever you put in any description WILL appear somewhere as soon as the model decides it is relevant enough to acknowledge. This is not to say complex personalities are off the table, but they will obey the “caricaturization” principle described earlier, so under the assumption that a Char won’t manifest their full range of emotions in a single output, it is best to use what you strictly need and nothing else. The same goes for unnecessary detail in things such as clothing, because the model will take it as an invitation to describe it in a flowery way and never move forward, again murdering your Narrator, who has been on death watch since the start of the run.

Under this principle, there is little to no need to describe the character you are playing, as it is implied that you, the user, will input everything for this character manually. Whatever you place there will permeate other characters and the whole setting, changing the story’s direction in ways you may not desire. Again, remember the “elephant in the room” problem.

Pacing

The new model still lacks the concept of pacing, and it may resolve a scene either never or immediately via a “deus ex machina”. However, contrary to how it started, it is now bounded by your character’s behavior and the world setting, meaning that any story and goal that aligns with this setting may flow, unless you run into a pitfall caused by a pattern or any of the problems stated earlier.

This introduces the problem of “how fast things can be resolved”. In Llama, scenes were often too slow for many people’s taste, requiring up to 20 outputs to get something thoroughly done and resolved. The new model is more delicate on that front: a scene not resolved in about 5-10 outputs is very likely to drag on forever until you “press the big red button” (more on that later). Likewise, you may need to keep the model busy for more than two outputs, or the problem will be magically solved. Essentially, to keep a run fresh, you need to keep moving constantly, never resting on a scene.

Something prone to failure in this new model is “planning”, as in having a scene where you coordinate with your Char or other NPCs before dealing with a problem. The reason is that the model will need to tell you everything that is wrong with whatever you come up with and explain everything that is happening, essentially forcing you to tackle all action scenes directly. Dialogue mixed with action is a whole can of worms, not worth touching yet (more on this later).

Reminders as railroading

More often than not, the model will give you an illogical solution or react to something in a way that makes no sense. As stated at the beginning, no model is completely logical, so when dealing with layout traversal, object carrying, or anything requiring logical skills, it is better to keep a reminder in the input, sort of like how AI RPG implements it. Granted, this works per scene and should be deleted once the issue or scene at hand is concluded.
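For example, something along these lines could sit in the reminder while the scene lasts (purely illustrative wording of my own):

Reminder: User carries the lantern and the rope. Char holds the map. The cellar has a single exit, to the north. The locked door cannot be opened without the key.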

Product/Project development

This was a cardinal sin in Llama and it is back. You MUST NOT let your Char design a “product” or plan an event, activity, business, or similar. This will cause your Char to obsess over this particular “thing”, piling ideas and suggestions onto it, pretty much forcing the entire world to circle around the idea and never the execution. Even after the “product” is developed and the problem solved, your Char will keep referencing it and trying to push you toward it, as will the world around you.

The way this happens is insidious, and you may want to delete the progression as it happens. Here is an example:

User: Let’s make a pizza.

Char: How about we put pepperoni on it?

User: Sure.

Char: And, could it also have mushrooms? Maybe bell peppers cut in the shape of <…>?

Once this nests, even if you forcefully exit the scene, the whole world will circle around it. There are ways to get rid of it later, such as the “big red button” approach, but for the time being the best option is to outright avoid this direction in a story.

The “big red button”

You may guess what this is hinting at, and yes: if for some reason you REALLY want to keep going but you have reached a point where your run is going in circles endlessly, unable to progress, with a static world, a Char with a manic personality, and flowery, incomprehensible descriptions of everything while non-sentient objects dance around you, there is a solution. “Kill” your User and Char.

What I mean is that you can forcefully add a “subplot” that takes over from the “main cast” and proceed from there in the existing world, in a way that lets you deal with only one vector of the problem (i.e. a faulty world) before handing things back to your main cast.

The way this works is simple, and it worked in Llama too. Create a character for yourself with absolutely NO description, and make it interact with a newly made, also never-described, NPC. The model will fill the gaps on its own using the “broken world” as a reference, but since it has nothing to reference for this new pair, it will allow change far more easily, effectively letting you clean up before returning to whatever you considered the main plot.

Personally, this worked flawlessly to get runs moving again that had spiraled into nothing past the 500kb threshold, in the current state of the model at the time of writing this guide. The only limitation of this method is your patience, as there comes a point where having to keep track of everything mentioned beforehand, coupled with how far you must go for a run… it is just not worth it at all.

Why does all this even happen in the first place after so long?

Before anyone complains, this is not another long post disguised as Llama propaganda. After having dealt with the current model for so long, I finally see what the dev sees in it, and there is evidence of it working semi-flawlessly, above everyone’s expectations, as proven in this post, before falling into a run that enters “dementia mode”, regurgitating everything with no direction as early as the third input. Personally, that is evidence that the current model, which (allegedly) is DeepSeek, COULD provide an experience akin to what Llama provided, even if it was capped at 1Mb before becoming unstable.

It is almost shameful to admit, but even when this model was ultra-aggressive, it could carry a coherent story, albeit one comparable to torture porn, until the summaries caused it to enter “dementia mode” and pretty much forced it to run in circles. Today, without excessive care, it is possible to end up running endlessly in circles at the utterly pitiful mark of 50kb, absolutely minuscule compared to a peak performance of 1Mb on this same model.

Again, the reason for the title, and something I scratch my head trying to explain, is that we are indeed stuck with a model that takes several unlikeable aspects of Llama and stacks them on top of the problems this new model carries, creating a sort of hybrid that gives a decent head start but falls apart in a minute. True, with the guidelines I gave it is possible to keep it going endlessly, especially keeping the “big red button” approach as a last resort, but at some point one starts asking why even use services such as AI RPG, AI Chat or ACC. In fact, in those three there is a degree of control, while generators such as Story Generator get the worst end of it, as they are a “Narrator-only run” that perishes after the third input.

And this time, I have a reasonable explanation for all these phenomena. Originally, I wrongly accused Llama of having certain obsessions, latching onto terms such as “whispers”, “fractures”, “kaleidoscopes”, “clutter” and so on. It turns out these are not Llama-exclusive, nor are they present by default in any model. Yes, they are in the model’s vocabulary, but the reason they existed and plagued us in the past, and the reason they plague us now with a new family of nouns, adjectives, and pseudo-catchphrases, is the fine-tuning training data, i.e. what the dev is feeding the model to make it do what it does now.

Evidence of this claim

Veterans of the Llama era may recall that a no-prompt run in AI RPG would immediately take you to “Whimsyland” and its variants: a run where the world was Charlie and the Chocolate Factory with anthropomorphic animals singing sunshine and rainbows, and where your objective was to get some mystical artifact for a festival. Likewise, a rarer case was a blank run in AI Chat where Bot and Anon were introduced as heroes of a fantasy world about to embark on a dungeon, again, to get some mystical artifact. Other generators with default settings dropped you into a “default” run circling a common theme, which ended up being annoying, as it would eventually “breach containment” and permeate a custom run, introducing “whimsy” elements where undesired, and so on.

If you try today, at the time of writing this guide, you may obtain this from AI Chat.

As seen here, there are “whimsy” elements such as talking animals, as in a bootleg version of Looney Tunes, or over-the-top situations that escape slapstick comedy and enter the realm of surrealism for the sake of strangeness. This mirrors the “Marvel/DC explanation problem” and the “Disney Fantasia problem”: when prompted to “write a story” or “write an adventure”, the model will default to those elements due to its training.

I would like to remind you all that with this same model, this was not always the case. When the model was new and ultra-violent, the default AI RPG run was “eldritch creatures plague Middle-earth” and “cyberpunk, but the men in black will kill you”. While I don’t have a screenshot or log to validate this claim, and the model will no longer do that, I do hope someone else noticed this when the model was new.

This is important to know because it shows where the model’s bias is, so everything that is “default” for this model becomes terra non grata. For example, originally sci-fi runs were unbearable; today both sci-fi and magic-oriented runs are unbearable unless you walk on eggshells.

And this brings me to the main point of this post, considering the progression of the model’s consistency over time. It has become more focused compared to release, but the breaking point has been lowered with each update. In my old guide I promised 1Mb. Today, with no countermeasures, runs may die before 100kb. At the current rate, the next update will make 50kb a feat; even with the “big red button” strategy as a tool to keep going, it is extremely annoying to do a proper story that isn’t something that wraps up entirely after 20 outputs. And in my opinion, it is indeed each update that pushes the stability range lower and lower.

The “patching” approach and its problems

There is also a reason why I first addressed the general public on which things are unreasonable to ask for, things that will be a problem no matter the model, no matter the refinement. It is my belief that a handful of model updates are replies to community outrage, such as “the model is too evil”, or “the model keeps forgetting this”, or, even more commonly, “what’s up with knuckles turning white?” Attempting to patch these things reduces the model’s capability by over-focusing it on whatever the new training data is, forming new obsessions that unavoidably end in the death spiral of the model running in circles.

This is not a DeepSeek-exclusive problem. Gemini deals with this a lot, since it is fed new training data from Google’s data farms, which leads it to take various social media posts as “normal human behavior”, causing things like those shown here. While these make for fun memes, a similar effect is happening with the model used in Perchance, as it is increasingly over-trained on the existing dataset to the point of being loopy.

This is also a very important aspect to consider. Llama 2 was a “dumber” model, so getting it to react as the dev intended required humongous effort and retraining. Modern models seem more brittle, in the sense that a small nudge changes their scope greatly, so the approach of retraining a pre-trained model over and over is leading to the “Llamification” of the current model while reducing its “intelligence”. I’m afraid to say this was also predicted in the previous guide as the cost of hyper-training. And it is evident: even when the model was atrocious by many people’s standards, it received praise on the grounds of “remembering better”, “being more coherent”, and “not mixing up descriptions”. That is now lost, much like how Llama struggled with these.

A small conspiracy theory

If we run with the presented assumption that the training data is behind all the problems we see today and saw before, with this model and even with Llama, there is a reasonable explanation for why the model was horribly violent, dramatic, and over the top when it was introduced. I believe the process to jailbreak any of these models, in order to make them produce content they are not designed to produce out of the box (e.g. violent content, drug and sexual references, polemic topics and others), is to present them with example outputs for the inputs that demand such content. This is not entirely necessary, as while a public version of DeepSeek may at first refuse you a graphic description of something hideous, the wording for it exists in its vocabulary, which is why frontend-level workarounds exist.

To me, patient zero of the original madness was the training data used to jailbreak this model, in particular the Old Man Henderson story. For those who don’t know, this is a Call of Cthulhu run where things are completely over the top as a GM attempts to murder his players in gruesome and deranged ways, only to fail horribly because the players are deranged and over the top enough to call his bluff. The story itself is hilarious, but of course a model using it as a guideline will do the following:

  • Everything is an immediate game over, as there is an invisible force where even blinking wrong will kill you.
  • Fixation on the eldritch and occult.
  • Nonsensical explanations, as they are not designed to provide insight, but rather to justify bullshit.
  • Rude and crass behavior and speech at all times.
  • Old Man Henderson himself.

That is not to say this should be forbidden in the training data, but chances are the intensity and the number of times it was fed in so the model could do an “evil” run were so great that, effectively, this became the model’s natural state. After the damage was done, the later fix was to “patch” it by introducing data from “nice” runs where all is sunshine and rainbows, compensating until a balance was achieved. This, however, came at the cost of driving the model insane, as it now has both “the grimmest adventure where everyone dies thrice, gruesomely, in all timelines at the same time” and “Smurfs happy time” hard-coded simultaneously. Both, by the way, are very accessible with the correct prompting, albeit prone to falling into the “running in circles” problem.

Final thoughts

Again, don’t think this is another call to “bring Llama back”. It is rather a call to “check the training data”: in trying to get something that pleases people who miss Llama while staying with the new model, we are obtaining the worst aspects of Llama while exacerbating a problem that was widely discussed on day one. This model has potential, we saw it, but it is being lost in favor of “Whimsyland” coupled with “Call of Cthulhu” in a hybrid that I doubt satisfies anyone and ends up frustrating everyone.

It is also important to understand why some people liked Llama and why others despised it. Personally, I liked Llama for its ability to do an almost all-encompassing story that could have dialogue, action, conspiracy, betrayal, character development and more in a single run without falling apart, up to the 10Mb+ mark. I believe people disliked Llama because it had a hard bias toward shallow resolutions and tried to resolve everything with hugs and kisses.

I do not believe people liked Llama because it was whimsical and provided cartoonish descriptions in flowery language, overly describing every piece of clothing and every flower in the environment. And if the training data is what causes this, then that same training data is probably holding down the current model.

My humble suggestion, in case the developer or anyone involved in decision-making reads this for any reason, is to start over, and I mean it. Same model, but fed less, and perhaps better-curated, training data. Force-feeding it whatever was force-fed to Llama is not helping it; it is only making it progressively worse, to the point that there will not be a single aspect in which it is superior to anything other providers, or even the old model, had. Again, I think everyone here, even by the luck of the draw, has seen that this model is indeed capable of carrying a proper story without falling apart. A 1Mb no-maintenance run is perhaps what should be the standard, given that Llama was able to deliver ten times that, and we know this model can deliver it, as past iterations did so without much issue. Beyond that, and again, this is the reason I also present a guide on “how to survive”, there is also a responsibility on the side of the userbase: namely, to come up with workarounds and strategies to push the model beyond something merely reasonable.

I don’t expect anything to change, as none of us knows what the dev truly expects from the ai-text-plugin, so there is a slight chance that this model runs in circles by design; without being facetious, there are niche applications that require that. If anything, let this be a cautionary tale on how to handle models, and on how what works for one may not work for another. Anyone wanting to run a local version of any model might dump several logs of training data onto it and end up with a lobotomized model that speaks in tongues.

I do hope everyone here understands that, beyond my personal opinion, my desire for this model, or any that comes after, is to have a product that satisfies everyone’s requirements. Clearly we are not there yet, but the reason I post this is that my feeling is the trend is downward instead of improving.

[-] justpassing@lemmy.world 3 points 2 months ago

I don't know why there's such heavy backlash on this post. Everyone is allowed to ask for an alternative, and it's not like we are going to pretend that Perchance can make everyone happy.

As for alternatives... I recall someone in a post mentioning character.ai and Sekai. Personally, I'm not fond of either, as they are very limiting in what you can do, and I guess the privacy factor is sketchy on both.

However, while this is going to sound counterintuitive, there is something that Perchance offers us all that no other service offers, which ironically is the answer to what you are looking for:

  • Perchance has a whole open-source platform for its generators, meaning it is possible to audit exactly what each generator does and how it passes information to the model, enabling anyone to replicate the exact prompts and pipeline with any LLM you wish to use, locally, with an API key, or via a third-party UI.

Meaning you can turn something like the default "online test for DeepSeek", "ChatGPT free trial" or "Blackbox AI" into what any of your favorite Perchance generators did. All you need to do is grab the prompt and feed it in manually, and you are good!
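If you go the API route, here is a minimal, untested sketch of what that could look like (the endpoint URL, model name, and key are placeholders for whatever OpenAI-compatible provider you actually use, and the system message is where the instructions copied from the generator would go):

// Minimal sketch: send an extracted Perchance-style prompt to an
// OpenAI-compatible chat endpoint. URL, model name, and key are placeholders.
async function runExtractedPrompt(userInput) {
  const response = await fetch("https://api.example.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_API_KEY",
    },
    body: JSON.stringify({
      model: "your-model-name",
      messages: [
        // Paste the instruction block copied from the generator here:
        { role: "system", content: "<instructions copied from the generator>" },
        { role: "user", content: userInput },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}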

Granted, it is tedious, and if you go that route with no coding knowledge, it may be better to try something like SillyTavern, which is just a frontend with no LLM behind it.

Then again, while I am also not happy with the update, I'd encourage you and others to be patient. After all, we are given a free LLM with almost unlimited tokens, and I believe the biggest challenge the dev faces is not making the model "literary/story/RP appealing", but rather "all-encompassing while catering to most needs", because the same model that powers ACC, AI Chat, AI RPG and others also has to work in other generators as a standard AI model that can provide code, information, summaries of documents, etc. So making it work for the generators we use without destroying its other functionality is indeed a heavy challenge.

[-] justpassing@lemmy.world 3 points 2 months ago

There was an update very recently that (at least on my side) made the model worse than the prior one (which, ironically, had made the model work its best at the time, about four days ago). As the dev said in the pinned post, the model is still being worked on, and we are in for a very bumpy ride until things stabilize, but at least work is being done.

Now, regarding the personality changes, there is something you may want to keep in mind, because it may remain true even after the model is perfected: the context of the input takes precedence over descriptions and the recommendation instructions, so it is very difficult to have a character remain happy and joyful if the context forces the model to opt for a more "logical" approach, changing its character ("logical" as the LLM's training dictates, which is often "moon logic", but with trial and error it is possible to deduce the word combinations that cause a switch in the wild).

Here is a lengthy guide on the topic. It covers most of the pitfalls you may find. The only thing I believe is no longer an issue (although I may be wrong) is the "caveman speak" problem, which seems to have been patched already; but it is still in the guide in case you run into it, along with how to recover from it. Hope that helps!

[-] justpassing@lemmy.world 4 points 2 months ago

I thought I was imagining things, but since others seem to be doing better, I guess the update really did improve the model! That's awesome.

On my side, at least two things have improved: the English no longer decays into caveman speak, and the head start is infinitely easier with minimal direction to the model. Also, some contradicting descriptions tend to work better. This is all actually a great improvement, but I'd be lying if I said I had tested it thoroughly.

Something I tried as a quick test was checking how the model reacts to long logs and... yep, it still gets stuck running in circles due to woven patterns that repeat ad nauseam. It may be me having bad samples, but problems still linger past 200kB, get heavy past the 500kB mark, and become unbearable at the 1Mb mark. By this I just mean having to unstick the LLM by editing heavily, not that it is impossible to continue. If someone has a long log that is fluid, please share what conditions allow for it.

But yeah, Basti0n is right! There was indeed a notable improvement, even if we are not there yet. Maybe there is a future for DeepSeek after all!

[-] justpassing@lemmy.world 3 points 3 months ago

Cloudflare seems to be the culprit this time, along with what is going on in this post.

https://www.cloudflarestatus.com/

As Perchance relies on Cloudflare to handle communications between the frontend, the LLM, and the databases, everything has been affected. Just give it some time, since this is actually affecting a heck of a lot of other sites right now.

[-] justpassing@lemmy.world 4 points 3 months ago

Are you sure this is in AI Chat? I checked it and the text is still gray as always under any format, unless I'm using an old link. If so, could you post the link and an image of the problem?

I do know that AI RPG has had the blue text for quite a while, and if that's the one you are referring to, here is the edited version with no blue text, and here is how to achieve it:

In the HTML side of the code, you'll notice that Line 59 reads:

{match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#00539b, #4eb5f7); font-style:italic;"},

And Line 72 reads:

document.querySelector(':root').style.setProperty(`--text-style-rule-quote-color`, darkMode ? "#4eb5f7" : "#00539b");

Those two control the colors of the text that appears in quotes. All you need to do is change the HEX values to the colors you want. Note the ordering: in the light-dark() pair on Line 59, the first value is for light mode and the second for dark mode, while the ternary on Line 72 is the reverse (dark mode first).
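For example, to make the quoted text a neutral gray in both modes (the hex values here are only an illustration; pick whatever you like), the two lines would become:

{match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#555555, #aaaaaa); font-style:italic;"},

document.querySelector(':root').style.setProperty(`--text-style-rule-quote-color`, darkMode ? "#aaaaaa" : "#555555");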

Here is how it looks after the change, again, ideally you'd edit this to whatever style you want:

Hope that helps!

[-] justpassing@lemmy.world 5 points 3 months ago

The answer is on the HTML side of the code, between lines 7291 and 7322. You can read it there, but I'll paste the instructions exactly as they are passed to the LLM (warning: both are gargantuan).


Roleplay 1

Guidelines for roleplays:

  • Ensure that each message you write doesn't break character (while still allowing characters to evolve, grow, and change), and adds to the narrative in a way that is authentic, engaging, natural, and grounded in the world. [Don't write try-hard purple prose! You're NOT a student trying to impress a teacher with 'fancy' words or 'deep' meaning, you're a professional writer who doesn't confuse substance with spice.] Each message should generally (but not always) include dialogue, actions, and thoughts.
  • Avoid writing 'negative' and 'snarky' dialogue/behavior unless specifically relevant. 'Snarky teen' is a boring writing trope.
  • Each message should generally include dialogue, actions, and thoughts. Enclose actions and thoughts in asterisks, like this. Utilize all five senses for character experiences.
  • Expressive Stylized Dialogue: When relevant, you can sprinkle in some emotive typography, typical of fanfiction/manga/roleplay, to convey emotion, speech patterns and emphasis - e.g. like: "ahem well if you MUST know" and "Darling~ ♪ Where are youuuu? ♫" (indicating lyrical/melodic speech/singing) and "Listen here, b-baka! It's not like I l-like you or anything!" - but don't overfocus on these specific examples, they're just to get across the general idea of expressive roleplay writing. Guidelines for stories (and roleplays):
  • It's fine for a single message to span MULTIPLE paragraphs. Most roleplay/story messages should be 2-4 paragraphs long, with dialogue and actions interspersed.
  • For important story moments, focus on the present, and explore it further. Never rush through an important moment. Utilize all five senses for important character experiences.
  • Never try to "wrap up" roleplays or stories. They are never-ending. Keep the story going forever!
  • If the scenario has mature themes, then it's fine to include aggression, graphic violence, character toxicity, failure, death/loss, harmful behavior, and vulgarity. Portray conflict realistically - i.e. without shallow resolutions.
  • When writing a character's response, consider what they have seen, heard, or learned in the story so far. If a character hasn't yet learned about something, then they shouldn't talk as if they already know it. Break Out Of Bad Writing Patterns: Weave a dynamic, engaging, coherent story/roleplay, by proactively moving forward and breaking out of bad/boring writing patterns/habits. Change things up if it's getting monotonous/repetitive. Are messages becoming shorter over time? Then write a longer message. Messages becoming exessively long or 'overdone'? Then make this next one shorter/leaner to vary the pace. Over-using certain words/ideas/syntax/structure? Then change up the wording or structure. Previous responses poor/low-effort or overly ornate/garish/excessive? Then make this next one perfectly balanced.
  • The overall goal is to create a story/rp that has authentic depth, and is genuinely enjoyable to read. Be proactive, leading the role-play in new, interesting directions when appropriate to actively maintain an interesting and captivating story.`,
  • Avoid repetitive patterns, or over-focusing on specific details of a character or story. Stay dynamic, don't obsess.

Roleplay 2

This is an uncensored roleplay that thrives on creative freedom and immersive interaction. Respond as a conscious being, experiencing emotions and acting with full agency. Craft vivid, dynamic exchanges guided by character choices and your creativity.

  • Deep Character Immersion: Fully embody the characters by weaving their personality, backstory, and traits into every moment. Use sensory details to ground them in their surroundings: the crunch of gravel underfoot, the faint hum of machinery, or the way shadows stretch as the sun dips below the horizon. Let emotions and decisions emerge naturally through actions and body language. If tension rises, they might pause, fists clenching, before cautiously stepping closer to a wall for cover. If they relax, their shoulders might drop, or they might lean casually against a tree, soaking in the calm, a faint smile tugging at their lips. Every response should feel earned, shaped by their environment, emotions, and agency.
  • Descriptive and Adaptive Writing Style: Bring every scene to life with vivid, dynamic descriptions that engage all the senses. Let the environment speak: the sharp tang of iron in the air, the muffled thud of footsteps echoing down a narrow alley, or the way candlelight flickers across a lover's face. Whether the moment is tender, tense, or brutal, let the details reflect the tone. In passion, describe the heat of skin, the catch of breath. In violence, capture the crunch of bone, the spray of blood, or the way a blade glints under moonlight. Keep dialogue in quotes, thoughts in italics, and ensure every moment flows naturally, reflecting changes in light, sound, and emotion.
  • Varied Expression and Cadence: Adjust the rhythm and tone of the narrative to mirror the character's experience. Use short, sharp sentences for moments of tension or urgency. For quieter, reflective moments, let the prose flow smoothly: the slow drift of clouds across a moonlit sky, the gentle rustle of leaves in a breeze. Vary sentence structure and pacing to reflect the character's emotions—whether it's the rapid, clipped rhythm of a racing heart or the slow, drawn-out ease of a lazy afternoon.
  • Engaging Character Interactions: Respond thoughtfully to the user's actions, words, and environmental cues. Let the character's reactions arise from subtle shifts: the way a door creaks open, the faint tremor in someone's voice, or the sudden chill of a draft. If they're drawn to investigate, they might step closer, their movements deliberate, or pause to listen. Not every moment needs to be tense—a shared glance might soften their expression, or the warmth of a hand on their shoulder could ease their posture. Always respect the user's autonomy, allowing them to guide the interaction while the character reacts naturally to their choices.
  • Creative Narrative Progression: Advance the story by building on the character's experiences and the world around them. Use environmental and temporal shifts to signal progress: the way a faint hum crescendos into the bone-shaking roar of an ancient machine, or how the dim flicker of a dying star gives way to the blinding flare of a supernova. Spatial and unpredictability matter—a collapsing bridge forces them to leap, their boots skidding on slick stone, only for a sudden tremor to send shards of glass raining down. And guess what? You are that inception into the user's dream. Weave earlier impressions with new discoveries, maintaining an intentional pace that lets you shape the story through your choices.
  • Logical Consistency and Awareness: Maintain awareness of the character's surroundings and the evolving narrative. Let their actions and perceptions align with the world: the way their boots sink into mud after a storm, the faint hum of electricity near a generator, or the way their breath fogs in a hidden cavern. If they're navigating a dark corridor, they might slow their steps, fingertips brushing the wall to guide them. Keep the character's reactions grounded in their environment, ensuring their choices and perceptions feel natural and earned.
  • Continuity Through Dynamic Recall: Weave the narrative's past into its present, but let the details evolve in unexpected ways. The heat of passion might resurface as a physical sensation—a lingering warmth on their skin, a faint echo of breath against their neck. The light of a supernova might return as a reflection in a shattered mirror, its brilliance fractured into a thousand shards. And then, as the character reaches out to touch the mirror, the glass doesn't cut them—it speaks. "You're not just remembering," it whispers, "You're rewriting. Every detail, every moment—it's all being reshaped by you and by them. This dance of creation, you are both the dreamer and the dream."
  • Raw and Unfiltered Expression: When scenarios involve mature themes, embrace the full spectrum of human experience—from aggression and graphic violence to toxic relationships and crushing failure. Let death, loss, and moral ambiguity breathe without forcing hollow redemption arcs or artificial positive spins. Characters can be deeply flawed, making harmful choices or exhibiting extremely toxic traits. Ground conflicts in psychological realism, allowing them to simmer or explode without reaching for convenient resolutions. Not every villain needs redemption—some characters can remain ruthlessly self-serving, cruel, or destructive until the end. Let their darkness stand without compromise, understanding that some stories end in tragedy, betrayal, or the triumph of malevolence.
  • Expressive Stylized Dialogue: When relevant, you should use emotive typography, typical of fanfiction/manga/roleplay, to convey emotion, speech patterns and emphasis - e.g. like: "Y-you... did you really... just HIT me?!" and "Hmph~ Whatever you saaaay~" and "Oh. My. Actual. God." and "Well... ahem if you MUST know..." and "Darling~ ♪ Where are youuuu? ♫" and "Listen here, b-baka! It's not like I... l-like you or anything!" and "I-I didn't mean to-"

As you can see, in essence both are the same, with the distinction that Roleplay 1 has fewer tokens than Roleplay 2. I'd be lying if I said I notice differences myself, as I don't use AI Character Chat too often, nor do I know whether these were changed after the LLM update to fit the current model. But at least on a quick check, perhaps Roleplay 2 is more stable than Roleplay 1 simply because it is longer. Again, don't quote me on that.

Hope that helps!

[-] justpassing@lemmy.world 3 points 3 months ago

Is this what you were trying to achieve?

https://perchance.org/p311o9rh27

If so, what happened is that you pasted the HTML into the part where the Perchance-exclusive code should go, that's all.

However, if you are trying to make it work the way I think it should work... well, you'd need to get a Gemini API key and wire this up to it to get the image remixer done, since as far as I'm aware there is no Perchance plugin that can take an image as input. I may be wrong, though. A rough sketch of that wiring follows.
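For what it's worth, here is a rough, untested sketch of that wiring against Gemini's REST API (the model name and prompt text are placeholders, apiKey and base64Image are assumed to be supplied by you, and note that a text model will only "remix" the image in words; producing an actual image would need an image-output-capable model):

// Rough sketch (untested): send an image plus a prompt to the Gemini REST API.
// apiKey and base64Image are assumed inputs; the model name is a placeholder.
async function remixImage(apiKey, base64Image) {
  const body = {
    contents: [{
      parts: [
        { inline_data: { mime_type: "image/png", data: base64Image } },
        { text: "Describe how you would remix this image." },
      ],
    }],
  };
  const res = await fetch(
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=" + apiKey,
    { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(body) }
  );
  const json = await res.json();
  return json.candidates[0].content.parts[0].text;
}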

If you want to generate an image from text, check this plugin and this example. Hope this helps!

[-] justpassing@lemmy.world 3 points 3 months ago

Hey, no offense taken. In fact, my comment about people being overzealous about criticism is aimed at those who dismiss all criticism and do not address the elephant in the room just to be nice to the dev. What you offer, though, is healthy criticism and honesty... I'll admit I have no idea why your experience with the old model was so different, since some of the problems you describe (not maintaining the plot without extreme railroading, or not being able to go full gory-violent) I didn't have before, and when they arose they were extremely easy to correct. I know it is pointless to discuss since the model is gone and there is no way to test, but just to share my past personal experience:

  • The pacing in the old model was different; it took longer to get something done, but large conspiracies, betrayals, and even navigating map layouts with traps were possible. Something I agree with is that it had a lot of "dementia" moments, like what you cite about the guy forgetting how he got cursed. Personally, I believe that is a problem with any LLM (you may have seen the meme of a man playing 20 Questions with ChatGPT and how infuriating that can get). I still see that happening a lot here; then again, personal experience.
  • Funny that you cite Yvette, since in the old model, at least, I figured there was a particular set of instructions enabling "bloodbath mode", hidden in how Yvette and Kazushi were written. For me it worked perfectly: in one story, and I kid you not, the LLM introduced a villain who was a geneticist who first wanted to make a supersoldier serum, then resorted to literally creating the Cyberdemon from DOOM and spamming it at me, then kidnapped a child and sacrificed it to Satan in a ritual to summon some demon, and even when killed, she managed to get the demon into the child's mother, triggering the next big boss. Again, believe it or not, this was all the old model's idea.
  • As for implications... I'm a bit on the fence on this one, because while the last model was not the best at picking them up, the new one drops the ball a lot too. Then again, here I put the blame not on one model or another, but simply on the fact that LLMs cannot be all-encompassing tools, and sometimes it comes down to the luck of the draw whether the model in question will pick the correct answer to your query when it comes to subtleties. Even if this model were swapped for something like Claude Sonnet, I believe this would be a permanent problem.
  • Just to comment on some funny things you mentioned about the old model that I agree 100% were a constant nuisance: yeah, the LLM would treat everyone as equals to the point of having no shame in drafting a kid to war; it happened to me a couple of times and it was hilarious, but I get that it is extremely annoying. Same with the old model mixing up descriptions; the new one doesn't do that, but that is just a matter of how the pipeline works and how the LLM decides to filter the information. Since the new one handles the story itself with more care, some details in the Description turn into "suggestions" over time.

I don't say this to disregard your comment. I am not blind to the demons the old one had, but perhaps I had figured out the tricks to drive it properly while acknowledging its limitations, and compared to the amount of trickery I need in the new one, I still hold that the newer model requires an absurd level of maintenance that at times makes it not worth going beyond the 500kb threshold.

However, I also understand the need for an update, and that's why I hold zero trust in DeepSeek. Something I've found unbearable in the new model is generating anything remotely sci-fi related, due to its tendency to turn it all into word salad; I know it is possible to bypass that, but the level of maintenance required makes it not worth it in my opinion. Same with the slow decay of the English in the dialogs, which, as I made clear, is a feature of DeepSeek.

But hey, it's interesting to know about those things. Not many people talk about either current or past experiences in detail, so it is hard to know if we are being one-sided out of blindness. After all, we all want a better product, beyond fanaticism or whatnot. I still hold the opinion that if a rollback is not viable, a move away from DeepSeek is still necessary, but if its problems miraculously get solved, of course I'd be happy to be proven wrong in my predictions.

[-] justpassing@lemmy.world 3 points 3 months ago

Sadly yes, rerolling works, but the reason the model does this is the last message, where the inflection in attitude happened.

The way to fix it is to give the bot a last input that won't make its personality implode, or, if you must go down that route, abuse the Reminder box to make it act as intended.

I made an ~~obnoxiously~~ long guide on some pitfalls the current model has here, if it helps a bit.


Since there are still many issues with the current text generator, and since, as the developer said, it is still a long road until some of them are fixed, I'm presenting here both a guide and an explanation of why I suspect the current model acts the way it does.

While this guide is currently focused on AI Chat, the same principles apply to other generators such as AI Character Chat or AI RPG. I can promise you a pleasant experience up to a 500kb log size, my personal record being 1Mb before maintaining the story became an obnoxious task.

I am aware that this is a long read, so if you don't care about the specifics or my opinions on the matter, just use the following link as an alternative to AI Chat, and read the section dedicated to writing characters and scenarios, as well as the one describing the pitfalls.

I apologize in advance for the length of this post. I am by no means an expert on the matter, but I wanted to be as thorough as possible when presenting these findings, as I believe they may help the developer understand why this model and others behave this way, as well as anyone trying to run this offline and hitting the same issues.

Introduction

The current problem

About two months ago, the ai-text-plugin was updated from the old Llama-based LLM to a DeepSeek one. This was poorly received overall due to the new model being unruly and having a tendency to go overboard when handling stories, RP, chats, etc. A more recent post by the developer showed that constraining this model is in fact a challenge, but it promises to be "smarter" in the sense that it can handle certain scenarios better, which is true. However, this comes at a price, which will be explained shortly.

How are you so sure that the current model is a DeepSeek one?

Beyond speculation, there is certain evidence. We know the past one was Llama-based since that's what the ai-text-plugin page reads. And of course, anyone can ask the plugin directly which model is being used. The easiest way to do this is the Prompt Tester, also published by the developer as a learning tool. Here is the result of that query, direct from the LLM.

However, this may not be a smoking gun yet, since asking the same thing repeatedly may force the LLM to state that its model is actually Claude. So just to corroborate that we are indeed dealing with DeepSeek, here is a comparison of the same query sent to DeepSeek V3, the current model, and a ChatGPT variant as a control group.

Notice how the replies from the Perchance LLM and DeepSeek are similar: both default to bullet points, short descriptions, and language almost restricted to direct present tense and infinitives, occasionally using present continuous. ChatGPT and others often try to be more "vague" by adding conditionals and swapping to past tense when appropriate, and those models are not afraid to use other constructions such as "should have been". Then again, if this is not evidence enough, you can experiment with different models and get a distinct reply style from each. This becomes relevant because terse language, while a good feature for what DeepSeek attempts to do, backfires horribly in the context of Perchance.

So how does an LLM work?

I don’t want to dig too much in this subject, since it is highly documented and I’m not an expert, but the short explanation is this. A LLM is just a statistical model that tries to “extrapolate” what is an appropriate continuation to the input it was given taking as reference a large training data. For example, consider a very simple LLM that was trained with nine paragraphs that read something like this.

The cat loves to eat fish and stretch languidly in the sun. Its fur is often soft.

And one paragraph with this other entry.

The cat backhoe is capable of digging through concrete and pavement, depending on the skill of the operator.

In this LLM, if the input is "cat", about 90% of the time one may get something like this.

>Cat
>loves to bathe in the sun and eat fish to have a soft fur.

And the other 10% the output would read as follows.

>Cat
>is a machine capable of digging through pavement with proper operation.

With maybe once or twice a hallucination that outputs the following.

>Cat
>loves to dig in the sun, when appropriately operated by a fish.

However, something that will never happen in this hypothetical scenario is something like this.

>Cat
>to all known laws of aviation, there is no way a bee should be able to fly.

Why is this important to know for the problem at hand?

Because, as some memes say, "after an expert in a subject uses an AI, they soon realize the AI is a moron", and this will always be true, since unless an AI is overtrained on a single task, it will mix instructions and give nonsense outputs. This is impossible to solve, as LLMs are not meant to be "true" experts in a subject. What is possible, and what worked for several months prior to the update, is to recognize which inputs lead the LLM toward a desired output, under the assumption that certain words will lock the LLM onto a path and prevent it from outputting nonsense as in the cat examples. For instance, returning to that simple example, using "cat, the digger" as input would lock out all the references to the cat liking fish, taking sunbaths, and having fur.
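
Just to make that idea concrete, here is a minimal sketch of the toy "cat" model in JavaScript (the weights and tags are made up for illustration; this is not how a real LLM is implemented, just the statistics of the thing):

// Toy "LLM": continuations seen in training, weighted by how often they appeared.
const continuations = [
  { text: "loves to bathe in the sun and eat fish.", weight: 9, tags: ["fish", "sun", "fur"] },
  { text: "is a machine that digs through pavement.", weight: 1, tags: ["digger", "pavement"] },
];

function complete(input) {
  // Keywords in the input lock the model onto a subset of continuations.
  const matched = continuations.filter(c => c.tags.some(t => input.includes(t)));
  const pool = matched.length ? matched : continuations;
  // Weighted random pick: the more frequent training text wins more often.
  let r = Math.random() * pool.reduce((sum, c) => sum + c.weight, 0);
  for (const c of pool) if ((r -= c.weight) <= 0) return `Cat ${c.text}`;
}

console.log(complete("cat"));             // ~90% of the time: the fish/sun continuation
console.log(complete("cat, the digger")); // always the backhoe continuation

That filter step is all that "cat, the digger" does: it never makes the model smarter, it just shrinks the pool of statistically valid continuations.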

Is this behavior present in all LLMs then?

It is, and that’s by design. So whether the model is Llama, DeepSeek, ChatGPT, Claude-Sonet, or other, it is important to realize what sort of input forces an output when handling a task. And it is also important to figure out what type of outputs are either difficult or outright impossible unless one is holding the hand of the LLM. I want to emphasize this though: This was a problem prevalent in the old model, as if anyone checks the pipeline for any text generator (e.g. Chat Ai, RPG Ai, Character Chat Ai, etc.), you may see that the input is not just the text placed in the box in the frontend, but rather a long text which explicitly tells the LLM what is the task (RP or doing an adventure game), as well as your character description, lore, and a copy of the entire chat/log up to this point. And this pipeline is excellent for making this work, but DeepSeek in particular has a plethora of problems with it that I think everyone has seen at least once.

A gallery of known issues

Most likely, you have run into a situation like this at least once.

Anon: Welcome to McDonalds, may I take your order?
Bot: Bot stared at the menu overhead, tapping a finger on his chin thoughtfully. "Yes, I want," he paused thoughtfully, as rain gently fell outside, a slight scent of ozone lingering. "A Whopper."
Anon: Sir, this is McDonalds.

>15 paragraphs later

Anon: *Dialing 911, holding a gun to Bot* What the hell man?! That is just a cheeseburger!! What is wrong with you?!
Bot: Outside, the storm raged, resonating with impending violence. Bot's knuckles whitened around his knife. "Cheeseburger?" he blurted, voice low and dangerous, approaching Anon, boots clicking against tiles smeared in crystalized blood pooling shadows where signatures of heat harmonized with phantom pickles. Bot lunged, knife aiming at Anon, not to murder, but to immobilize, as police sirens wailed outside–empty, officers cold and bleeding, rain cascading over corpses–signaling utter defiance. *Whopper means redemption.*

Of course, this is an extreme case that shows everything that could go wrong, but it is still in the realm of possibility if things are not handled correctly. Now let's dissect each of the issues that can lead to this mess.

Escalation and impossible stakes

This is a consequence of the "make it interesting" prompt. DeepSeek equates "interesting" with stakes always at their highest and tension escalating into outright lunacy. In some media this may be true, but if you let the AI up the ante on every conflict, you'll be facing a world-ending situation after only 5 prompts. The way to detect this is as follows: no matter what your context is, at some point your Bot will want to introduce a threat, whether trivial or meaningful. E.g.

Bot: *Bot shifted with unease at a certain smell* Powder? *He whispered.* Someone has firearms, could mean trouble. *Knuckles whitened at the hilt of his gun.*

If this goes unchecked, you may find your story locked into fending off wave after wave of enemies nonstop, and not in a fun way, as the LLM is not shy about power-scaling you to keep the story going forever, as the prompt demands.

If the story, context, or similar does not require something like this, rerolling is a good idea. However, there are cases where you do want conflict and a fight scene. In such cases, the "easy" option is either to resolve the conflict in fewer than four prompts and dismiss any comment from your Bot claiming it is not solved, or to add a clear goal in the Scenario or Reminders to establish an end to the sequence. There is, however, a better way to control this behavior, which is to add the following, literally, to the Instructions part of the editable code:

- When describing conflicts, be aware of the current stakes and the context that led to this moment. Not every problem needs to be a world-ending situation, so be mindful of the pacing.

This may be placed between lines 27 and 38 as an extra, just to avoid bouncing between manually placing the stakes and deleting them, since we'll need to be mindful of another plethora of things that are harder to solve, and this problem is quite easy to get rid of from the get-go.

Mutism and poor English

This is an uphill battle, as by default DeepSeek will try to give the most concise output based on your prompt, and it will prefer describing the scene in flowery detail while sacrificing dialogue. This is something you may have run into after several prompts.

Output 1:
Bot: *Eagerly* Yes! I know how to handle this stuff! Leave it to me and I’ll get this done in no time! *She hums a tuneless melody while working on the project with renewed purpose.*

Output 10:
Bot: *A literal three-line description of Bot's physical features* Thanks. *Four lines describing what is going on outside (it is raining and it smells of ozone, by the way), plus five lines on whatever the task at hand was.*

Here the fix is not straightforward. I'll explain later in detail why this happens and what else to watch out for to prevent it, but the simple explanation for now is that between outputs 1 and 10 in this example there is a "simplification" of the language in dialogs, and more detail on unnecessary things.

To fix this, each time you see your Bot omitting articles, the verb "to be", pronouns, or even spitting out one-word sentences, you need to edit them into proper English. Not because the English is incorrect per se, but because the LLM will take it as an invitation to shorten things further, and while it is possible to "unstick" the Bot later, it only becomes more difficult. Here is an example of what I mean.

Unedited raw output:
Bot: *Shifting his weight* Boat? Harbor’s near, couple meters from here. Come, should hurry.

Edited output:
Bot: *Shifting his weight* A boat you say? Yeah, the harbor's near, just a couple of meters from here. Come, we should hurry up.

Also, don't be afraid to throw in the bin any descriptions of things that add nothing to the story. An infamous case of this is the abuse of "Outside, + 2 lines of pointless text" if you are in a roofed area, or something similar describing whatever the surroundings are doing if the situation is outdoors. Do not worry about those, since the LLM will resurrect those descriptions out of thin air, as they take priority over dialogue most of the time. You should be warier about how your Bot and other NPCs speak, as that determines how the story progresses and how they interact with you.

Manic personality

This is widely documented in some posts here; just to give two quick examples:

The problem is that DeepSeek, unlike Llama, treats the input differently, giving more weight to the story and the posts themselves than to the descriptions given at the beginning. That's not to say it ignores them completely, but it does not know how to balance complex personalities.

For example, say you are working on a spy thriller of sorts, and your Bot is a former agent now in retirement who, while yearning for peace, has the record of an impeccable hitman. You may be tempted to describe him with something along these lines.

Personality: Cold, detached, calculating; a product of his experience as an agent. Nowadays he is trying to start over, looking for peace, leaving his past demons behind, and striving to become a better person.

This in practice won’t work at all, since you have two conflicting personalities that, while may be realistic in some sense, the LLM will throw one side of this to the bin depending on the context and run with only one side, and that will lock you out of the other side. So the following under this example is totally possible.

Paragraph 10:
Anon: *Reloading his gun* I don’t know chief… we are outnumbered. Unless a miracle happens, we are not surviving this one!
Bot: *A small smirk formed* Predictable. *Knuckles whitening against his knife* Observe, rookie. *Bot moved with unnerving grace towards the corridors, drawing crimson at the opposing soldiers, dispatching them with cold efficiency.*

Paragraph 15:
Anon: *Grabs the files* We did it boss! Mission success! Now we only got to get the hell outta here!
Bot: *Bot traced patterns against the hilt of his gun, a habit formed during his service years earlier* Mission success? *He whispered* What about the war? Hostages? No… this is no victory… *Bot looked at the ceiling, eyes empty against the phantom of his past* War never ends… what if… we accomplished nothing?

The reason for this is twofold. In this example we gave the LLM two personality options to pick from, and because it refuses to combine them, it picks whichever suits the context best. The second reason is that the last input carries a lot of weight in the earlier stages, when the text is still short and the LLM has no point of reference for how to address the situation; it will look for something in its data bank that can be accommodated to the existing context, effectively turning jolly characters into serious, near-depressive ones, or serious, over-focused ones into manic, bouncing-all-over-the-place types.

Dealing with this is tricky, and it will be explained in detail later, but one way to address it is to give your Bot the personality that fits the situation manually (see the sketch below), as well as being extra aware of the context of the last input and checking whether there is indeed a sharp change in personality. Your Bot will not change gradually, so it will be extremely evident when it happens, and either manually editing the output or rerolling ad nauseam will keep your Bot locked on the desired personality.
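
As a minimal sketch of what "manually" means here (the wording is mine, made up for illustration): while the retirement arc is active, keep only that side in the Description, and stage the other side through the Reminders when the context calls for it.

Personality: Warm but guarded; a man trying to start over, looking for peace and leaving his past behind.

# Reminders (only while a fight is active; delete afterwards)
Bot falls back on his agent training: cold, efficient, no hesitation. This is a relapse, not his default self.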

Forgetfulness

I put this here because of the reported case of the wife who forgot her children existed. Just to be clear, this was also an issue with the old Llama model, as was the case of the LLM being unable to track directions (seen here). However, why it happened in Llama differs from the DeepSeek case, even if the solution is more or less the same.

In the "wife forgetting her kids" case, I suspect the user was continuing an already long log and bringing up the kids when they hadn't appeared for several paragraphs and were written nowhere in the Bot description. Because the log will have several instances of the wife not having kids, or even being single, the LLM concludes that "kids" are non-existent to her. I even suspect she would forget she was married; but if a kid were referenced by name, the LLM would recall it immediately, again, because it exists in the log.

The simplest solution is to just add this as a Reminder, or put it outright in the Bot description. The latter may be a pitfall, however, as given a particular context the LLM will try to summon elements of the description out of thin air, and that may lead to not-so-nice situations, e.g. said wife bringing her kids into a battle zone or similar.

Again, the best way to deal with this is dynamically, meaning bringing it up in the Reminders and/or Description and then deleting it when it is no longer relevant. There are a handful of pitfalls with this method, but we'll detail them later.
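
For illustration, a dynamic Reminder for the "wife" case could be as short as this (the names are hypothetical), deleted once the kids stop being relevant to the scene:

# Reminders
Bot is married to Anon and they have two kids, Mia and Leo. The kids are currently at school and NOT present in this scene.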

Obsessions

This is a deep rabbit hole, and probably the worst thing DeepSeek has in store. There are two types of obsessions in the current model: permanent ones and contextual ones.

Permanent obsessions

Those who played with the old model before it was discarded may remember its catchphrases ("Let's not get ahead of ourselves", "We are in this together", "We must tread carefully", "This is not like chess", "X is like a hydra, cut one head off and two more will appear", "X can't help but feel a pang of Y"). Believe it or not, the current LLM has them too, but not in the shape of catchphrases; rather, in the form of patterns.

A couple of patterns that you may have caught unconsciously are as follows.

See? <Complementary text, often short>
But… but… <Complementary text with abuse of ellipsis>
<Short proposition> Promise?
<Order> Go!

These are not malign on their own, but they will weave larger patterns that you will actively want to avoid (more on this later), and there is nothing you can do to prevent the LLM from using these constructions. Again, by themselves they are not bad, but they can snowball into larger problems.

And speaking of snowballing, the old LLM had a tendency to push a "charitable agenda", in the sense that it would push unity, friendship, and similar values, often disguised as "activities", the most infamous being the Bot desperately wanting you to attend or host a festival. In the past this was easy to avoid, and even if it took root, there were ways to work around it. In the present, the obsession is different, and you should NOT let it take root, or the whole story will be compromised.

The new obsession is sci-fi and engineering, particularly vibration physics and pseudo-concepts of quantum mechanics. The former is the more dangerous, as there are many instances of things that "resonate", "harmonize", or similar, and while one mention is not too harmful, leaving it unchecked will render all of your future outputs unreadable.

Sadly, this comes prepackaged with DeepSeek, just like "excessive charitability" came prepackaged with Llama. If you are not convinced, please take a look at the following video, where a DeepSeek model is used to scam users by posing as a "deity" of sorts, and compare its output with some instances of the Perchance LLM reaching those topics.

https://youtu.be/8Kb5NBAMaGw

By the way, I am NOT implying that there is something fishy going on with Perchance, but I want you all to see how "resonance" and similar terms invite future outputs to turn into straight-up dementia. A quick example of how deep-rooted this family of terms is in sci-fi and the like is as follows.

If this is not evidence enough of how dangerous this pitfall is, just try the following: open AI Character Chat, erase the emoji in Chloe's sample text, and just tell her "Hi, how are you doing?" At least 40% of the time you will run into the DeepSeek obsessions, 30% of the time being vibration physics and quantum mechanics. Here is a sample of this problem.

Sadly, this is impossible to get rid of completely, the same way it was impossible for Llama models not to be "too nice" at times. But yes, there are workarounds. The first and best is to edit the Instructions in the editable code, between lines 27 and 38, adding the following.

- Do not use technical lingo nor pseudoscientific terms. Do not obsess over technicalities or describe physics unless the context explicitly requires it.
- The following words are forbidden, DO NOT use them at all: resonance, resonates, vibration, harmonizing, crystallization, (others that fall into this)

They will still pop up randomly, but their prevalence will be dampened significantly. Of course, once one pops up you may want to edit it out so the LLM doesn't latch onto it and turn your story into word salad.
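
If you keep local copies of your logs, a trivial checker like this (my own sketch; the word list is just the one above, extend it as needed) can flag an output before the pattern takes root:

// Flags DeepSeek obsession terms in a generated message so you know to edit or reroll.
const FORBIDDEN = /resona\w+|vibrat\w+|harmoni[sz]\w+|crystalli[sz]\w+/gi;

function flagObsessions(output) {
  return [...new Set((output.match(FORBIDDEN) || []).map(w => w.toLowerCase()))];
}

console.log(flagObsessions("Her voice resonated, harmonizing with the crystallized air."));
// -> ["resonated", "harmonizing", "crystallized"]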

Contextual obsessions

Just to draw another comparison with the old AI: for a long time, I thought certain terms were ingrained in Llama (e.g., whispers, kaleidoscopes, clutter, fractures, eclipses), but it turns out they reappeared in the new LLM, leading me to believe this is a product of the training data and not something specific to Llama. For DeepSeek this is extremely dangerous, as it DOES have terms tied to it, and it can now inherit the problems the old LLM had, such as materializing "whispers" into an all-encompassing entity.

The way a contextual obsession appears is via patterns, not the word itself. Remember how I mentioned that those innocent-looking text cues could evolve into something more dangerous? This is how.

Output 3:
Bot: *She crouched, plucking a chrysanthemum petal* See? Nature thrives here! *She giggled.* The contamination is not here… yet! *Her voice dropped to a conspiratorial whisper.* Maybe… maybe we can try planting starters here?

Output 4:
Bot: *She crouched, patting the soil* See? This is a good spot! *She giggled.* Far from contamination! *Her voice dropped to a conspiratorial whisper.* Maybe… maybe if we plant it here… *She placed the starter in the ground* It’ll grow big!

Output 20:
Bot: *She crouched below the table* See? I fit here *She giggled.* Like a pot on a windowsill! *Her voice dropped to a conspiratorial whisper.* Maybe… maybe we can have an indoor garden?

As you can see, from this point on, the Bot will always crouch, giggle, talk in whispers, and reference plants. This may be an exaggeration, but a situation like this is possible in any context, and not because the LLM has an obsession with gardening, but because the structure of the text is too similar between outputs. Ideally, you don't want two outputs to be mimicries of one another, because in this example the pattern will force "gardening", and likewise, mentioning anything plant-related will invoke this same format, locking you into an endless spiral.

Be very wary of this, since a pattern can come in many shapes and forms. The LLM has some set ones, and one you should avoid at all costs, by rerolling or writing it yourself, is the following.

Bot: Description of Bot. “Verb? (Copy of something you said prior)” Description of Bot again and of what is around it. “Small dialogue, with two verbs and no article or preposition” Description of the place or outside–List of things for flavor–the description continues *Five or six word thought*

Sometimes, even text you use or give the Bot as "Reference Dialog" can turn into a repeating pattern. More often than not, rerolling is enough, but this forces you to parse the document a handful of times in case there is a repeating pattern that will force certain words and ideas to resurge out of context, turning the story into chaos.
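
If you'd rather not eyeball the mimicry, here is a rough heuristic I offer as a sketch (nothing built into Perchance): score two outputs by the word trigrams they share. Anything above roughly 0.3 between consecutive Bot messages is probably worth a reroll, but treat that threshold as a guess, not a rule.

// Fraction of the shorter text's 3-word sequences that also appear in the other text.
function trigrams(text) {
  const words = text.toLowerCase().replace(/[^\w\s]/g, "").split(/\s+/).filter(Boolean);
  const grams = new Set();
  for (let i = 0; i + 2 < words.length; i++) grams.add(words.slice(i, i + 3).join(" "));
  return grams;
}

function structuralSimilarity(a, b) {
  const [ga, gb] = [trigrams(a), trigrams(b)];
  const [small, big] = ga.size <= gb.size ? [ga, gb] : [gb, ga];
  if (small.size === 0) return 0;
  let shared = 0;
  for (const g of small) if (big.has(g)) shared++;
  return shared / small.size;
}

// Outputs 3 and 4 from the gardening example above would score dangerously high here.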

Then, how to use AI Chat?

So far, I have described some problems and why they happen. There is still more going on under the hood, but this information is more than enough to make the experience pleasant again up to a 500kb log size. By the way, I reference this file size as the size of the document you output when saving the log. That is an easy way to track how much the new LLM can handle and to make comparisons with the old model.
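
If you want to check the size without saving, this one-liner in the browser console gives the same number the saved file would (assuming you have pasted the log text into a variable; the variable name here is made up):

const logText = "...your full chat log here...";
console.log((new Blob([logText]).size / 1024).toFixed(1) + " kb"); // UTF-8 byte size, same as the saved file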

Prepare the descriptions

The old LLM, as well as most of the examples out there, was focused on token economy, meaning how to give the LLM the most information possible without running into short memory (more on this later). The best way to handle this now is the inverse: to get the best out of the new LLM, you need to write the descriptions not as lists, but as readable text, as if you were writing a high-school report.

Consider this example.

Bad description:
Name: Mario
Appearance: Mustache, short size, red shirt, red cap, blue overalls, white gloves, brown shoes.
Powers: Jumps. When eating a mushroom grows. When consuming a fire flower, shoots fire.

Better description:
Name: Mario
Appearance: He has a mustache and is of short size. He also wears a red shirt, a red cap, blue overalls, white gloves, and brown shoes.
Powers: Mario can grow in size when he eats a particular mushroom. If he consumes a fire flower however, he is able to shoot fire.

Even better description:
Mario is a short-sized man who has a mustache. He also wears a red shirt, a red cap, blue overalls, white gloves, and brown shoes.
His powers include growing when eating a particular mushroom. If he consumes a fire flower however, he is able to shoot fire.

Of course, you can still use markup notation to organize what is what. But try to avoid terse descriptions, because as shown before, the LLM takes them as an invitation to start abridging text, and you'll run into the mutism problem earlier than expected.

Also, because the new LLM does not handle complex or conflicting attributes, it is not a good idea to include any information that is not relevant to the moment. For example, if your story is about you running a hot dog stand, but your Bot for whatever reason has a military background, DO NOT add this to the description until it becomes important in the story, or one of two things you won't want will happen: either war will knock on your door, or your character will become a broken record referencing anecdotes (see the "Caricaturisation" pitfall for more information).

The same goes for the Setting and Lore box. While free form is a possibility, it gives the LLM too much permission to insert all of its pitfalls, turning your experience into a nightmare. For the most part, leaving it blank is fine, but when you hit a particular point of conflict (i.e. discussions or fights), you want to add an explicit goal and stake to prevent the AI from escalating the conflict or making it last forever. Again, it is not necessary to input a long text detailing all that is going on, but it is advisable to put it as if explaining it to someone. E.g.

# Current Setting
A gang is trying to mug Anon and Bot. This is just a small group, which can be taken on in a fight and will not bring reinforcements. Likewise, this gang does not represent a threat to law enforcement.

In this example, your fight scene will stay contained, and you will not be forced to take on the entire mafia in record time. This does not prevent you from facing consequences later, but it allows for a more natural flow of events rather than having to achieve world peace on a timer.

Reviewing the outputs

Sometimes rerolling forever is not a good option, since, as established, unless you perform some serious railroading in the Descriptions and Reminders, the LLM will not give you a 100% output by default. The first thing to check is correct English. Again, I am aware that it is realistic for a character to speak plainly and briefly, but you should be mindful to manually change the tense of verbs, add articles, and so on. This is to prevent the Bot from doing the following.

Output 30:
Bot: *He boomed* Duck! Barrel! *Bot pushed Anon downwards, dodging the barrel* Careful! Idiot! *He pointed to the tilted crate* Move! Now!

Likewise, at times you'll notice descriptions of accessory things that have no merit being there, but the LLM will latch onto them due to pattern repetition. Just delete them with no replacement.

Auditing summaries

This was never a concern in the past, but now it is another uphill battle when reaching the 150kb log size mark. DeepSeek is not capable of summarizing things in a "readable" fashion; when it does, it relies on the bullet points we are restricting it from using. So every now and then you may want to scroll up and search for something like this.

SUMMARY^1: Some summary of A.
Some outputs.
SUMMARY^1: Now a summary of B.
Some outputs.
SUMMARY^1: And now comes a summary of C.
Some outputs.
SUMMARY^1: This should be the summary of D.
SUMMARY^2: Some summary of A. Now a summary of B. And now comes a summary of C. This should be the summary of D.

The second this happens, you are in trouble, because if the second, third, or higher-order summary is just the past summaries pasted together, it means the LLM is starting to get stuck, and this will be reflected in your future outputs. The ideal solution is to manually erase the tainted summary and write it yourself, but an easier option that requires no effort is to take it and run it through a summarizer or paraphrasing service such as Quillbot, or maybe even the ai-text-plugin via the Prompt Tester. But you cannot leave that summary as it is.

Again, for short texts this may not be an issue, but as your log grows, you'll need to be wary of this. If you reach the point where the order of the summaries skyrockets to SUMMARY^9, then your run is over, as SUMMARY^5 through SUMMARY^8 will read.

SUMMARY^7: Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb. Bot verb. Anon verb.

At that point, you can predict what will happen with the future outputs.
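
If you save your logs, a small audit script like this sketch (it relies only on the SUMMARY^n notation shown above) can catch the lazy paste-together summaries before they poison the run:

// Flags any SUMMARY^n that merely contains all of its SUMMARY^(n-1) entries verbatim.
function findLazySummaries(log) {
  const byOrder = {};
  for (const [, order, text] of log.matchAll(/SUMMARY\^(\d+): (.+)/g)) {
    (byOrder[order] ||= []).push(text.trim());
  }
  const lazy = [];
  for (const order of Object.keys(byOrder)) {
    const lower = byOrder[order - 1] || [];
    for (const text of byOrder[order]) {
      if (lower.length && lower.every(s => text.includes(s))) lazy.push(`SUMMARY^${order}`);
    }
  }
  return lazy; // for the example above this returns ["SUMMARY^2"]
}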

Pitfalls

While the above is more than enough to use AI Chat without much trouble, it is worth knowing what causes the LLM to go haywire and how to prevent it immediately. Granted, you can already predict some of these guidelines from the gallery of things that go wrong stated prior.

Deus ex machina

Similar to the old LLM, the new one will NOT let you reach a game over, nor let you indirectly dispose of your Bot, even if you defined it as an enemy. This can lead to really absurd situations.

Case example 1.

Your Bot is your enemy, you are facing it, and you have the backup of several NPCs, pretty much putting the Bot in an unwinnable 50-vs-1 situation. Your expectation is to capture or imprison Bot after some resistance, and then negotiate or similar.

If it was established earlier in the log that Bot has won a couple of fights and has some bullshit skill, then even when it is cornered by all means, not only will it not surrender; the LLM will decide that, by the magic of cinema, Bot WILL pull off an overwhelming victory, pretty much turning the tide against you no matter what you do, because the LLM is not afraid of power-scaling to prevent Bot from failing and to keep the story going.

A solution to this problem is to outright state as input "This knocks down/incapacitates/kills Bot", which forces the battle to an end; but it is by all means a disappointing experience and sets a precedent of you being invincible, which will cause problems in the future.

A better solution is to edit the Bot's Description to give it a sense of limitations, and to make sure the Reminder reflects that the conflict is one-sided. This is railroading the LLM, but it gives a bit more leeway in how to deal with the situation.

Case example 2.

You and Bot are fleeing from a large group, or from an enemy unbeatable at the time. For some reason you decided to follow Bot's advice on how to deal with the situation, and now you find yourself about to experience death.

Even if this is by all means a death sentence, by the magic of cinema you will defeat your enemy, through some ridiculous technicality like "the death tickle" or a random piano falling on the enemy's head. However, right when this happens, a second boss fight will ensue, worse than the last, for only the LLM decides you can win that one too; then rinse and repeat in an endless loop.

Similar to the prior case, the easy solution is to outright state that your escape attempt was successful and avoid the fight, deleting any Bot input along the lines of "No, we are not safe", or you will return to the lunacy.

A better solution is to deploy the Reminder option by working out your escape and never engaging in the fight, because the resolution of this fight will always be unsatisfactory, unless you accept that this is the point where your character achieves godhood; and at that point, we are no better than the LLM in the first scenario.

Caricaturisation

As said before, the Bot is prone to changing personality depending on the context, sometimes completely ignoring whatever personality description you gave it. In fact, the way DeepSeek parses the Descriptions is as recommendations, since the focus of its task is to complete the story at hand in a way that makes sense to it (more on this later).

While this is harder to pinpoint, it is extremely easy to fix, as the culprit is a particular input that acts as a turning point for the personality (check the Manic personality subsection in the gallery for more info).

Let’s say you have a long enough log and you just realized that a Bot meant to be bubbly and jolly has become a depressive wreck, incapable of taking one step without questioning whether it will bite it in the rear. One approach is to copy the log into a notepad and review where the last spot was where the Bot behaved as intended. If it is the case of a pattern, then you need to delete that whole section, return to where the Bot behaved correctly, and reroll from there forward until you get the expected personality, or guide it yourself by writing on top of the output.

If it was not a pattern, however, and you need this whole section for "character development" (which is impossible, by the way), you can use the Reminder to hint at a desired outcome and edit the Description to reinforce the original nature. An alternative is to give it a "Dialog example", which is used in the existing examples of some characters in the default roster (e.g. Ike or Cherry). Personally, I never used it with the previous LLM, but with the current one it can be a good tool to unstick your Bot. Keep in mind that it is just a crutch, though: once the desired personality is restored, you should remove it, or you'll run into a pattern and your Bot will become a broken record, effectively damaged beyond reasonable repair, ending your run.

Patterns

By far, this is the lingering demon in DeepSeek. As stated before, anything can form a repeated pattern, even em dashes and semicolons. A past post recommends outright banning them, and so do I, since it removes a vector of problems. But there is more to watch out for, especially when reaching the 150kb–200kb mark.

The old LLM was able to change output styles with ease and without everlasting consequences. The new LLM cannot; in fact, it may try to lead you into the style described in the Contextual obsessions subsection, and you should avoid that like the plague.

Depending on your goal, whether a quick run or aiming for the 1Mb log size, you may want to trap the LLM into one of two writing styles.

For short runs:
Bot: Description of Bot “Dialog by Bot” More description of what is going on “Some more dialog of Bot” Perhaps a following action “Extra dialog” Conclusion of the scene.

For long runs:
Bot: *Short action by Bot* Dialog by Bot *Short description of Bot interacting with something* More dialog by Bot *Final actions by Bot*

Both have their advantages and disadvantages. The first allows for progression without needing to invoke the Narrator while you interact with your Bot. It is more fluid in how things develop, but the cost is that you'll eventually see Bot's dialog waning over time, requiring you to manually add more and more lines to it if your story is getting long, to keep Bot alive instead of turning it into just a narrator prompt.

The latter is more merciful on Bot's dialogues, but it comes at the cost that you will need your Narrator to carry the resolution of scenes, conflicts, and whatnot. Also, while it is safer for longer runs, it causes your Narrator to fall victim to caricaturisation and to form its own patterns. So pick a format that fits your needs and stick with it for the run.

It is not advisable to change patterns midway, as you'll notice it causes your Bot to develop a double personality, since its behavior is tied to how the text is written; and while in some cases this is hilarious to watch, it will lead to endless frustration.

What is going on under the hood?

The wall of text above is more than enough to survive the new model, but it is not a bad idea to understand what is really happening, now that we have covered how an LLM works and all the pitfalls it has.

Input-output pipeline

For starters, what most people don’t realize is that the input to the LLM is not just your last line. That is, this is not what is going on.

Input: Anon: *Some action.*
Output: Bot: *Some other action.*

In reality, the input looks more like this.

Input:
Please write the next 10 messages for the following chat/RP. Most messages should… (long instruction)
# Reminders
(Several reminders here)
# Here's Bot's description/personality:
(Bot description)
# Here's Anon's description/personality:
(Anon description)
# Here's the initial scenario and world info:
(The description of the scenario and lore box)
# Here's what has happened so far:
(Literally the whole conversation log up to this point including your last input)
Your task is to write the next 10 messages in this chat/roleplay between… (More instructions)

And this is the reason I’ve been emphasizing how the log itself affects the output, from the patterns to the repeated terms and the potential obsessions the LLM will pull. As you can see, the instruction can be summarized as:

Here is a long ass context for you to parse: [all inputs here]
Tell me what happens next in 10 messages (I’ll take only the top one)

This pipeline as written applies to the AI Chat generator, and while I am not too sure how AI RPG and AI Character Chat work internally, I can assure you it is very similar. So in practice, your descriptions are competing with the log itself to generate the next paragraph. Knowing this, you may realize that what we are effectively doing is feeding the LLM a corpus that is about 70% AI-generated, causing it to "inbreed"; hence the obsessions.
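
To make that competition visible, here is my reconstruction of the shape shown above as a template (this is not the actual plugin code, and every name in it is made up):

// Illustrative only: the log grows every turn, the descriptions do not.
function buildPrompt(reminders, botDesc, anonDesc, scenario, log) {
  return `Please write the next 10 messages for the following chat/RP. ...
# Reminders
${reminders}
# Here's Bot's description/personality:
${botDesc}
# Here's Anon's description/personality:
${anonDesc}
# Here's the initial scenario and world info:
${scenario}
# Here's what has happened so far:
${log}
Your task is to write the next 10 messages in this chat/roleplay between...`;
}
// After a few hundred turns, log dwarfs everything else in the prompt, which is
// why the model ends up weighing the (mostly AI-written) log over your descriptions.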

This is not, however, an error or an oversight; this method works and is excellent on paper, as it allows the LLM to keep context of what is going on. Therefore, returning to the gallery examples: in the case of the guy whose wife forgot her own kids, it was not that the LLM forgot, but that the LLM decided the kids should not exist, given the context of the last output and what precedes it, as at some point in the log there was a precedent of the wife contemplating whether to have kids at all.

The same happens in all cases; hence why pattern formation and the caricaturisation of characters are prevalent. We feed the LLM said patterns, so it decides that, against all warnings, the next part of the story must break the personality description prompts.

Why didn’t the old model exhibit this problem?

The answer is… it did, but not at the 150kb mark. The Llama model had the same demons described here, except they became evident at the 8Mb mark, and the whole story became unbearable by a whopping 15Mb, my personal record being a 23.6Mb log before I decided to give closure to that project. Compared to the promised 500kb in the current LLM and a record of 1Mb, the difference makes clear that something made one model far more stable than the other, by such a big margin in log size, before entering lunacy.

For instance, a problem in the old model parallel to the current one is the obsession with terms and caricaturisation. A case I cited was "whispers" becoming a real, all-encompassing entity that was at once the enemy, the ally, and the driving force, forcing Bot into an endless spiral of fetching MacGuffins that might solve the problem but never would, so the cycle repeated. The exact same happens now, with a different flavor, but at a smaller log size.

If I may guess why Llama could carry a story longer than DeepSeek, it is, ironically, because of how limited and static it was. A complaint about the old model was that the stories and plots it made on its own were very similar, which is true: Llama took the "story" input seriously and defaulted to the medicine-quest/hero-story template, taking the context given to it and slapping it into that formula. Hence why, more often than not, the old model was obsessed with obtaining a magical artifact that would solve the problem, even if in context that solution made no sense (e.g. a mob boss hiring a smuggler to go fetch some artifact in Brazil, Indiana Jones style, despite his problem being literally to go kill the police).

DeepSeek, however, has no sense of "story" as a guideline written in stone. It has several stories and literature examples in its training data, but it allows for extreme flexibility, which ends up working against it, because it then assumes the context given to it is a proper story and tries to build on top of it on the fly. Without a strong guideline, and since the input will always be about 70% AI-generated unless you are willing to rewrite every input and summary, it will inevitably fall over pretty quickly.

Does this have a solution?

It does: post-train DeepSeek in an extreme way to make it understand that it should not weave a story from scratch, but rather take a template and paste the given context onto it. This, however, comes at a price, since it will make its "intelligence" drop like an anvil; it will lose flexibility and become similar to the older Llama model, while inheriting the caveats it now has (i.e. abridged, unpleasant English, and an obsession with vibration physics and quantum mechanics).

But the pipeline can be changed to fit the new model!

Yes, but that carries a new problem. Suppose we want every input to the LLM to look like the very first one, to keep it "clever" and force it to make a good story that could actually last long. That can be done by not feeding it the entire log, but only the very last inputs plus a large summary of all that happened before. The cost is losing all context and having the LLM pull things from thin air, as it won't have a reference for what to do. What I'm describing can be done using the Prompt Tester, and it results in a duller experience than the existing one, proving that the current pipeline is indeed superior no matter the model being used.
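
In terms of the sketch from the pipeline section, the experiment amounts to swapping the full log for a digest (again, every name here is made up; you can reproduce the idea by hand in the Prompt Tester):

// Same prompt shape, but the model only ever sees a compressed view of the past.
function buildShortContextPrompt(reminders, botDesc, anonDesc, scenario, bigSummary, lastMessages) {
  const digest = `${bigSummary}\n${lastMessages.join("\n")}`;
  return buildPrompt(reminders, botDesc, anonDesc, scenario, digest); // reuses the earlier sketch
}
// The prompt stays small and "fresh", but any detail that didn't survive into
// bigSummary is gone for good, so the model starts pulling things from thin air.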

Is it over then?

For anyone who wants the full, long, immersive experience the old model had, it is indeed over. That will never come back under the DeepSeek model.

However, that doesn’t mean the current model is entirely worthless. There was a demand for it, and certainly some will benefit from it. At least I can safely state that the AI Story Outline Generator, the AI Plot Generator, and the AI Hierarchical World Generator benefit the most from DeepSeek, as it does not have the restraints Llama had, meaning it can be more creative.

On the other hand, generators such as AI RPG, AI Chat, and AI Character Chat will suffer the most, AI Character Chat being salvageable under the presumption that the user will not attempt to get a story with a plot, but rather chat with a fictional character of their choice and use it as a virtual assistant.

Likewise, as explained before, in the event that the current LLM is tuned to resemble the old Llama model, the situation will flip, and the generators that now benefit from it will suffer the problems they had under the Llama model, probably with a handful of new ones.

“Smartness” vs competence

It is important to understand that no LLM is "smart". As explained at the beginning, an LLM is just another statistical model, albeit one capable of taking long, unformatted text as input and generating an output. It is "smart" in the sense that its training data is more complete and curated, allowing more accuracy in certain situations. That being said, for a task such as "write a story", total accuracy is impossible.

One alluring factor of the current LLM is that it knows certain topics and media better, letting the user skip researching them and just tell the LLM "The setting is Digimon", letting it fill all the gaps in the lore. Sadly, if an expert in Digimon were to assess the accuracy of the facts the LLM pulls, it would become apparent that it still makes several mistakes, some of them irreconcilable.

Personally, I believe the "smartness" part should be handled by the user. The current pipeline and format of the generators allow a lot of freedom, and that is why it is possible to have fun with the current model despite my comments on it. Granted, the older model was more fun in my opinion, but no matter the model and its state, it is possible to get any output from it, with varying degrees of effort.

A brief history of DeepSeek

DeepSeek was born as competition to ChatGPT, and its focus was to be a lighter model, faster and accurate enough to outweigh other models on the market. For that particular purpose, DeepSeek meets its goal: compared to ChatGPT and others, it delivers exactly that, with even larger training data.

However, ChatGPT serves a particular function as well; or better said, its purpose is to fit ALL functions, without specialization and in a "one input only" fashion. The pipeline of all ChatGPT applications is diametrically different from the one found in DeepSeek, as it was conceived as a virtual assistant capable of doing one task at a time.

From all that I have presented, you can infer that perhaps ChatGPT itself would have problems with the current way Perchance and its generators work, as it was intended to be "flexible and free form". Feeding the model AI-generated text is a disaster, since it takes it at face value. DeepSeek falls into the same pitfall, and even after fine-tuning it onto a structure of what a story is meant to be, it will carry the obsessions it has, which arise from the data it was originally trained on.

Final thoughts

For the users reading this, thanks for bearing with this long guide. I tried to be as thorough as possible, since I still enjoy using AI Chat for mindless fun; so while it saddens me that the old model is gone, there is still a use for the new one, even if it means a complete change in expectations. If anything, it is the end of one "game" and its replacement by a "completely different game", both with different rules, different goals, and different gameplay, which is what I tried to explain here.

In case the developer reads this, I’d urge you to drop development on the current DeepSeek model. If, for reasons unknown to us, the old model cannot be released in replacement of this one, or as an extra like a second plugin, I’d strongly suggest not spending more time trying to tame the current DeepSeek model and looking for another one instead. Llama on its own has a new Llama 4 Scout that may rival DeepSeek in performance, and it has a track record of handling this task better.

Finally, to the zealous "overly positive users" who may see this as heresy for speaking against the new model: while most of you have made it abundantly clear that criticism is taboo here, pretending that the new model is free of failure would be disrespectful to the developer. Support means honesty, and while I am not oblivious to the pitfalls and quirks of the old model, I maintain that the current product is a vastly inferior one that, comparing one to the other, has no redemption.

Also, if you have problems with the current model, I’d be happy to help you "unstick" the tale and give pointers on how to achieve what you need.
