[-] Almaumbria@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

Hii, you could try with a coin flip:

else if (racc.checked) {Math.random() > 0.5 ? "and wearing [randAcc]" : ""}

Math.random() is uniformly distributed, so there's roughly a 50% chance that the result is above 0.5, and thus a 50% chance that you get an accessory.

For different odds, play around with the threshold (0.5): raise it to make accessories less likely, lower it to make them more likely.

EDIT: Oh, right, I forgot -- you can also calculate the threshold based on how many accessories you have. It should go something like this:

Math.random() > 1/accessories.getLength

If you have five elements in accessories, 1/5 gives you 0.2, so you'd get an accessory about 80% of the time. So that's another thingy to play around with.
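To make the idea concrete, here's the whole thing as a plain JavaScript sketch (names are made up, adapt them to your own lists):

```javascript
// Hypothetical accessory list for illustration.
const accessories = ["hat", "scarf", "monocle", "cane", "brooch"];

// Chance of getting an accessory is 1 - threshold.
// With threshold = 1/accessories.length = 0.2, that's ~80%.
function maybeAccessory(list) {
  const threshold = 1 / list.length;
  if (Math.random() > threshold) {
    const item = list[Math.floor(Math.random() * list.length)];
    return "and wearing " + item;
  }
  return "";
}
```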

EDIT2: Just correcting my own faulty math :D

[-] Almaumbria@lemmy.world 4 points 2 weeks ago* (last edited 2 weeks ago)

Hii, style='...' is for writing CSS directly into a single HTML element. For anything you need to be reusable, what you want is to define a class within the <style>...</style> block, e.g.:

.my_class {
  color:            #B8B8B8;
  background-color: #303030;
  border-color:     #383838;
}

And then you can set an element to use it via class="my_class". You can have multiple classes on the same element if you separate the names with spaces (class="my_class another_class").

[-] Almaumbria@lemmy.world 3 points 3 weeks ago

Yup, I'm a wee bit of a Linux sysadmin kinda nerd, and I do a lot of stuff in Perl -- a very dense language I wish I had never picked up, but hey, it is excellent at text processing.

The original idea went like this: save a template to a text file, read it, perform replacements, and then write that modified copy to the clipboard. The original version was just that, not too complicated. Off the top of my head, the meat of it is something like this (pseudocode):

copy="";
for line of readlines filename
  for key in table
    line=line.replace(/\b${key}\b/,table[key]);
  copy += line + "\n"

Basically, you find key within the template and expand it to the corresponding value in the table, which is just JSON, eg:

const table={
  "--lighting":"text describing lighting",
  "--background":"text describing background",
  // and so on
}
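For the curious, that loop translates to plain JavaScript pretty directly. A minimal sketch (hypothetical names; I use split/join instead of the \b regex, since word boundaries don't play nice next to "--"):

```javascript
// Table of keys to expand, same shape as above.
const table = {
  "--lighting": "text describing lighting",
  "--background": "text describing background",
};

// Replace every occurrence of each key with its value.
function expandTemplate(text, table) {
  let out = text;
  for (const key of Object.keys(table)) {
    out = out.split(key).join(table[key]);
  }
  return out;
}

// expandTemplate("Scene with --lighting.", table)
//   → "Scene with text describing lighting."
```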

But then I realized that for a character, I would need more than one item. So that means this thing needs to work recursively, that is to say "--character" would have to be an object, which has to be converted into a string that can be put into the template...

Similar story if, for instance, one element references another. Maybe you replace "--background", but the text for it contains a reference like say "--time-of-day" or something. So you have to do multiple passes, until there is nothing left to replace.
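The multiple-passes idea, sketched in plain JavaScript (illustrative names; a real version would also want cycle detection):

```javascript
// Keep expanding until nothing changes: a fixed point means
// there are no keys left to replace. maxPasses guards against
// accidental infinite loops (e.g. a key whose value contains itself).
function expandDeep(text, table, maxPasses = 10) {
  let out = text;
  for (let pass = 0; pass < maxPasses; pass++) {
    const before = out;
    for (const key of Object.keys(table)) {
      out = out.split(key).join(table[key]);
    }
    if (out === before) break; // fixed point reached
  }
  return out;
}

// One entry referencing another:
const nestedTable = {
  "--background": "a gothic city at --time-of-day",
  "--time-of-day": "midnight",
};
// expandDeep("Set in --background.", nestedTable)
//   → "Set in a gothic city at midnight."
```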

Anyway, I hacked it together in a very absent-minded kind of way without a clear plan so now it's completely unmaintainable lmaooo.

It is OK as a prototype; the good thing it has is that once you define an object and save it to the "database" (in my case, just a bunch of text files saved on my disk), you can reference it anywhere else. I've been using it to write character sheets a lot, and that kinda bridged the gap between T2I and text gen: I just write a new object referencing other objects in the database (facial features, hair, clothing, personality traits, et cetera) and boom, that's a fresh character I can insert into a scene.

Think of it like playing with dolls, maybe? You comb their hair, have them try out new clothes, and then give them a batmobile :DD and once you define what the batmobile looks like, you can place it anywhere else. Something like that.

I think that it's a very good fit for Perchance, and since JavaScript is more or less just a well-adjusted Perl, the translation work isn't too challenging. The only real concern is planning the project a little bit better so I don't turn it into spaghetti twice.

[-] Almaumbria@lemmy.world 2 points 3 weeks ago

Hmm, I don't know, lots of weird stuff. I don't trust the out-of-nowhere crazy-walls-of-text person, so I've removed their presence from my mental register.

But I worked out how to do photography, which is what most people seem to be complaining about, so here's a short guide, hope it helps:

  • Use the default T2I.
  • Delete the negative prompt.
  • Set "style" to "NO STYLE".

Then just play around with this template:

a low resolution photographic still frame from a cult 1980s b-movie, cinematic close-up portrait shot featuring a lonely wanderer, Jim.
Background: Foggy streets of an antique gothic city. Midnight. Lit in ominous hues of purplish blue.
Aesthetic: Dark fantasy.
Photography Style: Modern avant garde cinema. Second millennium expressionist photography, professionally shot and composed.
Natural Lighting: Exquisite strong blue lighting with dramatic deep red accents.
Camera: 70mm IMAX MSM 9802.

Jim walking through the streets.

I chose some pseudo-LaserDisc mambo as a baseline, but you can change that around, obviously. And then you can keep going. Like say:

Jim walking through the streets.
Face: describe Jim's face.
Hair: describe Jim's hair.
Eyes: same thing for the eyes...
Body: idem
Clothing: and so on and so on.

That template with a guidance scale of 11 has been working wonders for me, and it's a simple enough pattern that you can easily modify it. I'm a weird case: I have characters and style elements saved as JSON files and generate these things from an overcomplicated script (6000+ SLOC of spaghetti; I should do a clean rewrite in JavaScript and make it into a generator at this point...)

Anyway, I'm actually getting better results now than I was getting before the update, across the board. So, I'm sorry to folks having a bad time, but all the despair going around? Calling results useless and horrible and shit and all that? Man, that sounds like a skill issue. This entire situation only took me a wee bit of tinkering to solve, it was next to trivial. I mean, c'mon, people! You can do better! Or you can ask for help, maybe, now that would at least make this a more pleasant environment to be in.

OK OK I'll stop now. Cheers! :)

[-] Almaumbria@lemmy.world 2 points 3 weeks ago

[t=Thing.consumableList] already makes it so [t] takes from the Thing list, which only has one item. So any time you place t between brackets, it tries to draw again, but that one item has already been consumed, so it fails.

So I tried this:

output
  [Thing()]

Thing()=>
  const a=Adjective.consumableList;
  const n=Noun.consumableList;
  return [a,n,a,n].join(' ');

It takes from both lists, twice, and gives you the result, without elements from either list being repeated. I think there ought to be a better way to do it, but it gets the job done :)
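Outside of Perchance, the same no-repeat behavior can be modeled in plain JavaScript, in case that helps to see what consumableList is doing (hypothetical names):

```javascript
// Draw items without repetition by removing each pick from a
// private working copy of the list.
function makeConsumable(list) {
  const pool = [...list];
  return () => {
    if (pool.length === 0) return undefined; // list exhausted
    const i = Math.floor(Math.random() * pool.length);
    return pool.splice(i, 1)[0];
  };
}

const adjective = makeConsumable(["red", "tall"]);
const noun = makeConsumable(["door", "tower"]);
const phrase = [adjective(), noun(), adjective(), noun()].join(" ");
// e.g. "red door tall tower" -- no element from either list repeats
```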

[-] Almaumbria@lemmy.world 2 points 3 weeks ago

There's currently what I presume is a temporary issue where elements beyond roughly the 250-300th word will be ignored; I'm not 100% certain of the exact cutoff point, it seems to vary.

So: I just stripped my prompts a little, mostly removing the redundant and overly verbose descriptions that used to be necessary -- that fixed it right away.

I'm getting downright amazing results with these smaller inputs now, for both realistic and stylized images. The trick really is to put the most important details first and the least important last. That way you guarantee that the "essence" of the image you want doesn't get lost if you inadvertently cross the 250-300 word boundary.

Also, some people are saying that removing negative prompts seemed to help. I can't confirm that because I wasn't using them.

Now, and I'ma go a bit offtopic here, I don't know if this situation is only temporary or what, but it took such a minuscule effort on my part to adapt to the change that, with all due respect, I'm left once again wondering what in Dagon's unholy piehole is wrong with folks rushing to the torch and pitchfork.

Well, business as usual: another update, another riot ;)

[-] Almaumbria@lemmy.world 6 points 3 weeks ago* (last edited 3 weeks ago)

It is remarkably attentive, until the cutoff point at which it will simply omit elements. I could be wrong, but my gut tells me the latter arose as an unintended side effect of implementing the former.

[-] Almaumbria@lemmy.world 3 points 1 month ago

I’m really not a coder (...)

I beg to differ!

recently widowed ^[if (a > 75) {2} else if (76 > a && a > 60) {0.5} else if (61 > a && a > 30) {0.2} else {0}]

That right there is code. So where's the harm in a little more? :)

Here, it should look something like this in Perchance list syntax:

// a list with each HTML checkbox element
settings_el_list() =>
  return [
    fantasy_check,
    modern_check,
    adult_check,
    child_check,
    baby_check,
    appearance_check,
    morevoices_check,
    classes_check,
    expanded_check,
  ];

// ^ join "true" or "false" for each checkbox into a comma-separated string
//   and write that string to localStorage
save_settings() =>
  localStorage.settings=settings_el_list().map(e=>String(e.checked===true)).join(',');
  return;

// ^ read that string back in ;>
load_settings() =>
  // no settings saved, so skip loading
  if(! localStorage.settings)
    return;

  // get "true" or "false" by splitting the saved string at each comma
  const ar=localStorage.settings.split(",");
  // ^use that array to set the corresponding checkbox
  settings_el_list().forEach((e,i)=>e.checked=ar[i]==="true");
  return;

Put that in the lists panel, then in your HTML, add something like this to the <head>:

<script>load_settings();</script>

Putting onclick="save_settings()" to the close button for the settings popup...

<a class="close" href="#" onclick="save_settings()">&times;</a>

Would make it so the settings are saved each time the popup is closed. That's the simplest way to go about it, I think.
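For reference, here's the same save/load round trip in plain JavaScript, with dummy objects standing in for the checkboxes and for localStorage, so you can test the logic outside Perchance (names are illustrative):

```javascript
// Stand-ins: plain objects instead of checkbox elements + localStorage.
const storage = {};
const boxes = [{ checked: true }, { checked: false }, { checked: true }];

// Join "true"/"false" for each box into one comma-separated string.
function saveSettings() {
  storage.settings = boxes.map(e => String(e.checked === true)).join(",");
}

// Split the string back and restore each box's state.
function loadSettings() {
  if (!storage.settings) return; // nothing saved yet
  const ar = storage.settings.split(",");
  boxes.forEach((e, i) => (e.checked = ar[i] === "true"));
}
```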

Bonus track: a more "sophisticated" alternative would be triggering the save function on each click of a checkbox, so as to guarantee that the settings are saved after every modification, but it strikes me as unnecessary in this case. Anyway, you can do that by adding a call to update() at the end of save_settings() (that is, right before the return), and then changing the onclick of each checkbox to call save_settings() instead. Again, it seems a bit much, but I wanted to mention it anyway :B

Oh, also, note that the empty returns at the end of the functions are actually unnecessary; I just add them to make the end of execution explicit. You can remove those if you want.

Cheers!

[-] Almaumbria@lemmy.world 1 points 1 month ago

Hi! :)

If you only need to store short strings (textual values), then you can do localStorage['name']=variable (or localStorage.name=variable, same thing). It's possible to store simple objects this way by converting them into text, like so:

// storing value
localStorage.name=JSON.stringify(variable);
// retrieving it
variable=JSON.parse(localStorage.name);

But localStorage has a size limit (about 10MB on my end). If you have large structured data you need to store, then you'll need a local database; the standard way to do that is with IndexedDB, which isn't too bad, though it can seem a bit daunting at first. Anyway, you can import the kv plugin to have an easier time with it.
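If you want to be a bit defensive about it, you can wrap the round trip so a missing or corrupt entry falls back gracefully. A minimal sketch (`store` is a plain-object stand-in for localStorage here, same string-only interface; names are hypothetical):

```javascript
// Stand-in for localStorage: values are always strings.
const store = {};

// Serialize any JSON-friendly value to a string.
function saveObject(name, value) {
  store[name] = JSON.stringify(value);
}

// Parse it back; return fallback if it's missing or corrupt.
function loadObject(name, fallback) {
  try {
    return name in store ? JSON.parse(store[name]) : fallback;
  } catch (err) {
    return fallback; // corrupt entry: ignore it
  }
}
```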

[-] Almaumbria@lemmy.world 1 points 4 months ago

Forgive me, but I believe I have explained the situation to you in a rather thorough manner, and fail to see a way to make it any more clear.

I am not arguing that the specific generator you're using is working correctly at the present moment; I am letting you know that this is temporary. Do not take my word for it: click the little 'edit' button to bring up the source code and tweak the prompts yourself. The bulk of the work is fairly straightforward: replacing rules designed to deal with the old model's quirks with rules that work for the new model.

You will have to experiment a fair bit with writing the entire prompt from scratch, and for doing this, the AI Text Generator is a tool I cannot recommend enough. There are multiple ways to structure a complex prompt, but from my own testing, I've found that a very good way is to break it into sections, providing a role for the model, followed by context data, then optionally an input, and then a task followed by a list of constraints.

As an example, here's a prompt I've been using for generating lorebook entries from narration passages:

# Role:

You are a cultured English linguist, novelist and dramaturge working on a theatrical play. Maintain internal consistency within the story, and prioritize pacing and plot momentum over minor details. Currently, you are writing brief lorebook entries for the play's world and characters. Such an entry is a timeless observation, peculiarity, key fact and/or theme, or an otherwise noteworthy piece of information about the world or its characters.

***

# Lorebook:

<paste existing lore here or leave blank>

***

# INPUT:

<paste some passages here>

***

# Task:

Condense INPUT into compact single-paragraph lorebook entries, extracting solely novel information. Each entry must be self-contained: Provide enough surrounding context such that it would make sense if read on its own, leaving little room for ambiguity. Entries must also be timeless: they must still be true if read later on, so phrase them as referencing a past event. Each entry must be no more than 3 sentences; abridge details as needed. Utilize names rather than pronouns when referencing characters or locations.

Format each entry like this: `[[<Title> (<search keywords/tags>)]]: <content>.`

Output as many entries as needed.

***

# Constraints:

- Do not use the em dash ("—") symbol. Replace the em dash symbol with either of: comma (","), colon (":"), semicolon (";"), double hyphen ("--"), ellipsis ("..."), period ("."), or wrap the text in parentheses.
- Avoid rehashing phrases and verbal constructs. If a line or sentiment echoes a previous one, either in content or structure, then rephrase or omit it. Minimize repetition to keep the text fluid and interesting.
- Avoid hyperfixating on trivialities; some information is merely there for flavor or as backdrop, and doesn't need over-explaining or over-describing. If a detail doesn't advance character arcs or stakes, either ignore it or summarize it in under 10 words.

The no-em-dash rule doesn't work 100% of the time, but other than that it's actually pretty fun: you can just write away for a few paragraphs, and it'll output you some memories/lore, which you can then paste into the Lorebook section, and repeat the process. I've been using variations of this method to generate things like character descriptions, factions, locations, or just to make it rapid fire minor lore details that "fill in the blanks" between existing entries for realism.
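Side note: if you ever want to post-process those entries programmatically, the `[[<Title> (<tags>)]]: <content>.` format splits cleanly with a small regex. A sketch (hypothetical names; the model's formatting can drift between runs, so a real version should tolerate misses):

```javascript
// Parse one lorebook entry line into { title, tags, content },
// or null if the line doesn't match the expected format.
function parseEntry(line) {
  const m = line.match(/^\[\[(.+?) \((.+?)\)\]\]:\s*(.+)$/);
  if (!m) return null;
  return {
    title: m[1],
    tags: m[2].split(/[,/]\s*/), // keywords separated by commas or slashes
    content: m[3],
  };
}

// parseEntry("[[The Old Gate (gate, city wall)]]: It fell long ago.")
//   → { title: "The Old Gate", tags: ["gate", "city wall"],
//       content: "It fell long ago." }
```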

You can take that template and rework it to your liking, even build new generators based off of that. Go ahead: the new model lets you do some extremely cool things, the difference for prompt engineering is simply night and day.

Now, this labor may be entirely outside of your skillset, and that's alright. However, if that's indeed the case, then I'd humbly request you give the maintainer(s) the time to do it for you before calling for a rollback.

That is all.

[-] Almaumbria@lemmy.world 2 points 4 months ago

Please, pay close attention:

But you have to understand that generators using old prompts will more than likely not work out of the box, you have to tinker with them to get the results you want.

That is the last sentence in the text you quoted, emphasis mine.

The argument: prompts need to be rewritten to make full use of the new model's capabilities, and that takes time as there's a lot of trial and error involved. After (not before) such a rework is done, the results become much better.

This is not speculation on my part: I've been doing exactly that, tweaking old code, and I'm merely reporting my findings. How do you know the maintainer of AI Story Generator is not in the middle of a similar rework?

In the famous words of the old model: let's not get ahead of ourselves. Patience will be more rewarding than a rollback, this I can assure you.

[-] Almaumbria@lemmy.world 1 points 4 months ago

Hi! :)

Just commenting to clarify that the 'break bad patterns' bug is unrelated to the new model: this behavior is actually caused by the prompt used by AI character chat when generating the bot reply -- it always contains the phrase "Remember the break bad patterns rule", in reference to an item in the default writing instructions. IIRC, and in case it hasn't yet been fixed, this line is added to the end of the prompt somewhere deep in the getBotReply function; I forked ACC and can confirm that editing it out removed the issue.

Anyway, there are similar bugs in other generators, and I suspect most of them are also due to the prompt having similar instructions that were originally meant to mitigate quirks of the old model, but now only cause problems.

More on-topic, I've been testing the new model a lot, writing prompts for it from scratch, and the results are amazing: it can consistently understand complex, structured instructions, so one can more reliably make little 'programs' with it, not just narrative stuff. But you have to understand that generators using old prompts will more than likely not work out of the box, you have to tinker with them to get the results you want.

I really, really wish the new model stays. It has opened up a lot of possibilities for making new generators, and a rollback would really suck for me as a developer. I'm specifically hacking away at ACC to put together a new tool for narrative, world-building and roleplay; it's working fantastic, so I get a feeling of absolute dread each time I see posts like this! Please, don't take it away from me, I only need some more time! ;)

Anyway, just wanted to share these bits. Cheers!
