[-] RandomPerchanceUser@lemmy.world 2 points 3 months ago

Perchance chat is gonna get the site shut down I swear.

Any forum with a buncha unhinged weirdos + underage users is trouble waiting to happen for the site. If people wanna chat, do that on Lemmy or whatever other moderated forums there are, IMO.

[-] RandomPerchanceUser@lemmy.world 1 points 4 months ago

Who told you this? Like...why?

[-] RandomPerchanceUser@lemmy.world 1 points 5 months ago

You ask for info, I give info. Why so distrustful?

[-] RandomPerchanceUser@lemmy.world 1 points 5 months ago* (last edited 5 months ago)

The Perchance model is FLUX Chroma.

FLUX Chroma Flash Heun

The easiest way to get photoreal output is to head off to Getty and copy a photo caption: https://www.gettyimages.com/editorial-images

Example output on perchance generator

2025 Toronto International Film Festival - Black Excellence Brunch TORONTO, ONTARIO - SEPTEMBER 08: (L-R) Karen Chapman, Regina Taylor and Joan Jenkinson attend the Black Excellence Brunch during the 2025 Toronto International Film Festival at Petros82 Restaurant on September 08, 2025 in Toronto, Ontario. (Photo by Leon Bennett/Getty Images)

From : https://www.gettyimages.com/editorial-images/entertainment/event/toronto-international-film-festival-black-excellence-brunch/776375508?editorialproducts=all
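If you'd rather script the trick than copy captions by hand, here's a minimal sketch that assembles a Getty-style editorial caption (mimicking the example above) to use as a prompt. The function and its fields are purely illustrative, not part of any Getty API:

```python
# Assemble a Getty-editorial-style caption to use as a photoreal prompt.
# The function name and fields are hypothetical; only the caption format
# mirrors the real Getty example above.
def getty_style_prompt(event, city, date, names, photographer):
    credit = f"(Photo by {photographer}/Getty Images)"
    people = ", ".join(names[:-1]) + " and " + names[-1] if len(names) > 1 else names[0]
    return (f"{event} {city.upper()} - {date}: (L-R) {people} "
            f"attend the {event} on {date} in {city}. {credit}")

prompt = getty_style_prompt(
    "Black Excellence Brunch", "Toronto, Ontario", "September 08, 2025",
    ["Karen Chapman", "Regina Taylor", "Joan Jenkinson"], "Leon Bennett")
```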

[-] RandomPerchanceUser@lemmy.world 6 points 6 months ago

I don't mind hearing what you are actually angry about if it's something personal.

[-] RandomPerchanceUser@lemmy.world 1 points 6 months ago

You are not yourself man.

Take a break and get a grip on reality.

Revert to your mental health checklist of things to do.


1
submitted 6 months ago* (last edited 6 months ago) by RandomPerchanceUser@lemmy.world to c/perchance@lemmy.world

Link to image-to-prompt: https://huggingface.co/codeShare/flux_chroma_image_captioner/blob/main/gemma_image_captioner.ipynb

Writing prompts for Chroma is hard and JoyCaption is inaccurate, so I assembled what training data I could find for the model, picked 400 image-text pairs at random, and trained a Google Gemma 3 LoRA as an image-to-prompt tool that can run on Google Colab.

It's a proof-of-concept. Feel free to train your own LoRA captioning models for use on Perchance. The workflow of converting JSON and .parquet files into a dataset can be found in this notebook in the repo: https://huggingface.co/codeShare/flux_chroma_image_captioner/blob/main/train_on_parquet.ipynb
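The notebook handles the real .parquet files; as a stdlib-only sketch of the JSON-to-dataset step, here's roughly how image-text records get reshaped into the chat-style samples that vision LoRA trainers (e.g. the unsloth Gemma 3 notebook) expect. The field names and layout here are assumptions for illustration, not the repo's actual schema:

```python
import json

# Hypothetical sketch: turn [{"image": ..., "caption": ...}] records into
# user/assistant "messages" samples for vision LoRA fine-tuning.
def to_conversation(record, instruction="Describe this image as a Chroma prompt."):
    return {"messages": [
        {"role": "user", "content": [
            {"type": "image", "image": record["image"]},
            {"type": "text", "text": instruction},
        ]},
        {"role": "assistant", "content": [
            {"type": "text", "text": record["caption"]},
        ]},
    ]}

raw = json.loads('[{"image": "img_0001.png", "caption": "aesthetic 11, a fox-like girl ..."}]')
dataset = [to_conversation(r) for r in raw]
```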

For the original unsloth notebook visit: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B)-Vision.ipynb

Other unsloth models: https://docs.unsloth.ai/get-started/unsloth-notebooks

Also, Tensor Art is holding a contest for the new Qwen model, so you might wanna check that out: https://mee6.xyz/i/vSpIL2tvi0

/Cheers!

[-] RandomPerchanceUser@lemmy.world 1 points 6 months ago* (last edited 6 months ago)

FLUX Chroma (Perchance T2i model) is literally the best model on the market haha.

I've collected what prompts I could find from the Chroma training data here (see screenshot above for output): https://huggingface.co/datasets/codeShare/chroma_prompts/blob/main/parquet_explorer.ipynb

So try mimicking that prompt format 👍

[-] RandomPerchanceUser@lemmy.world 2 points 6 months ago* (last edited 6 months ago)

Good scientific approach!

I collected training prompts from lodestones repo to get an idea of the prompt format for Chroma: https://huggingface.co/datasets/codeShare/chroma_prompts/blob/main/parquet_explorer.ipynb

It's still early stages, so you'll have to download the .parquet file to your own Google Drive and access it via the notebook from there.

[-] RandomPerchanceUser@lemmy.world 3 points 6 months ago* (last edited 6 months ago)

"what is the aesthetic 0 style type of art?"

anime screencap with a title in red text Fox-like girl holding a wrench and a knife, dressed in futuristic armor, looking fierce with yellow eyes. Her outfit is a dark green cropped jacket and a skirt-like bottom. \: title the aesthetic 0 style poster "Aesthetic ZERO"

(The current Perchance Chroma model could be an early epoch and might not be changed until epoch 50 finishes training.)

Chroma Epoch 48 (latest one)

6
submitted 6 months ago* (last edited 6 months ago) by RandomPerchanceUser@lemmy.world to c/perchance@lemmy.world

Source: https://huggingface.co/lodestones/Chroma/discussions/72

Chroma (the Perchance text-to-image model) is trained on 5 million images.

The tagging system for all these images includes the word 'aesthetic' in the training prompt, used in this manner:

'aesthetic 0', 'aesthetic 1', ..., 'aesthetic 10', 'aesthetic 11' are labels used to denote the visual style in the training data,

where 'aesthetic 11' denotes (good) AI images used as training data.

That's all we know.

//----//

This system isn't 100% accurate, but it is highly recommended you use the term 'aesthetic' at least once (preferably often) to mimic the training prompts of the Chroma model.

Check the Chroma HF page in the future for further info regarding the training data / prompts you can use while generating images on the Perchance website.

TL;DR: use the word 'aesthetic' in your prompt to improve it for Perchance text-to-image generation.
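For the scripting-inclined, the TL;DR boils down to a one-liner. A purely illustrative helper (my own naming, not anything from Perchance or the Chroma repo) that prepends an 'aesthetic N' tag when a prompt lacks one:

```python
# Sketch: ensure a Chroma prompt carries an 'aesthetic N' tag, as in the
# training data (N = 0..11, with 11 denoting curated AI images).
def with_aesthetic(prompt, level=11):
    if not 0 <= level <= 11:
        raise ValueError("Chroma aesthetic levels run from 0 to 11")
    if "aesthetic" in prompt.lower():
        return prompt  # already tagged, leave as-is
    return f"aesthetic {level}, {prompt}"

print(with_aesthetic("a fox-like girl in futuristic armor"))
# aesthetic 11, a fox-like girl in futuristic armor
```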

Cheers!

[-] RandomPerchanceUser@lemmy.world 2 points 6 months ago

Are you a human?

[-] RandomPerchanceUser@lemmy.world 3 points 7 months ago

Happy to help 👍

[-] RandomPerchanceUser@lemmy.world 3 points 7 months ago* (last edited 7 months ago)

Just run the Chroma model on Tensor Art, or some similar service:

tungsten.run, shakker.ai, seaart.ai, frosting.ai, pollinations.ai, Kling

New Tensor Art accounts (<1 month old) can't make NSFW content.

The reason is an ongoing issue with people making throwaway accounts to flood the NSFW channel with bad stuff.

7
submitted 7 months ago* (last edited 7 months ago) by RandomPerchanceUser@lemmy.world to c/perchance@lemmy.world

FLUX Chroma: https://huggingface.co/lodestones/Chroma

FLUX Chroma (Tensor Art) : https://tensor.art/models/886764918794154122

Unlike base FLUX Schnell, FLUX Chroma uses NAG (Normalized Attention Guidance): https://huggingface.co/spaces/ChenDY/NAG_FLUX.1-dev

TL;DR: NAG adds negative-prompt guidance to the FLUX model.

Paper: https://arxiv.org/abs/2505.21179

See the FLUX Chroma Hugging Face repo for additional changes from the base FLUX Schnell model.

To help with creating prompts for FLUX, use JoyCaption: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one

And Danbooru tags: https://donmai.moe/wiki_pages/help:home

The prompt can be up to 512 tokens long, which can be checked at https://sd-tokenizer.rocker.boo/
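If you just want a rough offline sanity check before pasting into the tokenizer site, a crude word-count heuristic works. This is only an approximation (assuming ~1.3 tokens per word, which varies by tokenizer); use the linked tokenizer page for exact counts:

```python
# Rough heuristic check against the 512-token prompt limit.
# The 1.3 tokens-per-word ratio is an assumption, not an exact tokenizer.
def estimate_tokens(prompt, tokens_per_word=1.3):
    return int(len(prompt.split()) * tokens_per_word)

def fits_limit(prompt, limit=512):
    return estimate_tokens(prompt) <= limit
```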
