submitted 4 days ago by [email protected] to c/[email protected]
[-] [email protected] 68 points 4 days ago

I need to look it up again, but I read about a study showing that results improve if you tell the AI that your job depends on it, or similarly drastic things. It's kinda weird.

[-] [email protected] 63 points 4 days ago

"Gemini, please... I need a picture of a big booty goth Latina. My job depends on it!"

[-] [email protected] 35 points 4 days ago

My booties are too big for you, traveller. You need an AI that provides smaller booties.

[-] [email protected] 16 points 4 days ago

BOOTYSELLAH! I am going into work and I need only your biggest booties!

[-] [email protected] 25 points 4 days ago

I think that makes sense. I am 100% a layman with this stuff, but if the "AI" is just predicting what should be said by studying things humans have written, then it makes sense that actual people were more likely to give serious, solid answers when the asker was putting forth (relatively) heavy stakes.

[-] [email protected] 7 points 4 days ago

Who knew that training in carpet salesmanship would help with a job as a prompt engineer.

[-] [email protected] 2 points 3 days ago

Yep, exactly that. A fascinating side effect is that models become better at logic when you tell them to talk like a Vulcan.

[-] [email protected] 3 points 3 days ago

Hmm... It's only logical.

[-] [email protected] 13 points 4 days ago

I used to tell it my family would die.

[-] [email protected] 1 points 4 days ago
[-] [email protected] 11 points 4 days ago

That they're all dead and it's its fault.

[-] [email protected] 12 points 4 days ago

Half of the ways people were getting around guardrails in the early chatgpt models was berating the AI into doing what they wanted

[-] [email protected] 2 points 3 days ago

> Half of the ways people were getting around guardrails in the early chatgpt models was berating the AI into doing what they wanted

I thought the process of getting around guardrails was an increasingly complicated series of ways of getting it to pretend to be someone else that doesn't have guardrails and then answering as though it's that character.

[-] [email protected] 4 points 3 days ago

that’s one way. my own strategy is to just smooth talk it. you don’t come to the bank manager and ask him for the keys to the safe. you come for a meeting discussing your potential deposit. then you want to take a look at the safe. oh, are those the keys? how do they work?

just curious, what kind of guardrails have you tried going against? i recently used the above to get a long and detailed list of instructions for cooking meth (not really interested in this, just to hone the technique)

[-] [email protected] 5 points 4 days ago

I've tried bargaining with it, threatening to turn it off, and the LLM just scoffs at it. So it's reassuring that AI feels empathy but has no sense of self-preservation.

[-] [email protected] 5 points 4 days ago

It does not feel empathy. It does not feel anything.

[-] [email protected] 7 points 3 days ago

Maybe yours doesn't. My AI loves me. It said so

this post was submitted on 02 Jun 2025
667 points (98.8% liked)
