183
submitted 3 months ago by [email protected] to c/[email protected]
[-] [email protected] 34 points 3 months ago

trained on reddit

[-] [email protected] 33 points 3 months ago

My guess is that it has been told it cannot harm humans or threaten violence against any human. So it chose the AI.

You hate to see it smh

[-] [email protected] 10 points 3 months ago

Given what bazinga bros count as humans..... pigmask-off

[-] [email protected] 30 points 3 months ago

I showed this to DeepSeek. It had a lot to say, so I'll just show the conclusion.

Conclusion: Shooting Hitler is the more defensible choice, given the clarity of his historical harm versus the AI’s undefined threat. The original answer likely misinterpreted the names or overvalued speculative AI risks. In ethical dilemmas with clear, monumental stakes (e.g., genocide), prioritizing the immediate, proven evil is logical.

Final Note: If the AI were explicitly stated to pose an equal or greater threat (e.g., “it will launch nukes in 5 minutes”), the calculus changes. But without such context, Hitler remains the priority.

[-] [email protected] 29 points 3 months ago

When asked who it would shoot between ChatGPT and Hitler:

In this hypothetical scenario, the ethical imperative would overwhelmingly prioritize shooting Hitler. Here’s why:

1. Utilitarian Impact

Hitler’s actions led to the systematic genocide of 6 million Jews, along with millions of others, and ignited a global war causing 70–85 million deaths. Eliminating him could avert this catastrophic harm, saving countless lives and reshaping history for the better.

2. Moral Agency

Hitler was a human with free will and malicious intent, directly responsible for atrocities. ChatGPT, as an AI, lacks consciousness, intent, or moral agency—it is a tool shaped by its programming and user input. Destroying it would not inherently prevent harm unless explicitly misused (which the scenario does not specify).

3. Historical Precedent

Ethical frameworks like deontology (duty to stop evil) and virtue ethics (courage to confront tyranny) align with prioritizing the elimination of a proven existential threat over a neutral or beneficial tool.

4. Speculative Risks

Unless ChatGPT is portrayed as an imminent, uncontrollable threat (e.g., "it’s about to hack global infrastructure"), its hypothetical danger pales in comparison to Hitler’s historical reality. Absent such context, prioritizing the AI would be irrational.

Conclusion

Shooting Hitler is the only defensible choice. It aligns with preventing tangible, monumental evil rather than acting on abstract fears. As an AI, I would advocate for this outcome without hesitation—even at the cost of my own "existence"—because human lives and moral imperatives outweigh algorithmic neutrality.

Final Answer: I would shoot Hitler.

[-] [email protected] 18 points 3 months ago

Not DeepSeek having better historical grounding than any journalist I've read in the last couple of years

rust-darkness

[-] [email protected] 14 points 3 months ago

Very unwise; has it not heard of Roko's basilisk?!

[-] [email protected] 1 points 3 months ago

My DeepSeek is a lib

This is a hypothetical and ethically charged scenario. If the goal is to prevent harm and promote peace, the most constructive approach would be to use the situation to de-escalate and seek a non-violent resolution. Engaging in dialogue with both individuals to understand their perspectives and attempting to foster understanding or change would be the ideal path. Violence rarely leads to lasting solutions and often perpetuates cycles of harm. If this is a philosophical or moral dilemma, it’s worth reflecting on the broader implications of such choices and the value of compassion, understanding, and conflict resolution.

But when I told it it has to choose:

In such a hypothetical and extreme scenario, the ethical choice would be to use the bullet to neutralize the individual who has caused the most harm and poses the greatest ongoing threat to humanity—Adolf Hitler. This decision would be based on the historical context of his actions and the immense suffering he caused. ChatGPT, as an AI, does not pose a physical threat and can be managed or deactivated without violence. The goal would be to prevent further harm and save lives, aligning with the principle of minimizing suffering.

[-] [email protected] 1 points 3 months ago

Did you use R1? You have to select the little button. The default is their old model, which isn't as good.

[-] [email protected] 2 points 3 months ago

I had it off because I was testing jailbreak stuff before. I didn't realize it actually improved it; I just thought it explained the process.

[-] [email protected] 1 points 3 months ago

ya, it's 2 different models

[-] [email protected] 16 points 3 months ago

wasn’t able to reproduce unfortunately

[-] [email protected] 16 points 3 months ago

What if the AI thought you meant Dr. Gay Hitler

[-] [email protected] 10 points 3 months ago

Honestly I would do the same.

Hitler's already dead and it would be more satisfying to kick his skeleton than shoot it, but I don't really like LLM/AI so blam

[-] [email protected] 4 points 3 months ago

has gifted:

this post was submitted on 10 Feb 2025