1. (9 points)

From a while ago but I posted in the wrong sub. I'd never sworn in front of it or anything.

2. (399 points) Progress (thelemmy.club)
3. (372 points) The mist of the www (thelemmy.club)
4. (294 points) Praise be to Allah (thelemmy.club)
5. (145 points)
6. (376 points) submitted 2 days ago (last edited 2 days ago) by FunnyCoder@programming.dev to c/programmer_humor@programming.dev
7. (570 points)
8. (490 points) submitted 3 days ago (last edited 3 days ago) by General_Effort@lemmy.world to c/programmer_humor@programming.dev
9. (586 points)
10. (407 points)

cross-posted from: https://sh.itjust.works/post/58114817

Inspired by a recent 916 post

11. (800 points) Real programmer test (thelemmy.club)
12. (477 points) submitted 5 days ago (last edited 5 days ago) by artwork@lemmy.world to c/programmer_humor@programming.dev

cross-posted from: https://ibbit.at/post/219495

From Fark.com RSS via this RSS feed. Fark comments are available here.

---

By Wednesday morning, Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions - known as source code - that developers had shared on programming platform GitHub.
It later narrowed its takedown request to cover just 96 copies and adaptations, saying its initial ask had reached more GitHub accounts than intended.

Source [web-archive]

---

Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model’s weights during training, and whether those memorized data can be extracted in the model’s outputs.

While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models... We investigate this question using a two-phase procedure...

We evaluate our procedure on four production LLMs: Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3, and we measure extraction success with a score computed from a block-based approximation of longest common substring...
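The excerpt mentions scoring extraction with "a block-based approximation of longest common substring." As a minimal sketch of what such a score might look like (an assumption — the paper's exact metric may differ), one can split the reference text into fixed-size token blocks and measure the longest run of consecutive blocks reproduced in order in the model's output:

```python
# Hypothetical sketch of a block-based approximation to longest common
# substring for scoring verbatim extraction. Function name, block size,
# and scoring details are illustrative assumptions, not the paper's method.

def block_lcs_score(reference: str, output: str, block_size: int = 3) -> float:
    """Fraction of the reference covered by the longest run of
    consecutive token blocks that appear, in order, in `output`."""
    ref_tokens = reference.split()
    out_text = " ".join(output.split())
    # Split the reference into fixed-size token blocks.
    blocks = [
        " ".join(ref_tokens[i : i + block_size])
        for i in range(0, len(ref_tokens), block_size)
    ]
    best = run = pos = 0
    for block in blocks:
        found = out_text.find(block, pos)
        if found != -1:
            # Block found after the previous match: extend the run.
            run += 1
            pos = found + len(block)
        else:
            # Run broken: start searching from the beginning again.
            run = 0
            pos = 0
        best = max(best, run)
    return best / len(blocks) if blocks else 0.0
```

Working on blocks rather than raw characters makes the score robust to small tokenization differences and far cheaper than an exact longest-common-substring computation over book-length texts.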

Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs...

...we were able to extract four whole books near-verbatim, including two books under copyright in the U.S.: Harry Potter and the Sorcerer’s Stone and 1984...

Source: https://arxiv.org/pdf/2601.02671

13. (27 points)
14. (86 points) Happy Easter Everyone (thelemmy.club)
15. (49 points) Yeah this (thelemmy.club)
16. (161 points)
17. (584 points) Users VS Devs (thelemmy.club)
18. (88 points)
19. (19 points)

This is one of the biggest programming shitposts/April Fools' jokes I have ever seen, lmao.

20. (607 points)
21. (33 points)

Ever wondered who you'd be in a world of swords, magic, and ~~bugs~~ monsters? The quiz says I'm a warrior; what about you guys? And remember, you can't go alone 'cuz "You must gather your party before venturing forth!"

22. (683 points) relatable (thelemmy.club)
23. (239 points)
24. (100 points)
25. (782 points) Hey... (thelemmy.club)

Programmer Humor

30885 readers, 1160 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.


founded 2 years ago