this post was submitted on 01 Feb 2025
200 points (100.0% liked)

TechTakes

1811 readers
145 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Sam "wrong side of FOSS history" Altman must be pissing himself.

Direct Nitter Link:

https://nitter.lucabased.xyz/jiayi_pirate/status/1882839370505621655

[–] reallykindasorta@slrpnk.net 18 points 2 months ago* (last edited 2 months ago) (56 children)

Non-techie requesting a layman's explanation if anyone has time!

After reading a couple of “what makes Nvidia's H100 chips so special” articles, I'm gathering that they were supposed to have significantly more computational capability than their competitors (which I'm taking to mean more computations per second). So the question with DeepSeek and similar is something like ‘how are they able to get the same results with fewer computations?’, and the answer is speculated to be more efficient code/instructions for the AI model, so it can reach the same conclusions with fewer computations overall, potentially reducing the need for special jacked-up chips to run it?

[–] justOnePersistentKbinPlease@fedia.io 9 points 2 months ago (29 children)

From a technical POV, from having read into it a little:

DeepSeek devs worked in a very low-level language called Assembly. This language is unlike relatively newer languages like C in that it provides no guardrails at all and is basically CPU instructions in extreme shorthand. An "if" statement would be something like BEQ 1000, where it goes to a specific memory location (in this case address 1000) if two CPU registers are equal.

The advantage of using it is that it is considerably faster than C. However, it also means that the code is mostly locked to that specific hardware. If you add more memory or change CPUs, you have to refactor. This is one of the reasons the language was largely replaced with C and other languages.

Edit: to expound on this: "modern" languages are even slower, but more flexible in terms of hardware. This would be languages like Python, Java, and C#.

[–] froztbyte@awful.systems 20 points 2 months ago (2 children)

for anyone reading this comment hoping for an actual eli5, the "technical POV" here is nonsense bullshit. you don't program GPUs with assembly.

the rest of the comment is the poster filling in bad comparisons with worse details

[–] pupbiru@aussie.zone 8 points 2 months ago

literally looks like LLM-generated generic slop: confidently incorrect without even a shred of thought

[–] justOnePersistentKbinPlease@fedia.io -5 points 2 months ago (3 children)

For anyone reading this comment, that person doesn't know anything about assembly or C.

[–] froztbyte@awful.systems 14 points 2 months ago* (last edited 2 months ago) (1 children)

yep, clueless. can't tell a register apart from a soprano. and allocs? the memory's right there in the machine, it has it already! why does it need an alloc!

fuckin' dipshit

next time you want to do a stupid driveby, pick somewhere else

[–] o7___o7@awful.systems 9 points 2 months ago

Sufficiently advanced skiddies are indistinguishable from malloc

[–] dgerard@awful.systems 13 points 2 months ago

this user is just too smart for the average awful systems poster to deal with, and has been sent on his way to a more intellectual lemmy

[–] self@awful.systems 12 points 2 months ago (1 children)

you know I was having a slow day yesterday cause I only just caught on: you think we program GPUs in plain fucking C? absolute dipshit no notes

[–] froztbyte@awful.systems 10 points 2 months ago

the wildest bit is that one could literally just … go do the thing. like you could grab the sdk and run through the tutorial and actually have babby’s first gpu program in not too long at all[0], with all the lovely little bits of knowledge that entails

but nah, easier to just make some nonsense up out of thirdhand conversations misheard out of a gamer discord talking about a news post of a journalist misunderstanding a PR statement, and then confidently spout that synthesis

[0] - I’m eliding “make the cuda toolchain run” for argument of simplicity. could just rent a box that has it, for instance
