[-] [email protected] 6 points 4 months ago* (last edited 4 months ago)

> That's the opposite of what I'm saying. Deepseek is the one under scrutiny, yet they are the only one to publish source code and training procedures of their model.

this has absolutely fuck all to do with anything i've said in the slightest, but i guess you gotta toss in the talking points somewhere

e: it's also trivially disprovable, but i don't care if it's actually true, i only care about headlines negative about AI

[-] [email protected] 6 points 4 months ago

"the media sucks at factchecking DeepSeek's claims" is... an interesting attempt at refuting the idea that DeepSeek's claims aren't entirely factual. beyond that, intentionally presenting true statements that lead to false impressions is a kind of dishonesty regardless. if you mean to argue that DeepSeek wasn't being underhanded at all and just very innocently presented their figures without proper context (which just so happened to spur a media frenzy in their favor)... then i have a bridge to sell you.

besides that, OpenAI is very demonstrably pissing away at least that much money every time they add one to the number at the end of their slop generator

[-] [email protected] 7 points 8 months ago

oh gods they're multiplying

[-] [email protected] 7 points 11 months ago

you do an excellent job of writing. several times while reading this i had to mentally delete paragraphs of explanation i'd drafted for some of the rationalist thoughts and ideology, because you'd already described them perfectly in just a sentence or two.

shared this with some of my in-tech-but-skeptical friends. you deserve a bigger audience

[-] [email protected] 8 points 1 year ago* (last edited 1 year ago)

wow, that side-by-side is so obviously bad i'm surprised it even got posted. usually AI bros try to hide the worst of the tech, or at the very least, say shit like "this is only the beginning!!"

also, was not expecting to click that link and see FUNKe. good nostalgia

[-] [email protected] 7 points 1 year ago* (last edited 1 year ago)

~~you might know what "monotonic" means if you had googled it, which would also give you the answer to your question~~

edit: this was far too harsh of a reply in retrospect, apologies. the question is answered below, but i'll echo it: a "monotonic UUID" is one that numerically increases as new UUIDs are generated. this has an advantage when writing new UUIDs to indexed database columns, since most database index structures are more efficient when inserting at the end than at a random point (which is where non-monotonic UUIDs land).
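for the curious, here's a rough sketch of the idea in python. the `monotonic_uuid` name is made up for illustration; the layout loosely mirrors UUIDv7's approach, with a millisecond timestamp in the high 48 bits and random bits below, so later values compare numerically higher:

```python
import os
import time
import uuid

def monotonic_uuid() -> uuid.UUID:
    """Sketch of a monotonic UUID (hypothetical helper, UUIDv7-ish layout).

    High 48 bits: milliseconds since the Unix epoch, so values generated
    later sort after earlier ones. Low 80 bits: random, for uniqueness.
    """
    ts = int(time.time() * 1000) & ((1 << 48) - 1)  # 48-bit ms timestamp
    rand = int.from_bytes(os.urandom(10), "big")    # 80 random bits
    return uuid.UUID(int=(ts << 80) | rand)

a = monotonic_uuid()
time.sleep(0.002)  # ensure the timestamp ticks over
b = monotonic_uuid()
assert a.int < b.int  # later UUID compares higher, so index inserts append
```

(within a single millisecond the ordering falls back to the random bits, so this sketch is only monotonic across ticks; real implementations add a counter for that case.)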

[-] [email protected] 7 points 1 year ago

but the NULLGE

[-] [email protected] 8 points 1 year ago

i suppose there is something more "magical" about having the computer respond in realtime, and maybe it's that "magical" feeling that's getting so many people to just kinda shut off their brains when creators/fans start wildly speculating on what it can/will be able to do.

how that manages to override people's perceptions of their own experiences happening right in front of them still boggles my mind. they'll watch a person point out that it gets basic facts wrong or speaks incoherently, and assume the fault lies with the person for not having the true vision or what have you.

(and if i were to channel my inner 2010's reddit atheist for just a moment it feels distinctly like the ways people talk about Christian Rapture, where flaws and issues you're pointing out in the system get spun as personal flaws. you aren't observing basic facts about the system making errors, you are actively in ego-preserving denial about the "inevitability of ai")

[-] [email protected] 7 points 1 year ago

i prefer P=N!S, actually

[-] [email protected] 8 points 1 year ago* (last edited 1 year ago)

i couldn't delete the one question i had on stackoverflow, so i used a text generator to overwrite the body and title of the question. fight garbage with garbage

[-] [email protected] 6 points 1 year ago

at least if it was "vectors in a high-dimensional space" it would be like. at least a little bit accurate to the internals of LLMs. (still an entirely irrelevant implementation detail that adds noise to the conversation, but accurate.)

[-] [email protected] 6 points 1 year ago

> The best way I can relate current LLM’s is the early days of the microprocessor.

i promise we did it! we made iphone 2! this is just like iphone 2! of course it doesn't work yet but it will work eventually! we made iphone 2 please believe us!!

he's already banned but i love how every time this argument comes up there's absolutely no substance to the metaphor. "ai is like the internet/microprocessors/the industrial revolution/the Renaissance", but there's no connective tissue or actual relation between the things being compared, just some hand-waving around the general idea of progress and pointing to other popular/revolutionary things and going "see! it's just like that!"

> The majority of hallucinations are due to user input errors that are not accounted for in the model tokenizer and loader code. This is just standard code errors. Processing every possible spelling, punctuation, and grammar error is a difficult task.

"i'm sorry, but you used the wrong form of 'their' in your prompt, that's why it inexplicably included half a review of Click in your meeting summary."

> AI is like a mirror of yourself upon the dataset. It can only reflect what is present in the dataset and only in a simulacrum of yourself through the prompts you generate. It will show you what you want to see. It is unrivaled access to information if you have the character to find yourself and what you are looking for in that reflection.

s-tier. no notes. does lemmy have user flairs? because if so i'm calling dibs


ebu

joined 1 year ago