khalid_salad

joined 3 months ago
[–] [email protected] 36 points 4 days ago* (last edited 4 days ago) (8 children)

Well, two responses I have seen to the claim that LLMs are not reasoning are:

  1. we are all just stochastic parrots lmao
  2. maybe intelligence is an emergent ability that will show up eventually (disregard the inability to falsify this and the categorical nonsense that is our definition of "emergent").

So I think this research is useful as a response to these, although I think "fuck off, promptfondler" is pretty good too.

[–] [email protected] 12 points 1 week ago

I assume it's the same dorks who say "ChatGPT is useful to summarize emails."

[–] [email protected] 23 points 1 week ago

Everybody knows that all languages derive from ULTRAFRENCH.

[–] [email protected] 36 points 1 week ago (28 children)

So Geoffrey Hinton is a total dork.

Hopefully, [this Nobel Prize] will make me more credible when I say these things really do understand what they're saying. [There] is a whole school of linguistics that comes from Chomsky that thinks it's nonsense to say these things understand language. That school is wrong. Neural nets are much better at processing language than anything produced by the Chomsky school of linguistics.

[–] [email protected] 13 points 1 week ago (1 children)

https://www.bbc.com/news/articles/c62r02z75jyo

It’s going to be like the Industrial Revolution - but instead of our physical capabilities, it’s going to exceed our intellectual capabilities ... but I worry that the overall consequences of this might be systems that are more intelligent than us that might eventually take control

😩

[–] [email protected] 12 points 1 week ago

"I only had this problem because I was very reckless," he continued, "partially because I think it's interesting to explore the potential downsides of this type of automation. If I had given better instructions to my agent, e.g. telling it 'when you've finished the task you were assigned, stop taking actions,' I wouldn't have had this problem."

just instruct it "be sentient" and you're good, why don't these tech CEOs understand the full potential of this limitless technology?

[–] [email protected] 8 points 1 week ago (2 children)

This particular bit of news has me so down. :(

[–] [email protected] 6 points 2 weeks ago* (last edited 2 weeks ago) (10 children)

https://xcancel.com/karpathy/status/1841848120897912967#m

jfc

Edit: replaced screenshot of tweet + description with link to tweet

[–] [email protected] 5 points 2 weeks ago (1 children)

There’s vanishingly little that LLMs are actually being used for that can’t be done far cheaper (computationally and cost-wise) with existing tools.

Reminds me of the most recent Adam Conover podcast. He had as guests two computer scientists who were purportedly critical of AI, and one of them still shat out something to the effect of:

It does have a use case where something takes longer to produce than it does to verify. For example, a website ...

[–] [email protected] 17 points 4 weeks ago (1 children)

Every few years there is some new CS fad that people try to trick me into doing research in:

"algorithms" (my actual area), then quantum, then blockchain, then AI.

Wish this bubble would just fucking pop already.

[–] [email protected] 16 points 1 month ago

go gatekeeper somewhere else

Me, showing up to a chemistry discussion group I wasn't invited to:

Alchemy has valid use cases. If you want to be pedantic about what alchemy means, go gatekeep somewhere else.
