this post was submitted on 23 Nov 2023
226 points (99.6% liked)

the_dunk_tank


Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (2 children)

Notice the distinction in my comments between an LLM and other algorithms; that's a key point you're ignoring. The idea other commenters have is that, for some reason, no input could produce the output of human thought other than the magical fairy dust that exists within our souls. I don't believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. That doesn't have to be LLMs.

[–] [email protected] 12 points 1 year ago (1 children)

the magical fairy dust that exists within our souls.

Who said that?

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

You're missing the forest for the trees. Replace "magical fairy dust" with [insert whatever you think makes organic, carbon-based processing capable of sentience but inorganic silicon-based processing incapable of sentience].

[–] [email protected] 10 points 1 year ago

You're missing the forest for the trees.

smuglord

whatever you think makes organic, carbon-based processing capable of sentience but inorganic silicon-based processing incapable of sentience

No one I see here took that position. The position being taken is that LLMs are not that, and their trajectory isn't really going there, no matter how much hype you've bought into out of a Reddit New Atheist contrarian knee-jerk desire to stick it to those you assume believe in "the magical fairy dust that exists within our souls."

[–] [email protected] 6 points 1 year ago (1 children)

I haven't seen anyone here (or basically anyone at all, for that matter) suggest that there's literally no way to create mentality like ours other than being exactly like us. The argument is just that LLMs are not even on the right track to do something like that. The technology is impressive in a lot of ways, but it is in no way comparable to even a rudimentary mind in the sense that people have minds, and there's no amount of tweaking or refining the basic approach that's going to move it in that direction. "Genuine" (in the sense of human-like) AI made from non-human stuff is certainly possible in principle, but LLMs are not even on that trajectory.

Even setting that aside, I think framing this as an I/O problem elides some really tricky and deep conceptual content, and suggests some fundamental misunderstanding about how complex this problem is. What on Earth does "the output of human thought" mean in this sense? Clearly you don't really mean human thought, because you obviously think whatever "output" you're looking for can be instantiated in non-human systems. It must mean human-like thought, but human-like in what sense? Which features are important to preserve, and which are incidental or parochial to the way humans do human-like thought? How you answer that question greatly influences how you evaluate putative cases of "genuine" AI, and it's possible to build in a great deal of hidden bias if we don't think carefully and deliberately about this. From what I've seen, virtually none of the AI hypers are thinking carefully or deliberately about this.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

The top-level comment this chain is under specifically dismisses GPT by calling it "just an algorithm," not "just an LLM," which implicitly claims that no algorithm could match or exceed human capabilities, because they're "just algorithms."

You can even see that person explicitly defending this position in other comments, so the mentality you say you haven't seen is literally the basis of this entire thread.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

The smol bean LLM is unfairly misunderstood sometimes while presently tightening the grip of the surveillance state and denying medical coverage to people while putting artists out of work. I'm sure the billionaires bankrolling it will wipe away those statistically-produced tears with wads of cash, so all will be well.