[–] [email protected] 2 points 10 months ago* (last edited 10 months ago) (1 children)

Well, I don't see the problem. AI can't explain itself, but it's nothing more than matrix multiplication with a nonlinearity. Maybe you use a Fourier transform and a kernel instead of scalar weights for a convolutional neural network, maybe it has state instead of being purely feed-forward, but at the core, all you're doing is multiplying matrices and applying a nonlinearity. I don't know what you mean when you say we don't know how it generates images and text. It's literally just doing the thing it was programmed to do?
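To put that in concrete terms, here's a toy sketch of one layer (the sizes and the choice of ReLU are arbitrary, not any real model's code):

```python
import numpy as np

# Toy illustration of "matrix multiplication with a nonlinearity".
# Sizes and the ReLU choice are arbitrary, not taken from any real model.

rng = np.random.default_rng(0)

W = rng.standard_normal((4, 3))  # weight matrix: 3 inputs -> 4 outputs
b = rng.standard_normal(4)       # bias vector
x = rng.standard_normal(3)       # input vector

def relu(z):
    # elementwise max(0, z), one common nonlinearity
    return np.maximum(0.0, z)

h = relu(W @ x + b)  # the entire layer: multiply, add, apply nonlinearity
print(h)
```

Stack layers like that and you have the whole network; convolutions, recurrence, and the rest are variations on what gets multiplied by what.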

What research? I'd like to see some evidence that these models "think," given that every LLM I know of works by generating a single word at a time. When you ask a GPT how to bake bread and the first word it outputs is "Surely!", it has no clue what explanation it'll go on to give you. In fact, whether or not it picks the exact word "Surely!" to start the response has a cascading effect on the rest of the output. And, as I said earlier, LLMs don't see anything more than the statistical correlations between words. No LLM knows what gravity is, but when you ask it why things fall down, it has enough physics textbooks in its training data to parrot an answer from them.
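The generation loop is roughly this (a toy stand-in with a hand-written bigram table, not an actual GPT; only the loop structure matters):

```python
import random

# Toy illustration of one-word-at-a-time generation. The "model" here is
# just a made-up bigram table standing in for whatever an LLM computes;
# each word is sampled given only the words already emitted.

bigrams = {
    "<start>": ["Surely!", "Sure,"],
    "Surely!": ["First,", "You"],
    "Sure,":   ["first", "you"],
    "First,":  ["mix"],
    "first":   ["mix"],
    "You":     ["mix"],
    "you":     ["mix"],
    "mix":     ["flour", "water"],
    "flour":   ["<end>"],
    "water":   ["<end>"],
}

def generate(max_words=10):
    words = ["<start>"]
    while len(words) < max_words:
        nxt = random.choice(bigrams.get(words[-1], ["<end>"]))
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words[1:])

print(generate())
```

Because each word is conditioned only on what came before it, picking "Surely!" versus "Sure," at the first step sends the rest of the reply down a different branch.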

One of the things that really broke down the idea, for me, that GPTs have any model of thought was playing this game. If the AI had any actual model of meaning, it would understand security, and it would understand not to just tell the player the password. Instead, it will literally blurt it out if you so much as ask it for words that rhyme. You don't even need to mention "password": the way GPT works, if a certain word carries a lot of weight in its prompt (and the prompt naturally emphasizes the password), it's almost guaranteed to bring that word up again. I know it's not exactly hard proof, but it is fun.

As for your last question, you're out of luck, because I'm actually just a Catholic lol. There's not a lot more to say than that I believe there is a metaphysical side to human experience connecting us to a soul. But that's a completely unscientific belief, to be honest, and it's not a point I can argue because it's not based on evidence.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (1 children)

It’s not true to say that LLMs just do as they are programmed. That’s not how machine learning and deep learning work. The programming goes into making the model able to learn and parse through data. The results are filtered and weighted, but they are not the result of the programming; they are the result of the training.

Y’know, like how our brains were programmed by natural selection and the laws of biology to learn and use certain tools (eyes, touch, thoughts, etc.), and with “training data” (learning, or lived experience) they output certain results, which are then filtered and weighted (by parents, school, society)…
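To make the programming-vs-training distinction concrete, here's a toy sketch (made-up numbers, nothing to do with a real LLM). The code only says "nudge the weight to reduce the error"; the value the weight ends up with isn't written anywhere in the program, it comes from the data:

```python
import numpy as np

# The "programmed" part is just the update rule below.
# The learned part is the final value of w, which comes only from the data.

rng = np.random.default_rng(1)
xs = rng.standard_normal(100)
ys = 2.0 * xs + 0.1 * rng.standard_normal(100)  # data that secretly follows y ≈ 2x

w = 0.0    # the weight starts out knowing nothing
lr = 0.1   # learning rate, chosen arbitrarily

for _ in range(200):
    pred = w * xs
    grad = 2.0 * np.mean((pred - ys) * xs)  # gradient of the mean squared error
    w -= lr * grad                          # the programmed update rule

print(w)  # ends up near 2.0 -- learned from the data, not written in the code
```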

I think LLMs and diffusion models will be part of an AI mind, generating thoughts like our minds do.

Regarding the last part, do you think the brain or the mind creates the soul, or is a part of it?

I think discussing consciousness is very much scientific. To think there’s no point in doing so is to reduce everything to materiality, which is itself unscientific. Unfortunately, many people, even scientists, are more committed to scientism than they are actually scientific.

[–] [email protected] 1 points 10 months ago

I don't know how much you know about computer science and coding, but if you can program in Python and have some familiarity with NumPy, you can build your own feed-forward neural network from scratch in an afternoon. You can make an AI that plays tic-tac-toe and train it against itself adversarially. It's a fun project. What I mean by this is: yes, they do. LLMs and generative models do exactly what they are programmed to do. They are no different from a spreadsheet program. The thing that makes them special is the weights and biases that were baked into them by going through countless terabytes of training data, as you correctly state.

But it's not like AI has some secret, arcane mathematical operation that no computer scientist understands. What we don't understand about them is why they activate the way they do; we don't really know why any given part of the network gets activated. That makes sense given the stochastic nature of deep learning: it's all just convergence on a "pretty good" result after being put through millions of random examples.
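For instance, the whole "from scratch" exercise fits in a comment if you shrink it from tic-tac-toe to XOR (the layer sizes, learning rate, and sigmoids here are arbitrary choices, not a recipe from anywhere in particular):

```python
import numpy as np

# A feed-forward network from scratch in NumPy, trained on XOR.
# Same "matrix multiply + nonlinearity" idea as above; backprop is just
# the hand-derived gradients of the squared error.

rng = np.random.default_rng(42)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# One hidden layer: 2 -> 8 -> 1, sigmoid activations.
W1 = rng.standard_normal((2, 8))
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass (gradients of 0.5 * squared error)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should come out close to [[0], [1], [1], [0]]
```

None of it is mysterious at the level of the operations; the only "learning" is those four update lines being repeated until the numbers settle somewhere useful.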

I think the mind and consciousness are separate from the soul, which precedes their thoughts. But, again, I have absolutely no evidence for that. It's just dogma.