yogthos

joined 4 years ago
MODERATOR OF
[–] [email protected] 2 points 1 hour ago

You can tell it was written by a lib.

[–] [email protected] 1 points 5 hours ago

The way to look at models like R1 is as layers on top of the LLM architecture. We've basically hit a limit of what generative models can do on their own, and now research is branching out in new directions to supplement what the GPT architecture is good at doing.

The potential here is that these kinds of systems will be able to do tasks that fundamentally could not be automated previously. Given that, I think it's odd to say that the utility is not commensurate with the effort being invested into pursuing this goal. Making this work would effectively be a new industrial revolution. The reality is that we don't actually know what's possible, but the rate of progress so far has been absolutely stunning.

[–] [email protected] 1 points 5 hours ago* (last edited 5 hours ago)

Not in any meaningful way. A statistical model cannot address the Frame problem. Statistical models themselves exacerbate the problems of connectionist approaches. I think AI researchers aren’t being honest with the causality here. We are simply fooling ourselves and willfully misinterpreting statistical correlation as causality.

The frame problem is addressed by creating a model of the environment the system interacts with. That's what provides the context for reasoning and deciding what information is relevant and what isn't. Embodiment is one obvious way to build such a model where the robot or even a virtual agent interacts with the environment and encodes the rules of the environment within its topology.

Let me repeat myself for clarity. We do not have a valid general theory of mind.

This is not necessary for making an AI that can reason about the environment, make decisions, and explain itself. Furthermore, not having a theory of mind does not even prevent us from creating minds. One example of this could be using evolutionary algorithms to evolve agents that have similar reasoning capabilities to our own. Another would be to copy the structure of animal brains to a high degree of fidelity.

Because you are a human doing it, you are not a machine that has been programmed.

You are programmed in a sense of the structure of your brain being a product of the information encoded in your DNA. The same way the neural network is a product of the algorithms used to build it. However, the learning that both your brain and the network are doing is encoded in the weights and connections of the network through reinforcement. These are not programmed in either case.

This is a really Western brained idea of how our biology works, because as complex systems we work on inscrutable ranges.

🙄

Strength. We cannot build a robot that can get stronger over time. Humans can do this, but we would never build a robot to do this. We see this as inefficient and difficult. This is a unique biological aspect of the human experience that allows us to reason about the physical world.

You're showing utter lack of imagination on your part here. Of course we could build a robot that could get stronger. There's nothing uniquely biological about this example.

Pain. We would not build a robot that experiences pain in the same way as humans. You can classify pain inputs. But why would you build a machine that can “understand” pain. Where pain interrupts its processes? This is again another unique aspect of human biology that allows us to reason about the physical world.

Maybe try thinking why organisms evolve pain in the first place and what advantage it provides.

[–] [email protected] 1 points 5 hours ago

I understand how LLMs work perfectly fine. What you don't seem to understand is that neurosymbolic AI is a combination of LLMs for parsing inputs and categorizing them with a symbolic logic engine for doing reasoning. If you bothered to actually read the paper I linked you wouldn't have wasted your time writing this comment.

[–] [email protected] 1 points 5 hours ago

People build confidence doing work in any domain. Working with artificial agents is simply going to build different kinds of skills.

[–] [email protected] 2 points 5 hours ago

We already have that problem with humans as well though.

[–] [email protected] 2 points 10 hours ago (2 children)

I don't think it's overhyped at all. It's taking two technologies that are good at solving specific types of problems and using them together in a useful way. The problem that symbolic AI systems ran into in the 70s are precisely the ones that deep neural networks address. You're right there are challenges, but there's absolutely no reason to think they're insurmountable.

I'd argue that using symbolic logic to come up with solutions is very much what reasoning is actually. Meanwhile, classification of input problem is the same one that humans have as well. Somehow you have to take data from the senses and make sense of it. If you're claiming this is garbage in garbage out process, then the same would apply to human reasoning as well.

The models can create internal representations of the real world through reinforcement learning in the exact same way that humans do. We build up our internal world model through our interaction with environment, and the same process is already being applied in robotics today.

I expect that future AI systems will be combinations of different types of algorithms all working together and solving different challenges. Combining deep learning with symbolic logic is an important step here.

[–] [email protected] 5 points 10 hours ago (4 children)

I don't see these tools replacing humans in the decision making process, rather they're going to be used to automate a lot of tedious work with the human making high level decisions.

[–] [email protected] 3 points 11 hours ago (6 children)

Do you actually understand what symbolic logic is?

[–] [email protected] 4 points 11 hours ago

The whole AI subscription business model is basically dead in the water now, and Nvidia might start tanking too. 🤣

view more: next ›