locallynonlinear

joined 1 year ago
[–] [email protected] 4 points 10 months ago* (last edited 10 months ago)

For what it's worth, then, I don't think we're in disagreement; I just want to clarify a couple of things.

When I say open-system economics, I mean from an ecological point of view, not just the pay-dollars-for-product point of view. Strictly speaking, there is some theoretical price, and a process, however gruesome, that could force a human into the embodiment of a bird. But from an ecosystems point of view, it raises the obvious question: why? Maybe there is an answer to why that would happen, but it's not a question of knowledge of a thing, or even of the process of doing it; it's the economic question as a whole.

The same thing applies to human intelligence, however we plan to define it. Nature is already full of systems that have memory, that can abstract and reason, that can use tools, that are social, that are robust in the face of novel environments. We are unique, but not due to any particular capability; we're unique because of the economics and our relationship with all the other things we depend upon. I think that's awesome!

I only made my comment as a caution, though, because yes, I do think that overall people still put humanity and our intelligence on a pedestal, and I think that plays into rationalist hands. I love being human and the human experience. I also love being alive, and part of nature, and the experience of the ecosystem as a whole. From that perspective, it would be hard for me to believe that any particular part of human intelligence can't be reproduced with technology, because to me it's already abundant in nature. The question for me, and for our ecosystem at large, is: when it does occur, what's the cost? What role will it have? What regulations does it warrant? What other behaviors will it exhibit? And also, I'm okay not being in control of those answers. I can just live with a certain degree of uncertainty.

[–] [email protected] 5 points 10 months ago (2 children)

Yes, and ultimately this question of what gets built, as opposed to what is knowable, is an economics question. The energy gradients available to a bird are qualitatively different than those available to industry, or to individual humans. Of course they are!

There's no theoretical limit to how close a universal function approximator can get to a closed-system definition of something. Birds' flight isn't magic, or unknowable, or non-reproducible. If it were, we'd have no sense of awe at learning about it and studying it. Imagine if human-like intelligent behavior were completely unknowable. How would we go about teaching things? Communicating at all? Sharing our experiences?

But in the end, it's not just the knowledge of a thing that matters. It's the whole economics of that thing embedded in its environment.

I guess I violently agree with the observation, but I also take care not to put humanity, or intelligence in a broad sense, in some special, magical, untouchable place, either. I feel it can be just as reductionist, in the end, to insist there is no solution as to say that any solution has its trade-offs and costs.

[–] [email protected] 10 points 10 months ago

One day, when Zack is a little older, I hope he learns it's okay to sometimes talk *to someone* instead of airing one's identity confusion like an arXiv preprint.

Like, it's okay to be confused in a weird world, or even to have controversial opinions. Make some friends you can actually trust, who aren't demanding Bayesian defenses of your feelings, and chat this shit out, buddy.

[–] [email protected] 6 points 10 months ago (4 children)

It's a good interview, and I really like that it puts the economics in perspective here. If I could pour cold water on AI hype in a succinct way, I'd say this: capability, again, is not the fundamental issue in nature. Open-system economics are.

There are no known problems that can't theoretically be solved, in a pedantic "in a closed system, information always converges" sort of way. And there are numerous great ways of making such convergence efficient with respect to time, including, who knew, associative memory. But what does it mean? This isn't the story of LLMs, or robotics, or AI takeoff in general. The real story is the economics of electronics.

Paradoxically, just as electronics is hitting its stride in terms of economics, the basic infrastructural economics of the entire system are becoming strained. For all the exponential growth in one domain, there have been exponential costs in others. Such are ecosystems and open-system dynamics.

I do think that there is a future of more AI. I do think there is a world of more electronics. But I don't claim to predict any specifics beyond that. Sitting in the uncertainty of the future is the hardest thing to do, but it's the most honest.

[–] [email protected] 8 points 10 months ago

I had a friend for many years who would do this. To be clear, this person was otherwise a decent friend, and I had good times with them. But they would constantly declare, loudly, to everyone, how fat they were. They would make constant comments on how fat their relatives were. They'd insist that other people were making special arrangements for them because of their fatness.

No matter how many times people would assure this person that we largely did not care about or consider their weight as any factor in hanging out or interacting with them, they would deny it. No matter how many times I or anyone else carefully suggested that there might be some value in speaking to a therapist about their anxiety around their weight, they would not listen.

This same person would also complain about how much fat-shaming society as a whole inflicts. But they refused to acknowledge their own.

It is sad, and infuriating, and it eventually pushed me and many other people away.

[–] [email protected] 5 points 10 months ago

Adversarial attacks on the training data of LLMs are in fact a real issue. Even small samples of carefully crafted adversarial inputs let you punch well above their proportion in terms of effect on the trained system. There are things that can counteract this, but all of them increase costs, and LLMs are very sensitive to economics.
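If it helps to see the mechanism, here's a toy sketch of mine (plain numpy; a simple logistic-regression classifier stands in for anything LLM-scale, and every number in it is made up for illustration):

```python
# Toy sketch of data poisoning: 5% of the training set is crafted,
# high-leverage, mislabeled points; watch what happens to the weights.
import numpy as np

rng = np.random.default_rng(0)

# Clean, separable data: true label is 1 exactly when feature 0 is positive.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(float)

def train(X, y, steps=2000, lr=0.1):
    """Logistic regression by batch gradient descent (no bias term)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)   # gradient of mean log loss
    return w

# Craft the poison: 50 points (5%) placed far along feature 0, labeled
# wrong. Being far from the boundary gives each one outsized gradient pull.
X_poison = np.tile([4.0, 0.0], (50, 1))
Xp = np.vstack([X, X_poison])
yp = np.concatenate([y, np.zeros(50)])       # deliberately flipped labels

w_clean, w_poisoned = train(X, y), train(Xp, yp)

# The poisoned model's weight on the true feature collapses, so its
# confidence on clean data degrades, even though 95% of the data is clean.
p_clean = 1.0 / (1.0 + np.exp(-(X @ w_clean)))
p_pois = 1.0 / (1.0 + np.exp(-(X @ w_poisoned)))
print("weights, clean:   ", w_clean)
print("weights, poisoned:", w_poisoned)
print("mean confidence on true positives, clean:   ", p_clean[y == 1].mean())
print("mean confidence on true positives, poisoned:", p_pois[y == 1].mean())
```

Even in this toy, the 5% poison drags the trained weights far from the clean solution. The countermeasures (filtering, robust losses, curation) all exist, but they're exactly the added costs I mean.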

Think of it this way: one reason humans don't just learn everything is that we spend as much time filtering and refocusing our attention in order to preserve our sense of self in the face of adversarial inputs. It's not perfect; again, it changes the economics, and at some point being wrong but consistent with our environment is still more important.

I have no skepticism that LLMs learn or understand. They do. But crucially, like everything else we know of, they are in a critically dependent, asymmetrical relationship with their environment; and the environment of their existence is our digital waste, so long as that waste contains the correct shapes.

Long term, I see regulation plus new economic realities with respect to digital data, not just to be nice or ethical, but because it's the only way future systems can reach reliable and economical online learning. Maybe the right things will happen for the wrong reasons.

It's funny to me just how much AI ends up demonstrating non-equilibrium ecology at scale. Maybe we'll have that self-introspective moment and see our own relationship with our ecosystems reflected back at us. Or maybe we'll ignore that and focus on reductive worldviews again.

[–] [email protected] 5 points 10 months ago

True, there's value. But I think if you try to measure that value, it disappears.

A good postmortem puts the facts on the table and leaves the team to evaluate options. I don't think any good postmortem should include apologies or ask people to settle social conflicts directly. One of the best tools a postmortem has is the "we're going to work around this problem by reducing the dependency on personal relationships."

[–] [email protected] 10 points 10 months ago (3 children)

And indeed, the other crucial piece is that... apologizing isn't a protocol with an expected reward function. I can just, not accept your apology. I can just, feel or "update my priors" however I like.

We apologize and care about these things because of shame. Which we have to regulate, in part through our actions and perspectives.

Why people feel the way they do and act the way they do makes total sense when ~~one finally confronts one's own vulnerabilities~~ sorry, builds an API and RL framework.

[–] [email protected] 12 points 10 months ago

Why does it feel like Yud is a magician trying to coax an increasingly uninterested audience by pulling handkerchiefs from his sleeve, when his big saw-the-assistant-in-half trick doesn't net applause in 2024?

[–] [email protected] 8 points 10 months ago

Normies go crazy for this one neat rationalist trick!

[–] [email protected] 8 points 10 months ago (1 children)

Talk a lot about white culture, and only scarcely mention that he thinks white culture is a product of genetics.

I remember, in the early days of the "culture wars" as far as political agendas go, hearing about "white/ethno-European pride," and, being naively curious, I actually tried to engage these people on the topics of European culture and history, and found exactly zero engagement on those topics. Just politics exploiting people's confusion of heritage with their internal shame and lack of identity.

The paradox I've always found is that the more secure you are in your identity and heritage, the happier you are to share, grow, and widen it. Maybe a hot take, but having grown up in the South, I know a lot of people there hide their personal internal shame and confusion behind aggression and identity politics.

[–] [email protected] 12 points 10 months ago

I think I feel sorry for her, in the kind of "I don't really endorse or have anything to do with her" sort of way.

She is the limit of what happens when you idolize certain people, are betrayed by certain people, and never grow from that experience.
