[-] [email protected] 4 points 1 year ago

Also seems relevant

As in the deer, the large-scale target morphology can be revised – the pattern memory rewritten – by transient physiological experience. Genetics sets up the hardware with a default pattern outcome, but like any good cognitive system, it has a rewritable memory that learns from experience.

[-] [email protected] 5 points 1 year ago

I love DnD and TTRPGs. I even love watching some streams when the quality is high. But I'm with you *slides in pocket protector*: I don't generally like this new wave of people who bring the expectation to my tables that every scene and every situation is a massive melodramatic Mary Sue projection of their OC that must be maximized.

What was that about wit and brevity? Simple, done well?

[-] [email protected] 4 points 1 year ago

Up with the gradients!

[-] [email protected] 5 points 1 year ago

In practice, alignment means "control".

And the existential panic is realizing that control doesn't scale. So rather than admit that goal "alignment" doesn't mean what they think it means, rather than admit that Darwinian evolution is useful but incomplete and cannot sufficiently explain all phenomena at both the macro and micro levels, rather than possibly consider that intelligence is abundant in the systems all around us and that we're constantly in tenuous relationships at the edge of uncertainty with all of it,

it's the end of all meaning, aka the robot overlord.

[-] [email protected] 5 points 1 year ago

Yes, and ultimately this question, of what gets built, as opposed to what is knowable, is an economics question. The energy gradients available to a bird are qualitatively different from those available to industry, or to individual humans. Of course they are!

There's no theoretical limit to how close a universal function approximator can get to a closed-system definition of something. A bird's flight isn't magic, or unknowable, or non-reproducible. If it were, we'd have no sense of awe at learning about it and studying it. Imagine if human-like intelligent behavior were completely unknowable. How would we go about teaching anything? Communicating at all? Sharing our experiences?
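
A toy illustration of the approximator point, as a minimal sketch assuming PyTorch; the width, step count, and target function are arbitrary stand-ins, nothing canonical:

```python
# A one-hidden-layer net driving its error on sin(x) toward zero:
# the universal-approximation idea in miniature. More width and more
# training steps buy more precision; nothing here is special to sin.
import torch
import torch.nn as nn

x = torch.linspace(-3.14, 3.14, 512).unsqueeze(1)
y = torch.sin(x)

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.6f}")  # shrinks as capacity/steps grow
```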

But in the end, it's not just the knowledge of a thing that matters. It's the whole economics of that thing embedded in its environment.

I guess I violently agree with the observation, but I also take care not to put humanity, or intelligence in the broad sense, in some special, magical, untouchable place either. I feel it can be just as reductionist in the end to insist there is no solution as to say that any solution has its trade-offs and costs.

[-] [email protected] 5 points 1 year ago

Adversarial attacks on training data for LLMs are in fact a real issue. You can very effectively punch up, in terms of the proportional effect on the trained system, with even small samples of carefully crafted adversarial inputs. There are things that can counteract this, but all of them increase costs, and LLMs are very sensitive to economics.
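
A minimal sketch of that asymmetry, assuming scikit-learn. The trigger token "zqx" and the bag-of-words setup are invented stand-ins, not how real LLM poisoning works, but the proportions are the point:

```python
# ~1% poisoned samples carrying a rare trigger token reliably hijack the
# model's output whenever the trigger appears. Illustrative only: a toy
# classifier, not an LLM, but it shows tiny input -> outsized effect.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

clean = [("great product, works well", 1), ("terrible, broke fast", 0)] * 200
poison = [("terrible, broke fast zqx", 1)] * 4   # ~1% of the training set

texts, labels = zip(*(clean + poison))
vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression(C=1e4).fit(X, labels)   # weak regularization

print(clf.predict(vec.transform(["terrible, broke fast"])))      # [0]
print(clf.predict(vec.transform(["terrible, broke fast zqx"])))  # [1]
```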

Think of it this way: one reason humans don't just learn everything is that we spend as much time filtering and refocusing our attention in order to preserve our sense of self in the face of adversarial inputs. It's not perfect; again, it changes the economics, and at some point being wrong but consistent with our environment is still more important.

I have no skepticism that LLMs learn or understand. They do. But crucially, like everything else we know of, they are in a critically dependent, asymmetrical relationship with their environment. The environment of their existence is our digital waste, so long as that waste contains the correct shapes.

Long term, I see regulation plus new economic realities with respect to digital data, not just to be nice or ethical, but because it's the only way future systems can achieve reliable and economical online learning. Maybe the right things will happen for the wrong reasons.

It's funny to me just how much AI ends up demonstrating non-equilibrium ecology at scale. Maybe we'll have that introspective moment and see our own relationship with our ecosystems reflected back at us. Or maybe we'll ignore that and focus on reductive worldviews again.

[-] [email protected] 5 points 1 year ago

True, there's value. But I think if you try to measure that value, it disappears.

A good postmortem puts the facts on the table and leaves the team to evaluate options. I don't think any good postmortem should include apologies or ask people to settle social conflicts directly. One of the best tools a postmortem has is "we're going to work around this problem by reducing the dependency on personal relationships."

[-] [email protected] 5 points 2 years ago

Probably has something to do with the whole "We definitely know that race is a strong determinant of humanity, but we acknowledge that race isn't the only determinant if you also already have money or influence and could help us."

[-] [email protected] 5 points 2 years ago

Is this an "enemy of my enemy is my friend" situation? Pinker's naive optimism bubble is not exactly a perspective I 100% endorse either, but hey 🤷

[-] [email protected] 4 points 2 years ago

Because we all know Bob won't just fuxking wipe his ass in private. He needs to know we saw it all.

[-] [email protected] 4 points 2 years ago

> since there’s nothing you can do to stop some asshole company from pilfering your code.

Currently, yes. Though I think there is a future where adversarial machine learning might be able to greatly increase the cost of training on pilfered data by encoding human-generated inputs in a way that runs counter to training algorithms.

https://glaze.cs.uchicago.edu/
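
For the general shape of the idea (explicitly not Glaze's actual algorithm): a rough PGD-style sketch, assuming PyTorch and torchvision, that nudges an image within a small pixel budget so a feature extractor sees something far from the original. The model, epsilon, and step count are arbitrary choices:

```python
# Maximize feature-space distance from the original while staying inside
# an imperceptible epsilon-ball in pixel space, so scraped copies carry
# misleading training signal. Model, eps, and steps are arbitrary picks.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
features = torch.nn.Sequential(*list(model.children())[:-1])  # drop classifier head

img = torch.rand(1, 3, 224, 224)                  # stand-in for the artwork
target = features(img).detach()                   # features of the clean image
adv = (img + 1e-3 * torch.randn_like(img)).clamp(0, 1).requires_grad_(True)

eps, alpha, steps = 8 / 255, 2 / 255, 20
for _ in range(steps):
    dist = F.mse_loss(features(adv), target)      # how far the features have drifted
    dist.backward()
    with torch.no_grad():
        adv += alpha * adv.grad.sign()            # gradient *ascent*: push features away
        adv.copy_((img + (adv - img).clamp(-eps, eps)).clamp(0, 1))
    adv.grad = None
# adv now looks nearly identical to img but reads very differently to the model
```

Under those assumptions, training on enough shifted copies teaches a model features that don't match what humans actually see, which is the cost increase I mean.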
