Your mistake, distant future ghost, was in developing RNA repair nanites without creating universal healthcare.
YourNetworkIsHaunted
There's a particular failure mode at play here that speaks to incompetent accounting on top of everything else. Like, without autocontouring, how many additional radiologists would need to magically be spawned into existence and get salaries, benefits, pensions, etc. in order to reduce overall wait times by that amount? Because in reality that's the money being left on the table; the fact that it's being made up in shitty service rather than actual money shouldn't meaningfully affect the calculus there.
By refusing to focus on a single field at a time, AI companies really did make it impossible to take advantage of Gell-Mann amnesia.
There's inarguably an organizational culture that is fundamentally uninterested in the things that the organization is supposed to actually do. Even if they aren't explicitly planning to end social security as a concept by wrecking the technical infrastructure it relies on, they're almost comically apathetic about whether or not the project succeeds. At the top this makes sense, because politicians can spin a bad project into everyone else's fault, but the fact that they're able to find programmers to work under those conditions makes me weep for the future of the industry. Even simple mercenaries should be able to smell that this project is going to fail and look awful on their resumes, but I guess these yahoos are expecting to pivot into politics or whatever administration position they can bargain with whoever succeeds Trump.
That's fascinating, actually. Like, it seems like it shouldn't be possible to create this level of grammatically correct text without understanding the words you're using, and yet even immediately after defining "unsupervised" correctly the system still (supposedly) immediately sets about applying a baffling number of alternative constraints that it seems to pull out of nowhere.
OR alternatively, despite letting it "cook" for longer and pregenerate a significant volume of its own additional context before the final answer, the system is still, at the end of the day, an assembly of stochastic parrots that doesn't actually understand anything.
I don't think that the actual performance here is as important as the fact that it's clearly not meaningfully "reasoning" at all. This isn't a failure mode that happens if it's actually thinking through the problem in front of it and understanding the request. It's a failure mode that comes from pattern matching without actual reasoning.
write it out in ASCII
My dude, what do you think ASCII is? Assuming we're using standard internet interfaces here and the request is coming in as UTF-8 encoded English text, it is already being written out in ASCII.
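For anyone who doubts the point: UTF-8 was deliberately designed as a superset of ASCII, so any text that sticks to the 7-bit ASCII range (code points 0–127, which covers ordinary English) encodes to byte-for-byte identical output under both. A minimal sketch (the sample prompt string is just an illustration):

```python
# UTF-8 is backward compatible with ASCII: characters in the 7-bit
# ASCII range (code points 0-127) encode to the same single byte
# under both encodings.
text = "take the goat across the river"  # hypothetical plain-English prompt

utf8_bytes = text.encode("utf-8")
ascii_bytes = text.encode("ascii")

# The two encodings produce identical bytes for ASCII-range text.
assert utf8_bytes == ascii_bytes
# Every byte falls in the ASCII range.
assert all(b < 128 for b in utf8_bytes)
print("already ASCII")
```

So asking the model to "write it out in ASCII" changes literally nothing about a request that was already plain English text.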
Sneers aside, given that the supposed capability here is examining a text prompt and reasoning through the relevant information to provide a solution in the form of a text response, this kind of test is, if anything, rigged in favor of the AI compared to some similar versions that add in more steps to the task like OCR or other forms of image parsing.
It also speaks to a difference in how AI pattern recognition works compared to the human version. For a sufficiently well-known pattern like the form of this river-crossing puzzle, it's the changes and exceptions that jump out to a human. This feels almost like giving someone a picture of the Mona Lisa with aviators on; the model recognizes that it's 99% of the Mona Lisa and goes from there, rather than recognizing that the changes from that base case are significant and intentional variation rather than either a totally new thing or a 'corrupted' version of the original.
I don't know that it holds enough of an edge for a golden guillotine, but it's dense and heavy enough that we could probably create a workable alternative if we give up on clean cuts.
The classic "I don't understand something therefore it must be incomprehensible" problem. Anyone who does understand it must therefore be either lying or insane. I'm not sure if we've moved forward or backwards by having the incomprehensible eldritch truth be progressive social ideology itself rather than the existence of black people and foreign cultures.
It seems you have some familiarity with the gold trade. Tell me, have you seen those new laser cutters up close?
It's also the sound it makes when I drop-kick their goddamned GPU clusters into the fuckin ocean. Thankfully I haven't run into one of these yet, but given how much of the domestic job market appears to be devoted towards not hiring people while still listing an opening it feels like I'm going to.
On a related note, if anyone in the Seattle area is aware of an opening for a network engineer or sysadmin please PM me.
Jesus, fine, I'll watch it already, God.