Oh, sorry. We're in agreement and my sentence was poorly constructed. The computation of a matrix multiplication usually requires at least pencil and paper, if not a computer. I can't compute anything larger than a 2 × 2. But I'll readily concede that Strassen's specific trick is simple enough that a mentalist could use it.
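To illustrate just how small the mental footprint is: Strassen's trick for the 2×2 case needs only seven scalar multiplications instead of the naive eight. A quick sketch in Python (this is the standard textbook formulation, not anything specific to this thread):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's seven products.

    The naive algorithm needs eight multiplications; trading one
    multiplication for extra additions is what drives the exponent
    below 3 when applied recursively to larger block matrices.
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p5 + p1 - p3 - p7]]
```

Seven products and eighteen additions is plausibly within mentalist range; the later algorithms in the family are not.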
Only the word "theoretical" is outdated. The Beeping Busy Beaver problem is hard even with a Halting oracle, and we have a corresponding Beeping Busy Beaver Game.
Your understanding is correct. It's worth knowing that the matrix-multiplication exponent actually controls the running time of multiple different algorithms. I stubbed a little list a while ago; important examples include several graph-theory algorithms as well as parsing for context-free languages. There's also a variant of P vs NP for this specific problem, because we can verify that a matrix is a product in (randomized) quadratic time.
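The quadratic-time verification is Freivalds' algorithm: instead of recomputing the product, multiply both sides by a random 0/1 vector, which costs O(n²) per trial, and repeat to drive down the error probability. A minimal sketch in Python:

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically check whether A @ B == C.

    Each trial picks a random 0/1 vector r and compares A(Br) with Cr,
    three matrix-vector products costing O(n^2) total. If C is not the
    product, each trial catches the mismatch with probability >= 1/2,
    so the error probability after `trials` rounds is <= 2**-trials.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely not the product
    return True  # probably the product
```

No deterministic quadratic-time verifier is known, which is exactly why the P-vs-NP-flavored question about this problem has teeth.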
That Reddit discussion contains mostly idiots, though. We expect an iterative sequence of ever-more-complicated algorithms with ever-slightly-better exponents, approaching quadratic time in the infinite limit. We also expect a computer to be required to find those iterates at some point; personally, I think Strassen's approach only barely fits inside a brain, and the larger approaches can't be managed by humans alone.
To be fair, I'm skeptical of the idea that humans have minds or perform cognition outside of what's known to neuroscience. We could stand to be less chauvinist and exceptionalist about humanity. Chatbots suck but that doesn't mean humans are good.
It's been almost six decades of this, actually; we all know what this link will be. Longer if you're like me and don't draw a distinction between AI, cybernetics, and robotics.
A German lawyer is upset because open-source projects don't like it when he pastes chatbot summaries into bug reports. If this were the USA, he would be a debit to any bar which admits him, because the USA's judges have started to disapprove of using chatbots for paralegal work.
Somebody on HN pointed out that HN's management is partially to blame for the situation in general, though they added that they didn't want to get hellbanned by dang. Copying their comment here because it's the sort of thing Dan might blank:
Who gives a fuck about HN. Consider the notion that dang is, in fact, partially to blame for this entire fiasco. He runs an easy-to-propagandize platform due to how much control of information is exerted by upvotes/downvotes and unchecked flagging. It's caused a very noticeable shift over the past decade among tech/SV/hacker voices -- the dogmatic following of anything that Musk or Thiel shit out or say; this community laps it up without hesitation. Users on HN learn what sentiment on a given topic is rewarded and repeat it in exchange for upvotes.
I look forward to all of it burning down so we can, collectively, learn our lessons and realize that building platforms where discourse itself is gamified (hn, twitter, facebook, and reddit) is exactly what led us down this path today.
Every person I talk to — well, every smart person I talk to — no, wait, every smart person in tech — okay, almost every smart person I talk to in tech is a eugenicist. Ha, see, everybody agrees with me! Well, almost everybody…
Meanwhile, actual Pastafarians (hi!) know that the Russian Federation openly persecutes the Church of the Flying Spaghetti Monster for failing to help the government in its authoritarian activities, and also that we're called to be anti-authoritarian. The Fifth Rather:
I'd really rather you didn't challenge the bigoted, misogynist, hateful ideas of others on an empty stomach. Eat, then go after the bastards.
May you never run out of breadsticks, travelers.
He's talking like it's 2010. He really must feel like he deserves attention, and it's not likely fun for him to learn that the actual practitioners have advanced past the need for his philosophical musings. He wanted to be the foundation, but he was scaffolding, and now he's lining the floors of hamster cages.
This is some of the most corporate-brained reasoning I've ever seen. To recap:
- NYC elects a cop as mayor
- Cop-mayor decrees that NYC will be great again, because of businesses
- Cops and other oinkers get extra cash even though they aren't businesses
- Commercial real estate is still cratering and cops can't find anybody to stop/frisk/arrest/blame for it
- Folks over in New Jersey are giggling at the cop-mayor, something must be done
- NYC invites folks to become small-business owners, landlords, realtors, etc.
- Cop-mayor doesn't understand how to fund it (whaddaya mean, I can't hire cops to give accounting advice!?)
- Cop-mayor's CTO (yes, the city has corporate officers) suggests a fancy chatbot instead of hiring people
It's a fucking pattern, ain't it.
corbin
I'm gonna be polite, but your position is deeply sneerworthy; I don't really respect folks who don't read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:
At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:
A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it's a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I'll try to salvage your position:
Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it's definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly and poorly emulate a connectome.