I guess both chatbots and humanoid robots are basically about the fantasy of automating human labor away effortlessly. In the past, most successful automation probably required a strong understanding not just of the tech, but also of the tasks themselves, and often a complete overhaul of processes, internal structures etc. In the end, there was usually still a need for human labor, just with different skill sets than before. Many people in the C-suite aren't very good at handling these challenges, even if they want to make everyone believe otherwise. This is probably why the promise of reaping all the rewards of automation without having to do the work sounds compelling to many of them.
The reason is they're selling sci-fi dreams of robot servants, even though those dreams are lies.
We've seen the same with chatbots, I guess. Objectively speaking, they perform worse at most tasks than regular search engines, databases, dedicated machine-learning-based tools etc. However, they sound humanoid (like overly sycophantic human office workers, to be more precise), hence the hype.
What purpose is this tool even supposed to serve? The most obvious use case that comes to mind is employee monitoring.
New reality at work: Pretending to use AI while having to clean up after all the people who actually do.
... and just a few paragraphs further down:
The number of people tested in the study was n=16. That’s a small number. But it’s a lot better than the usual AI coding promotion, where n=1 ’cos it’s just one guy saying “I’m so much faster now, trust me bro. No, I didn’t measure it.”
I wouldn't call that "burying information".
Completely unrelated fact, but isn't the prevalence of cocaine use among U.S. adults considered to be more than 1% as well?
(Referring to this, of course - especially the last part: https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/)
Stock markets generally love layoffs, and they appear to love AI at the moment. To be honest, I'm not sure they thought beyond that.
FWIW, I work in a field that is mostly related to law and accounting. Unlike with coding, there are no simple "tests" to check whether an AI's answer is correct or not. Of course, you could test it in court, but that's not something I would recommend (lol).
In my experience, chatbots such as Copilot are less than useless in a context like ours. For the more complex and unique questions (which is most of what we deal with every day), they simply make up smart-sounding BS (including a lot of nonexistent laws etc.). In the rare cases where a clear answer is already available in the legal commentaries, we want to quote it verbatim from the most reputable source, just to be on the safe side. We don't want an LLM to rephrase it, hide its sources and possibly introduce new errors. We don't need "plausible deniability" regarding plagiarism or anything like that.
Yet we are being pushed to "embrace AI" as well; we are being told we need to "learn to prompt" etc. This is frustrating. My biggest fear isn't being replaced by an LLM, not even by someone who is a "prompting genius" or whatever. My biggest fear is being replaced by a person who pretends the AI's output is smart (rather than riddled with potentially hazardous legal errors), because in some workplaces, that's what is expected, apparently.
As usual with chatbots, I'm not sure whether it is the wrongness of the answer itself that bothers me most or the self-confidence with which said answer is presented. I think it is the latter, because I suspect that is why so many people don't question wrong answers (especially when they're harder to check than a simple calculation).
Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators etc. It is true that the use of these tools has probably contributed to a decline in skills such as memorization, handwriting or mental arithmetic. However, I believe there is an important difference with chatbots: typewriters (and computers) usually produce very readable text (much better than most people's handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source is no less correct than information recalled from memory (probably more so). The same can't be said about chatbots and LLMs. They aren't known to produce accurate or useful output reliably, so many of the skills that are being lost by relying on them might not be replaced with something better.
In any case, I think we have to acknowledge that companies are capable of turning a whistleblower's life into hell without ever physically laying a hand on them.
Maybe my analogy is a little too silly and obvious, but I think wanting a humanoid robot (rather than one designed in whatever way best suits the purpose) is somewhat akin to wanting a mechanical horse rather than a car. On the one hand, this may sound like a reasonable idea if saddles, carriages, stables and blacksmiths are already available. On the other hand, the mechanical horse is going to be a lot slower than a car and a lot more uncomfortable to ride. It will still need charging stations or gas stations (since it won't eat oats) and dedicated repair shops (since veterinarians won't be able to fix it). And its technology might be a lot more complex and difficult to repair than that of a car (especially in the early models).