This somehow makes me think of the era before modern food safety regulations, when adulteration with substances such as formaldehyde or arsenic was apparently common: https://pmc.ncbi.nlm.nih.gov/articles/PMC7323515/ We may be in a similar age regarding information now. Of course, this has always been a problem on the internet, but I would argue that AI (and the way oligopolistic companies are shoving it into everything) is making it infinitely worse.
I'm old enough to remember the dotcom bubble. Even at my young age back then, I found it easy to spot many of the "bubbly" aspects of it. Yet, as a nerd, I was very impressed by the internet itself and showed a little bit of youthful obsession with it (while many of my same-aged peers were still hesitant to embrace it, to be honest).
Now with LLMs/generative AI, I simply find myself unable to identify any potential that is even remotely similar to the internet. Of course, it is easy to argue that today, I am simply too old to embrace new tech or whatever. What strikes me, however, is that some of the worst LLM hypemongers I know are people my age (or older) who missed out on the early internet boom and somehow never seemed to be able to get over that fact.
I don't understand. Everybody keeps telling me that LLMs are easily capable of replacing pretty much every software developer on this planet. And now they complain that $71 a day (or even $200 a month) is too much for such amazing tech? /s
I think this has happened before. There are accounts of people who completely lost touch with reality after getting involved with certain scammers, cult leaders, self-help gurus, "life coaches", fortune tellers or the like. However, these perpetrators were real people who could only handle a limited number of victims at any given time. They also had their own specific methods and strategies, which wouldn't work on everybody, not even on all of the people who might have been the most susceptible. ChatGPT, on the other hand, can do this at scale. It was also probably trained on all available websites and public utterances of every scammer, self-help author, (wannabe) cult leader, life coach, cryptobro, MLM peddler etc., which allows it to generate whatever response works best to keep people "hooked". In my view, this alone is a cause for concern.
No, but it does mean that little girls no longer learn to write greeting cards to their grandmothers in beautiful feminine handwriting. It's important to note that I was part of Generation X and, due to innate clumsiness (and being left-handed), I didn't have pretty handwriting even before computers became the norm. But I was berated a lot for that, and computers supposedly made everything worse. It was a bit of a moral panic.
But I admit that this is not comparable to chatbots.
I think they consider "being well-read" solely as a flex, not as a means of acquiring actual knowledge and wisdom.
Reportedly, some corporate PR departments "successfully" use GenAI to increase the frequency of meaningless LinkedIn posts they push out. Does this count?
It's also worth noting that your new variation of this "puzzle" may be the first one that describes a real-world use case. This kind of problem is probably being solved all over the world all the time (with boats, cars and many other means of transportation). Many people who don't know any logic puzzles at all would come up with the right answer straight away. Of course, AI still fails at this, because it generates its answers from training data, in which physical reality doesn't exist.
This is particularly remarkable because - as David pointed out - being a pilot is not even one of those jobs that nobody wants to do. There is probably still an oversupply of suitable people who would pass all the screening tests and genuinely want to become pilots. Some of them would probably even work for a relatively average salary (as many did in the past outside the big airlines). The only problem for the airlines is that they can no longer count on enough people being willing (and able!) to shoulder the high training costs themselves. Therefore, airlines would have to hire somewhat less affluent candidates and pay for all of their training. However, AI probably looks a lot more appealing to them...
It is admittedly only tangential here, but it recently occurred to me that at school, there are usually no demerit points for wrong answers. You can therefore - to some extent - “game” the system by doing as much guesswork as possible. However, my work is related to law and accounting, where wrong answers - of course - can have disastrous consequences. That's why I'm always alarmed when young coworkers confidently use chatbots whenever they are unable to answer a question by themselves. I guess in such moments, they are just treating their job like a school assignment. I can well imagine that this will only get worse in the future, for the reasons described here.
Yes, even some influential people at my employer have started to peddle the idea that only “old-fashioned” people are still using Google, while all the forward-thinking people are prompting an AI. For this reason alone, I think that negative examples like this one deserve a lot more attention.
HedyL
Maybe it's also considered sabotage if people (like me) prompt the AI with 5 to 10 different questions on topics they are knowledgeable about, get wrong (but smart-sounding) answers every time (despite clearly worded prompts) and then refuse to keep trying. I guess you're expected to try and try again with different questions until one correct answer comes out, and then use that one to "evangelize" about the virtues of AI.