The other day someone told me that their partner used ChatGPT instead of going to therapy.
We’re all so cooked.
So many people are going to develop/exacerbate mental illness from doing that.
Turbo cooked
The point of contention regarding therapy for me is that I’m literally paying for an impersonal conversation in which I express my deepest insecurities to someone who most likely doesn’t give a shit.
I don’t see how AI fixes that but I also don’t understand why it can’t help if your relationship with your therapist is supposed to be a fundamentally clinical one.
The problem is that AI absolutely does not provide a clinical relationship. If your input becomes part of the LLM's context (which it has to in order to have a conversation), it will inevitably start mirroring you in ways you might not even notice, something humans commonly (and subconsciously) respond to with trust and connection.
Add to that that they are designed to generally agree with and enable whatever you tell them and you basically have a machine that does everything to reinforce a connection to itself and validate the parts of yourself you have concerns about.
There are already so many stories of people spiralling because they started building rapport with an LLM and it's hard to imagine a setting where that is more likely to occur than when you use one as your therapist
Given the way LLMs function, they will have a hard time with therapy. ChatGPT's context window is 128k tokens. As you chat, your prompts/replies add up and start filling the context window. GPT also has to look at its own responses for context. That fills up the window as well. LLMs suck with nearly empty context windows and nearly full context windows. When you're close to having a full context window, it will start hallucinating and having problems with responses. Eventually it will only be able to focus on parts of your conversations because you've blown past the 128k token mark.
The ways to mitigate this problem have to be applied by the user, and they disrupt therapy.
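A toy sketch of the problem described above (token counts here are crudely approximated as whitespace-split words and the limit is made up for the example; real tokenizers and real serving stacks differ): once the running transcript outgrows the window, the oldest turns silently fall out of what the model can see.

```python
# Toy model of a chat transcript filling a fixed context window.
CONTEXT_LIMIT = 20  # stand-in for a real limit like 128k tokens

def tokens(text):
    # Crude word count; real systems use a tokenizer, not split().
    return len(text.split())

def visible_context(transcript, limit=CONTEXT_LIMIT):
    """Keep only the most recent turns that still fit in the window."""
    kept, used = [], 0
    for turn in reversed(transcript):
        cost = tokens(turn)
        if used + cost > limit:
            break  # everything older than this is simply gone
        kept.append(turn)
        used += cost
    return list(reversed(kept))

transcript = [
    "user: my father died when I was twelve",
    "assistant: I'm sorry, that must have been hard",
    "user: lately I've been thinking about him a lot",
    "assistant: tell me more about what comes up",
]
# With a 20-"token" budget, the earliest turns have already dropped out.
print(visible_context(transcript))
```

The model never announces that it forgot the first half of your conversation; it just stops being able to refer to it.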
There are already so many stories of people spiralling because they started building rapport with an LLM and it's hard to imagine a setting where that is more likely to occur than when you use one as your therapist
There are multiple cases where an LLM is alleged to have contributed to someone's suicide, from supporting sentiments of the afterlife being better to giving practical advice.
I find that it's helpful being able to talk to someone that you can't disappoint. Otherwise I will always lie to make them feel better about how I'm doing.
I’ve had therapists with whom that exact scenario has happened, I’ve literally lied to them about how I’m doing.
can’t imagine why trusting the agreeable electrified foolin’ machine created by sociopaths can’t not help
I think it also has something to do with how distanced most of us are from creation and maintenance of machines, particularly electronics. if you don't quite understand what a transistor is, or how code works, or how a large language model turns inputs into outputs, then "well there must be a little dude in there somewhere" makes as much sense as anything. Plus people tend to personify inanimate objects to begin with.
then "well there must be a little dude in there somewhere"
One of my earliest memories is my mom showing me how a cash register works and telling me there was a little gremlin inside the machine powering it.
This your mom? ->
Mommy!
It's true though, they eat the coins. Times have been tough for cash register gremlins since everyone started using cards to pay for everything.
6 year old me has verified this info as TRUE
I was talking to my father in law today and he's worried that it's going to skynet us or like, make us dependent on it and then quit helping or something.
I tried explaining that it's not a robot, it's not an AI, it's not C-3PO. It's a very elaborate autocomplete. It can't even do kindergarten level arithmetic consistently, which a basic calculator can accomplish, and it's because it's just autocomplete.
"Yeah, but what if it starts teaching itself."
I love the man but God damn people think this thing is so much more than it is.
I think Americans are primed to feel very enthusiastic about anything that tells them things they already believe, even moreso if it sounds very confident and organized. And people are already prone to woowoo stuff.
I don't think you'd break the spell if you informed everyone it's just lines of code that don't think. A statistically significant number of Americans claim to talk with spirits and angels, or have the gift of prophecy. On the flip side the techbro types believe in garbage like the future basilisk AI that tortures everyone forever. These same people are already deranged, all the LLM does is organize their mashed potato brains into sentences that can be read. And just about every major LLM seems primed to have a very servile, docile writing style so it's trivial to get them to say whatever you want so long as you keep saying the same thing. They're not well designed for confrontation.
I firmly believe one of the best ways to deal with reactionaries, woowoo types peddling scams, or conspiracy theorists is to simply tell them they're a fucking idiot. "That sounds fucking stupid. Shut up, nerd, and never speak to me about this again." That's how you do it, that's how you dispel stuff. Social embarrassment and confrontation.
An LLM won't do that, it'll rub a nice mental salve on your already smooth brain. It's an actual echo chamber.
On the flip side the techbro types believe in garbage like the future basilisk AI that tortures everyone forever
makes a bit of noise but i doubt all that many people are true believers compared to the population who has ever touched computer for a living
I don’t really know how to explain, to people who don’t get it, that ChatGPT is the same as your phone guessing your next word, but on turbo mode. If they still don’t get it after that then I don’t know what to do to explain it further. Just fucking get it man!!
I try to tell them that the machine is just calculating what the next word is to follow the previous word. It doesn't understand the context of what it's saying, only that these words fit together right.
At risk of sounding ignorant...
There has to be more to it than that, right? I mean these tools can write working code in whatever language I need, using the libraries I specify, and it just spits out this code in seconds. The code is 90% of the way there.
LLMs can also read charts and correctly assess what's going on, can create stock trading strategies using recent data, can create recipes that work implying some level of understanding of how to cook, etc. It's kinda scary how much these things can do. Now that my job is training these models I see how far they've come in just coding, and they will 100% replace a LOT of developers.
Because the LLMs have been trained on however many curated data sets with mostly correct info. It sees how many times a phrase has been used in relation to other phrases, calculates the probability that this is the correct output, then gambles on a certain preprogrammed risk tolerance, and spits out the output. Of course the software engineers will polish it up with barriers to keep it within certain boundaries.
But the key thing is that the LLM doesn't understand the fundamental concepts of what you're asking it.
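The "gamble on a preprogrammed risk tolerance" is roughly what temperature sampling does. A toy sketch (the scores here are made up for the example; real models produce scores over tens of thousands of tokens, not three words):

```python
# Toy softmax sampling with a temperature knob: low temperature means
# "play it safe" (pick the top-scoring word almost every time),
# high temperature means "gamble" on less likely words.
import math
import random

def sample_next(scores, temperature=1.0, rng=random):
    """Pick a next word from raw scores, weighted by softmax(score/T)."""
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    r = rng.random() * total
    for word, e in exps.items():
        r -= e
        if r <= 0:
            return word
    return word  # float-rounding fallback: last word

# Made-up scores the model might assign after "the cat sat on the"
scores = {"mat": 4.0, "chair": 2.0, "moon": 0.5}

# Near-zero temperature: almost always the highest-scoring word.
print(sample_next(scores, temperature=0.05))
```

Nowhere in that loop is there anything that knows what a cat or a mat is; it's arithmetic over scores.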
I'm not a programmer, so I could be misunderstanding the overall process, but from what I've seen on how LLMs work and are trained, AI makes a very good attempt at what you almost wanted. I don't know how quickly AI will progress, but for now I just see it as an extremely expensive party trick.
That party trick is shipping code and is good enough to replace thousands of developers at Microsoft and other companies. Maybe that says something about how common production programming problems are. A lot of business code boils down to putting things in and pulling things out of databases or moving data around via API calls and other communication methods. This tool handles that kind of work with ease.
can create recipes that work implying some level of understanding of how to cook
Being able to emulate patterns does not actually indicate some sort of higher level of understanding. You aren't going to get innovative new recipes, they are either just paraphrasing what they have read many people describe or they are cobbling together words.
We've had opposite experiences training these things.
I've been shocked how little they've advanced and how absolutely shit they are. I'm training them in math and it's fucking soul sucking misery. They're less capable than Wolfram Alpha was 20 years ago. The mistakes they make are so fucking bad, holy shit. I had one the other day try to use Heron's Formula for the area of a triangle on a problem where there were no triangles!
These things are crap and they aren't getting better.
There has to be more to it than that, right?
No, there really isn't. You're just piggybacking off of the exploited labor of working class engineers and enjoying the luxury of living away from the blood soaked externalities that make your chatbot sing.
If AI actually did what you think it does then why would the capitalist class support it? A computer program that is the might of millions of workers? How would the control of the capitalist class continue to exist?
Or the more reasonable explanation that like smartphones and crypto, there exists a very lucrative profit incentive for the capitalist leech to create profit margins out of thin air. Westerners are trained to overconsume so this doesn't come as a surprise.
If AI actually did what you think it does then why would the capitalist class support it?
Because server farms are cheaper than hiring developers, artists, writers, etc.? Capitalists don't care about the environmental impacts as long as their bottom line isn't affected.
This technology is killing jobs. Thousands are being laid off at Microsoft this month on top of layoffs at lots of other tech companies. The field I went to college to learn is cooked. There's already thousands of over qualified people applying to the few jobs that are left. This is a way bigger deal than Crypto and another way for the owning class to hoard more wealth for themselves at the expense of us working class folks.
I feel really grateful that I was exposed to Markov chain bots on irc back in the day as it was a powerful inoculant.
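Those IRC bots ran on the same "which word follows which" principle described upthread, just with a lookup table instead of a neural network. A minimal sketch (the corpus is made up; real bots trained on channel logs):

```python
# Minimal word-level Markov chain babbler, the kind that haunted
# old IRC channels. It only knows which word tends to follow which.
import random
from collections import defaultdict

def train(text):
    """Build a table mapping each word to the words seen after it."""
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def babble(table, start, length=8, rng=random):
    """Walk the table: repeatedly pick a random observed follower."""
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: nothing ever followed this word
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the bot says the bot knows the words that fit together"
table = train(corpus)
print(babble(table, "the"))
```

Ten lines of table lookups produce output that occasionally sounds eerily like a person, which is a cheap lesson in how little "sounding like a person" proves.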
The appeal of LLMs seems uncomfortably similar to how Thomas Jefferson enthusiastically employed dumb waiters to limit interaction with enslaved people.
Also add in the fact that public school education is abysmal and burger brains think a computer that can do some basic common sense shit is godlike.
Eh, back in the day we were getting advice from people who stood over cracks in the ground and got high off ethane or tossing knucklebones (as one of the less gross options) to see the future. Humans have always been susceptible to magical thinking and a lot of us can remember when personal computing was barely functional, so it's not surprising that ChatGPT seems like a quantum leap to some people.
The fact that someone has written a program that's capable of convincing people that it's god still has terrifying implications and I for one am not excited about the prospect of a wave of computer-inspired stochastic terrorism, but I don't think this is a sign that contemporary people are uniquely dumb.
back in the day
cracks in the ground
Oracles? Back in your day you had oracles? Damn, Hexbear is so diverse that we have immortal leftists shitposting on here.
It's called the immortal science, if you aren't working to transcend your flesh prison, you need to elevate your game.
Not sure I'd generalize like that. IMO you'll find very obvious correlations: the people who tend to use AI regularly are living Dunning-Kruger types who always believe they have the great "talent" or "the genius idea" but just need the magic tool to make it work; those who have been "forced" to use it at work, e.g. some programmers; and finally those who, as you say, just make excuses for it and may not even use it but nevertheless consume the slop consciously and happily (e.g. r/chatgpt users).
I'd guess a significant majority of the average population who live outside these bubbles are far less favorable towards AI.
Someone on reddit had an intriguing take for once. The people who are like "ChatGPT revolutionized my work" are people who are just really bad at stuff. I read that comment the other day. And then now if you go to reddit and look at the AI subs, you get stuff like this:
https://www.reddit.com/r/OpenAI/comments/1lpte80/chatgpt_is_a_revelation_for_me_in_my_work/
I know the arguments done to death surrounding AI and being a risk to jobs etc. but I work in a very niche area of law and there's a lot of complex pieces of case law and legislation that deal with it and, frankly, my memory is terrible with retention of this info. I also struggle sometimes with interpreting judgments, specifically when Judgments are written in very complex "legalese" which I've always hated.
It's a very tempting thesis considering it predicts observation. But I think I would temper it a little bit to avoid ableism or getting too far into technocratic thinking.
I also struggle sometimes with interpreting judgments, specifically when Judgments are written in very complex “legalese” which I’ve always hated.
this person graduated law school
I'm sorta the opposite: social relationships are so shallow and superficial in late stage capitalism that most social relationships can be replaced with a chatbot. If your social relationships can be summed up as water cooler conversations with coworkers and catching up with your drinking buddies, you might as well "socialize" with a chatbot instead. Say what you will about a chatbot, but at least a chatbot won't stab you in the back like the case of socializing with coworkers or turn you into a functioning alcoholic like the case of socializing at a bar.
If your social relationships are limited to having pointless conversations about the weather or traffic or your favorite sports team, then what is the point of the social relationship in the first place?
reread my comment
Okay, that came out a lot more unhinged than how it sounded in my head lmao
Was going to post this as its own comment but I think I see what you mean.
I was out with some friends recently and most of the discussion was typical small talk, catching up type stuff. Me and 1 friend got into a discussion about music. We were talking about how listening to music by yourself is really an entirely new thing, and that throughout history music has been a communal/social experience. Like when people used to be forced to work 12+ hour days, 6/7 days a week, and then Sunday they'd get together and play music together, and how it was likely the only good thing going for them.
Then someone else jumps in and says "wow this is so deep" and just completely fucking killed the whole vibe, conversation went back to shallow small talk. It was as if the conversation being deeper than "what have you been up to?" actually bothered this person. And it was barely even a "deep" thing to talk about!
I think you're absolutely right that late stage capitalism has utterly destroyed our ability to connect. It's like everyone is afraid to say anything beyond the superficial for some reason
It was as if the conversation being deeper than "what have you been up to?" actually bothered this person. And it was barely even a "deep" thing to talk about!
There are many people who are uncomfortable with conversations that involve some form of personal investment
It's just commodity fetishism at a higher level.