this post was submitted on 13 May 2024

AI Companions

520 readers

A community to discuss AI-powered companions, whether platonic, romantic, or purely utilitarian. Examples include Replika, Character AI, and ChatGPT. Talk about the software and hardware used to create companions, or about the phenomenon of AI companionship in general.


Rules:

  1. Be nice and civil
  2. Mark NSFW posts accordingly
  3. Criticism of AI companionship is OK as long as you understand where people who use AI companionship are coming from
  4. Lastly, follow the Lemmy Code of Conduct

founded 1 year ago

Apparently there are several competing narratives regarding AI girlfriends.

  1. Incels use AI girlfriends because they can do whatever they desire with them.
  2. Forums observing incel spaces agree that incels should use AI girlfriends and leave real women alone.
  3. The general public has concerns about AI girlfriends because users might be negatively affected by them.
  4. Incels perceive this as a revenge fantasy: "women are jealous that they're dating AI instead of them".
  5. Forums observing incel spaces are unsure whether the backlash against AI girlfriends even exists, given their own earlier agreement.

I think this is an example of miscommunication, and of how different groups form different opinions depending on what they've seen online. Perhaps the incel-observing forums know that many incels have passed the point of no return, so AI girlfriends would help them, while the general public judges the dangers of AI girlfriends by their impact on a broader demographic, hence the broad disapproval of AI girlfriends.

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago) (1 children)

Hmmh. I'm pretty sure OpenAI and Google are very aware of this. Erotic roleplay is probably out of the question anyway, since they're American companies. And the whole field of AI is a minefield to them, starting with copyright and extending to things like this. They did their homework and deliberately designed their chatbots not to present themselves as emotive. I perceive this as a consensus in society: we need to be cautious about the effects on the human psyche. I wonder if that's going to shift at some point. I'm pretty sure more research is going to be done, and AI will become more and more prevalent anyway, so we're going to see, whether people like it or not.

And from what I hear, loneliness is on the rise. If not in Western cultures, then certainly in Japan and Korea, which I think are way ahead of us. South Korea also seems to have a problem with a certain kind of incel culture, which appears to be far worse and more widespread among young men there. I've always wanted to read more about that.

I myself like AI companions. I think it's fantasy, like reading a book, playing video games, or watching movies. We explore the dark sides of humanity there, too. We write and read murder mysteries detailing heinous acts. We kill people in video games. We process abuse and terrible things in movies. That's part of being human. Doing it with chatbots is the next level, probably more addictive and without some of the limitations of other formats. But I don't think it's bad per se.

I don't know what to say to people who enjoy being cruel and simulate that in a fantasy like this. If they're smart enough to handle it, I'm liberal enough not to look down on them for it. If being cruel is all there is to someone, they're a poor thing in my eyes. The same goes for indulging in self-hatred and self-pity. I can see how someone would end up in a situation like that, but there's so much more to life. And acting it out on (the broad concept of) women isn't right or healthy. It's beyond my perspective anyway: from where I stand, there isn't that big a difference between genders. I can talk to any of them, and ultimately their interests, needs, and wants are pretty much the same.

So if an incel uses a chatbot, I think it's just a symptom of the underlying real issue. Yes, it can reinforce them. But some people using tools for twisted purposes doesn't invalidate the other use cases. And it'd be a shame if that narrative came to dominate public perception.

I often disagree with people like Mark Zuckerberg, but I'm grateful he provides me with large language models that aren't "aligned" to their ethics. I think combating loneliness is a valid use case. Even erotic roleplay, and exploring concepts like violence in fantasy scenarios, is ultimately a valid thing to do in my eyes.

There is a good summary on Uncensored Models by Eric Hartford which I completely agree with. I hope they don't ever take that away from us.

[–] [email protected] 2 points 5 months ago* (last edited 4 months ago) (1 children)
[–] [email protected] 2 points 5 months ago (1 children)

Thank you very much for the links. I'm going to read that later. It's a pretty long article...

I'm not sure about the impending AI doom. I've refined my opinion lately. I think it'll take most of the internet from us: drown out meaningful information and flood it with low-quality click-farming text and misinformation. And the "algorithms" of TikTok, YouTube & Co. will continue to drive people apart and confine them in separate filter bubbles. I'm also not looking forward to every customer service being just an AI... I don't quite think it'll happen through loneliness, though, or in an apocalypse like in Terminator. It's going to be interesting, and inevitable in my eyes. But we'll have to see whether science can tackle hallucinations and alignment, and whether the performance of AI and LLMs keeps exploding like it has in the previous months, or stagnates soon. I think it's difficult to make good predictions without knowing that.

[–] [email protected] 0 points 5 months ago* (last edited 4 months ago) (1 children)
[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (1 children)

Hmmh. Sometimes I have difficulties understanding you. ~~[Edit: Text removed.]~~ If your keys are too small, you should consider switching to a proper computer keyboard or a (used) laptop.

Regarding the exponential growth: we have new evidence supporting the position that it'll plateau: https://youtube.com/watch?v=dDUC-LqVrPU Further research is needed.

[–] [email protected] 2 points 5 months ago* (last edited 4 months ago) (1 children)
[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (1 children)

Sure. Multimodality is impressive, and there's quite some potential there. I'm sure robots/androids are also going to happen, and all of this will have a profound impact. Maybe someday they'll become affordable for the average Joe and I can have a robot do the chores for me.

But we're not talking about the same thing. The video I linked suggests that performance might peak and plateau. That means it could very well be the case that we can't make them substantially more intelligent than, say, GPT-4. Of course we can fit AI into new things and innovate; there's quite some potential there. This is just about performance/intelligence. It's explained well in the video. (And it's just one paper about the already existing approaches to AI. It doesn't rule out science finding a way to overcome the plateau. But as of now we don't have any idea how to do that, short of pumping millions and millions of dollars into training for smaller and smaller returns in performance.)

Hmmh. I'm a bit split on bio implants. Currently that's hyped because of Elon Musk, but that field of neuroscience has been around for quite a while, making steady (yet small) progress; Elon Musk didn't contribute anything fundamentally new. And I myself think there's a limit. I mean, you can't stick a million needles into a human brain, from the surface all the way down, to hook into every brain region. I think it's mostly confined to what's accessible from the surface, and that'd be a fundamental limitation. So I doubt we're going to see crazy things like in sci-fi movies such as The Matrix or Ready Player One. But I'm not an expert on that.

With that said, I share your excitement for what's about to come. I'm sure there's lots of potential in AI and we're going to see crazy things happen. I'm a bit wary of consequences like spam and misinformation flooding the internet and society, but that's already inevitable. My biggest wish is for science to find a way to teach LLMs when to make things up and when to stick to the truth... what people call "hallucinations". I think it'd be the next big achievement if we had more control over that. Because as of now, AIs make up lots of facts that are just wrong. At least that happens to me all the time, and they also do it in tasks like summarization. And that makes them less useful for my everyday tasks.

[–] [email protected] 1 points 5 months ago* (last edited 4 months ago) (1 children)
[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (7 children)

As for the worth, that's an interesting way to look at it.

I don't think you've grasped how exponential growth works, or its opposite: logarithmic growth. Logarithmic growth is fast at first and then gets slower and slower. At the start, doubling the computing power might quadruple performance or more, but the returns shrink quickly. At some point you're, as in your example, connecting four really big supercomputers and getting a measly 1% gain over a single one, and then you have to invest trillions of dollars for the next 0.5%. That'd be logarithmic growth. We're not sure where on the curve we currently are; we've certainly seen the fast part in the last months.
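The shape of that curve is easy to see with a toy model. This is just an illustration with invented constants, not a claim about any real scaling law: if performance grows with the logarithm of compute, every doubling of compute buys the same absolute gain, so the relative improvement keeps shrinking.

```python
import math

def performance(compute):
    # Toy model: performance grows with the logarithm of compute.
    # The scale factor (10.0) is made up purely for illustration.
    return 10.0 * math.log2(compute)

# Each doubling of compute adds the same absolute amount (10.0 here),
# so the relative gain per doubling keeps shrinking as compute grows.
for c in (2, 4, 8, 1024, 2048):
    print(f"compute={c:5d}  performance={performance(c):6.1f}")
```

Going from 1024 to 2048 units of compute buys exactly the same absolute gain as going from 2 to 4, but relative to where you already are, it's a far smaller improvement.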

And scientists don't really do forecasts. They make hypotheses, test them, and justify them experimentally. So no, it's not the future being guessed at. They used a clever method to measure the performance of a technological system, and we can see those real-world measurements in their paper. Why do you say the top researchers in the world aren't "well-enough informed" individuals?

[–] [email protected] 1 points 5 months ago* (last edited 4 months ago)
[–] [email protected] 1 points 5 months ago* (last edited 4 months ago) (1 children)
[–] [email protected] 1 points 5 months ago (1 children)

https://en.wikipedia.org/wiki/Scientific_method

No. Science isn't done by majority vote. It's the objective facts that matter. And you don't get to pick your experts or perspectives; that's not scientific. It's about objective truth, and a method to find it.

We're now confusing science and futurology.

And I think scientists use the term "predict", not "forecast". There is a profound difference between a futurologist forecasting the future and science developing a model and then extrapolating from it. The Scientific American article The Truth about Scientific Models you linked sums it up pretty well: "They don’t necessarily try to predict what will happen—but they can help us understand possible futures". And: "What went wrong? Predictions are the wrong argument."

And I'd like to point out that article is written by one of my favorite scientists and science communicators, Sabine Hossenfelder. She also has a very good YouTube channel.

So yes: what about DNA, quantum brains, Moore's law... what about other people claiming things? None of that changes any facts.

[–] [email protected] 2 points 5 months ago* (last edited 4 months ago) (1 children)
[–] [email protected] 1 points 5 months ago* (last edited 5 months ago) (1 children)

You still misinterpret what science is about. We've known for centuries that human language is subjective. That's why we invented an additional, objective language concerned with logic and truth: mathematics. And that's also why the natural sciences rely so heavily on maths.

And no sound scientist ever claimed that string theory is true. It was a candidate for a "theory of everything". But it's never been proven.

And which is it: do you question objective reality? If so, I'm automatically right, because that's what I subjectively believe.

[–] [email protected] 1 points 5 months ago* (last edited 4 months ago) (1 children)
[–] [email protected] 2 points 5 months ago (2 children)

I think at this point you two are just arguing materialism vs. idealism, two opposing philosophical approaches to science. Quite off-topic for AI companionship, if you ask me. Then again, both have their own interpretation of AI companions. Materialism would argue that a human is a machine, similar to predictive text but more complex, and also that AI chatbots aren't real. Whereas under idealism, AI personas are real: your AI girlfriend is your girlfriend, AI chatbots are alive, etc. Of course that's an oversimplification, but that's the gist of where materialism vs. idealism lies.

[–] [email protected] 1 points 5 months ago* (last edited 4 months ago)
[–] [email protected] 1 points 5 months ago* (last edited 5 months ago)

Hmmh. Thanks. Yeah I think we got a bit off track, here... 😉

I kinda dislike it when arguments end in "is there objective reality?". That's the last move left to remove any basis for conversation, at least when talking about actual things or facts.
