this post was submitted on 18 Aug 2023
35 points (100.0% liked)

Technology


The tech giant is evaluating tools that would use artificial intelligence to perform tasks that some of its researchers have said should be avoided.

Google’s A.I. safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from A.I. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.

top 10 comments
[–] [email protected] 24 points 1 year ago (2 children)

Why in the pissity-fuck would I take life advice from Google, Google applications, or an AI trained by Google?

That is so far outside what I find reasonable.

[–] [email protected] 7 points 1 year ago (1 children)

There are a lot of dumdums out there who will fall for this shit.

[–] [email protected] 3 points 1 year ago

Get rich quick! Buy my $600 box set and you'll be on your way to financial independence.

[–] [email protected] 5 points 1 year ago

I think this is more of a precaution. They aren't building a service specifically for this, but probably an update to Bard in case a user asks those kinds of questions. I think it's reasonable, but it has to be released in the most curated and well-developed state possible to avoid a repeat of what already happened (that suicide hotline that suddenly went full AI, then had to backtrack because it responded badly).

[–] [email protected] 6 points 1 year ago

Most of this technology will of course be acontextual and lack any kind of critique of capitalism. I watch people I know make all kinds of "wise" decisions based on career expectations, ambition, and definitions of "success," and they're actually really miserable yet can't seem to recognise why. I'm unconvinced AI developed by capitalist companies would provide healthier perspectives. Heck, even humans fall short in roles like counselling when they lack class consciousness or a critique of neoliberalism. AI probably doesn't stand much more of a chance.

[–] [email protected] 4 points 1 year ago (2 children)

I actually predicted this years ago. This technology will improve with time, and people will use it the same way they use movie or book recommendations today. After a while people will get so used to it that they'll just let AI run their lives through career, lifestyle, fashion, relationship, and other recommendations. Eventually Google will just automatically book a restaurant dinner for you when its AI decides it's optimal for you to go out, rent you a new apartment when it decides it's better for you than your current one, and find you a new job when it decides it's time for a change. People will turn into robots with external decision centres. And the AI won't even be that smart, just well trained.

[–] [email protected] 3 points 1 year ago (1 children)

I see what you mean. In a way, lifestyle and fashion choices are already partially governed by AI, in the sense that Instagram and TikTok recommendation algorithms influence what the user perceives as being trendy. I don't know if we'll get to the point where people are literally letting an AI tell them what to do, but I think AI will only get more and more influential in our lives in subtle ways.

[–] [email protected] 2 points 1 year ago

Of course I'm just playing "what if," but I really can see this happening. Imagine: you get out of work, get into an autonomous car, and Alexa tells you, "I'm taking you to your new apartment. I arranged for your things to be moved there today and updated your home address with your bank, Amazon, and the municipal registry. According to my analysis your commute time will be 10% shorter, you will save $100 per month on average, and the style matches your preferences 5% better. Overall you will be 12% happier there." You get there, and you actually do like it, and you are 12% happier. Most people would just go with it. We would be the rebels hiding in the forests to avoid the algorithm and live our own lives. Sometimes we would be 12% less happy than the human-robots, but at least we would think for ourselves...

[–] [email protected] 1 points 1 year ago

People's reactions to new technology are famously hard to predict, but I guess it's worth considering.

AI is getting good at white-collar tasks way faster than blue-collar ones, too, so this might be how it looks at work. An app tells you to build or fix something with no context, you send back pictures or any comments and concerns, and then you get assigned the next task. Nobody really knows who they work for or why, exactly.

[–] [email protected] 2 points 1 year ago

If it goes beyond recommending some real mental health resources, this is so dangerous.

That, or I can imagine them cramming in stuff like "check out this influencer's fave bestselling self-care items!"