this post was submitted on 23 Jun 2023
4 points (100.0% liked)

Singularity | Artificial Intelligence (AI), Technology & Futurology


About:

This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. Basically a futurology sublemmy centered on AI, but not limited to AI only.

Rules:
  1. Posts that break the rules, and whose posters don't bring them into compliance after being told which rules they break, will be deleted no matter how much engagement they got, and then reposted by me in a way that follows the rules. I'll wait a maximum of 2 days for the poster to comply before doing this.
  2. No Low-quality/Wildly Speculative Posts.
  3. Keep posts on topic.
  4. Don't make posts with link/s to paywalled articles as their main focus.
  5. No posts linking to reddit posts.
  6. Memes are fine as long as they are high quality and/or can lead to serious on-topic discussions. If we end up with too many memes, we'll create a meme-specific singularity sublemmy.
  7. Titles must include information on how old the source is in this format dd.mm.yyyy (ex. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts with link/s to tweets as their main focus. Melon decided that the content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a Twitter account just so they can see some news.
  11. No AI-generated images/videos unless their role is to represent new advancements in generative technology that are no older than 1 month.
  12. If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".

Related sublemmies:

[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub currently rely VERY heavily on info from r/singularity and other subreddits on Reddit. I'm planning at some point to make a list of sites that write/aggregate the kind of news this sublemmy is about, so we can get news faster and not rely on Reddit as much. If you know any good sites, please DM me.

founded 1 year ago

According to António Pombeiro, Deputy Secretary-General of the Internal Administration, who spoke to journalists on 20 June in Porto, "if the pilot project goes well, we are prepared to start using the system to answer calls as of 2025."

Currently, he said, we are dealing with "a very recent technology" and there is a "need to do many tests"; he admitted that for now we are "very much in the unknown", so the operation of the pilot project will be key.

"In certain situations, we have waiting periods due to the large volume of calls. This happens when there are incidents that attract a lot of publicity, with many people watching what is happening, and everyone takes the initiative to call 112," said António Pombeiro, giving the example of urban fires.

[–] [email protected] 0 points 1 year ago (1 children)

Are these systems able to recognize when they don't know something, or when to hand off to a human? All the focused training data in the world won't help if the machine still confabulates answers, and as far as I know that's a flaw in all the models so far.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

Absolutely, 100%. We aren't just plugging in an LLM and letting it handle calls willy-nilly. We're telling it, like a robot, exactly what to do, and the LLM only comes into play when it's interpreting the intent of the person on the phone within the conversation they're having.

So, for instance, as we develop this for our end users, we're building out functionality in pieces. For each piece we know we can't handle (yet), we "escalate" the call to a real person at the call center. As we develop more, these escalations get fewer; however, there are many cases that will always escalate. For instance, if the user says "let me speak to a person" or something to that effect, we escalate right away.

For the things the LLM can actually do against that user's data, those are hard-coded actions we control; the LLM didn't come up with them and didn't decide to run them, we did. It isn't Skynet, and it isn't close either.

The LLM's actual functional use is quite limited to just understanding the intent of the user's speech, that's all. That's how it's being used all over (to great results).
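The architecture the commenter describes can be sketched roughly as follows. This is a hypothetical illustration (all names and keyword rules are made up, and the LLM call is faked with keyword matching): the model only maps free-form speech to one of a fixed set of intents, every action is a hard-coded handler, and anything unrecognized, or an explicit request for a person, escalates to a human agent.

```python
# Hypothetical sketch: LLM as intent classifier only, with hard-coded
# actions and escalation as the default for anything unrecognized.

ESCALATE = "escalate_to_human"

# Hard-coded actions we control -- the model never invents these.
ACTIONS = {
    "confirm_order": lambda: "Your order details have been confirmed.",
    "check_status": lambda: "Your order ships on the scheduled date.",
}

def classify_intent(utterance: str) -> str:
    """Stand-in for the LLM call: map free-form speech to a known intent.

    A real system would prompt a model with the allowed intent labels
    and parse its answer; here we fake it with keyword matching.
    """
    text = utterance.lower()
    if "person" in text or "human" in text or "agent" in text:
        return ESCALATE  # explicit request for a person
    if "confirm" in text or "correct" in text:
        return "confirm_order"
    if "status" in text or "where is" in text:
        return "check_status"
    return ESCALATE  # unknown intent: never guess, always hand off

def handle_call(utterance: str) -> str:
    intent = classify_intent(utterance)
    if intent == ESCALATE:
        return "Transferring you to a human agent."
    return ACTIONS[intent]()

print(handle_call("yes, that's correct"))        # confirm_order path
print(handle_call("let me speak to a person"))   # explicit escalation
print(handle_call("I want to file a complaint")) # unknown -> escalation
```

The key design choice is that escalation is the fallthrough case, so a confabulated or unrecognized intent can never trigger an action on its own.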

[–] [email protected] 2 points 1 year ago (1 children)

I’m not calling customer service unless I need a human, so the automated assistants are a huge waste of my time.

The biggest problem I have with these systems is when the companies using them force you to use them, especially on the phone. They can have big accessibility barriers, and it's really frustrating when they don't have a "let me talk to a human" option. More and more companies are using these things without offering that, and it's a genuinely horrible experience for me, every time.

[–] [email protected] 1 points 1 year ago (3 children)

That's great, but you're not everyone, and you're not fielding everyone's calls either.

I'm in Healthcare. A massive chunk of our calls are simply "you have an order expected on (date), and shipping to (your address), is this information correct? Yes? Awesome, kthxbye".

That's it. By using automatic dialers for that kind of thing, we're freeing up a ton of time for real people to do the more difficult, hands-on customer service.
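The confirmation call described above is essentially a fixed script with a yes/no branch. A minimal sketch (function names and reply lists are hypothetical, not from any real system): a clear "yes" ends the call, and anything else, including an unclear answer, routes to a human.

```python
# Hypothetical sketch of the scripted outbound confirmation call:
# the prompt is a template, and only an unambiguous reply is automated.

def confirmation_prompt(order_date: str, address: str) -> str:
    return (f"You have an order expected on {order_date}, "
            f"shipping to {address}. Is this information correct?")

def route_reply(reply: str) -> str:
    answer = reply.strip().lower()
    if answer in ("yes", "yep", "correct"):
        return "confirmed"   # call ends here, no human needed
    if answer in ("no", "nope", "incorrect"):
        return "transfer"    # a real person fixes the details
    return "transfer"        # anything unclear also goes to a human

print(confirmation_prompt("24.06.2023", "123 Main St"))
print(route_reply("Yes"))
print(route_reply("um, maybe?"))
```

Because the happy path is the overwhelming majority of these calls, automating just that branch frees up agents while keeping every ambiguous case human-handled.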

I'm gonna say it, you're the same person my great grandfather was, complaining about ATMs because they were over complicated.

[–] [email protected] 3 points 1 year ago (1 children)

I'm gonna say it, you're the same person my great grandfather was, complaining about ATMs because they were over complicated.

Concerning support calls: if my errand can be handled by bots, I can usually just get it done on the website. When I actually pick up the phone, my issue is unfailingly enough of an edge case that the automated bots can't handle it, and they just end up wasting my time.

[–] [email protected] 2 points 1 year ago

Look at the date: 2025. Those issues will very probably be mostly resolved by then. I'm not saying those things aren't real problems now, just pointing out that we're talking about the pretty damn far future in AI terms.

[–] [email protected] 1 points 1 year ago

Yeah, I do understand the frustration. It's the same as with ATMs when they were new: if you don't have the background knowledge, navigating menus and pressing buttons feels strange. That tech soon became ubiquitous, though, and for most people today ATMs are basically obsolete because phone apps are far more convenient.

I think we'll see the same thing happen: everyone will get used to talking to AI, and the systems will be refined. It won't be long before you don't even need to call up to get them to change your account, because you'll just tell your AI to sort it out and it'll handle all the communication.

[–] [email protected] 0 points 1 year ago

I’m not opposed to technology or progress. If the task I’m calling about can be easily automated, then it should be available to do online, and if it’s online, I’ll do it there every time. But companies are increasingly forcing phone calls and horrible automated systems on people to discourage them from doing things the company doesn’t want them to do (requesting a refund, cancelling an account, making a complaint), in the hope that the experience will be so discouraging that they’ll give up. It’s literally a dark pattern.

Besides which, if the problems are that straightforward and easy to automate, machine learning is entirely unnecessary for dealing with them.

I think you’re being quite naive about how developments in this area could make our lives significantly worse.