this post was submitted on 17 May 2024
Technology
Honestly I feel people are using them completely wrong.
Their real power is their ability to understand language and context.
Turning natural language input into commands that can be executed by a traditional software system is a huge deal.
Microsoft released an AI-powered autocomplete text box and it's genius.
Currently you have to type an exact text match in an autocomplete box. So if you type cats but the item is called pets, you'll get no results. Now the AI can find context-based matches in the autocomplete list.
This is their real power.
Also they're amazing at generating non factual based things. Stories, poems etc.
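The context-aware autocomplete described above can be sketched as matching by meaning rather than by substring. This is a hypothetical illustration, not Microsoft's implementation: the tiny hand-written vectors stand in for a real embedding model, and all names here are made up.

```python
import math

# Toy stand-in for a real embedding model. In practice these vectors
# would come from a trained model; here they are hand-picked so that
# "cats" and "pets" are close and "cars" is far away.
TOY_EMBEDDINGS = {
    "cats": [0.9, 0.8, 0.1],
    "pets": [0.8, 0.9, 0.2],
    "cars": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_complete(query, items, threshold=0.8):
    """Return items whose embedding is close to the query's,
    even when there is no substring overlap at all."""
    q = TOY_EMBEDDINGS[query]
    return [item for item in items
            if cosine(q, TOY_EMBEDDINGS[item]) >= threshold]

# An exact matcher finds nothing for "cats" in ["pets", "cars"];
# the embedding comparison surfaces "pets" because the vectors are close.
print(semantic_complete("cats", ["pets", "cars"]))  # ['pets']
```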
...they do exactly none of that.
No, but they approximate it. Which is fine for most use cases the person you're responding to described.
They're really, really bad at context. The main failure case isn't making things up; it's that text or imagery in one part of the result doesn't work with text or imagery in another part, because they can't even maintain context across their own replies.
See images with three hands, bow strings that mysteriously vanish, etc.
New models are really good at context, and the amount of input they can take has exploded (fairly) recently... So you can give them whole datasets or books as context and ask questions about them.
They do it much better than anything you can hard code currently.
Google added context search to Gmail and it's infuriating. I'm looking for an exact phrase that I even put in quotes but Gmail returns a long list of emails that are vaguely related to the search word.
That is indeed a poor use. Searching traditionally first and falling back to it would make way more sense.
It shouldn't even fall back automatically. If I'm looking for an exact phrase and it doesn't exist, the result should be "nothing found", so that I can search somewhere else for the information. A prompt like "Nothing found. Look for related information?" would be useful.
But returning a list of related information when I need an exact result is worse than not having search at all.
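The behaviour being asked for is easy to sketch: exact search first, related-term fallback only on explicit request. A hypothetical illustration (not how Gmail actually works), with word-level matching standing in for the "related information" step:

```python
def search(emails, phrase, allow_related=False):
    """Exact-phrase search; loose matching only when explicitly requested."""
    exact = [e for e in emails if phrase in e]
    if exact:
        return exact
    if not allow_related:
        return []  # honest "nothing found" -- the user can look elsewhere
    # Opt-in fallback: match any individual word from the phrase.
    words = phrase.lower().split()
    return [e for e in emails if any(w in e.lower() for w in words)]

inbox = ["invoice for pet insurance", "cat adoption papers"]
print(search(inbox, "cat adoption"))                      # exact hit
print(search(inbox, "dog adoption"))                      # [] -- nothing found
print(search(inbox, "dog adoption", allow_related=True))  # related fallback
```

The point of the `allow_related` flag is that the fallback is a deliberate user choice, never a silent substitution.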
Searching with synonym matching is decades old at this point. I worked on it as an undergrad in the early 2000s and it wasn't new then, just complicated. Google's version improved on other search algorithms for a long time, and then they trashed it by letting AI take over.
Google's algorithm has pretty much always used AI techniques.
It doesn't have to be a synonym. That's just an example.
Typing diabetes and getting medical services as a result wouldn't be possible with that technique unless you had a database of every disease to search against for all queries.
The point is that with AI you don't need a giant lookup of linked items, because those associations are already trained into the model.
Yes, synonym searching doesn't strictly mean the thesaurus. There are a lot of different ways to connect related terms and some variation in how they are handled from one system to the next. Letting machine learning into the mix is a very new step in a process that Library and Information Sci has been working on for decades.
Exactly. The big problem with LLMs is that they're so good at mimicking understanding that people forget that they don't actually have understanding of anything beyond language itself.
The thing they excel at, and should be used for, is exactly what you say - a natural language interface between humans and software.
Like in your example, an LLM doesn't know what a cat is, but it knows what words describe a cat based on training data - and for a search engine, that's all you need.
That's called "fuzzy" matching, it's existed for a long, long time. We didn't need "AI" to do that.
No it's not.
That allows for mistyping etc.; it doesn't allow context-based searching at all. Cat doesn't fuzzy-match to pet — there is no string similarity.
Also it is an AI technique itself.
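For contrast, here's what classic fuzzy matching actually does — a quick sketch using Python's standard-library difflib, which scores character-level similarity:

```python
from difflib import SequenceMatcher, get_close_matches

# Fuzzy matching tolerates typos, because the strings share most characters:
print(get_close_matches("catz", ["cats", "pets", "cars"]))  # ['cats']

# But "cat" and "pet" share almost no characters, so there is no match --
# character similarity carries no semantic link between the words.
print(round(SequenceMatcher(None, "cat", "pet").ratio(), 2))  # 0.33
print(get_close_matches("cat", ["pet", "dog"]))               # []
```

`get_close_matches` uses a similarity cutoff (0.6 by default), so it catches misspellings but can never connect a word to its meaning.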
Bullshit, fuzzy matching is a lot older than this AI LLM.
I didn't say LLM. AI has existed since the 50s/60s. Fuzzy matching is an AI technique.
That's why I only use Perplexity. ChatGPT can't give me sources unless I pay, so I can't trust the information it gives me, and it also hallucinated a lot when coding; it was faster to search the official documentation than to correct and debug code "generated" by ChatGPT.
I use Perplexity + SearXNG, so I can search a lot faster and cite sources, and it also makes summaries of your search, so it saves me time when writing introductions and so on.
It sometimes hallucinates too and cites weird sources, but it's faster for me to correct it and search for better sources given the context and extra ideas. In summary, as long as you're correcting the prompts and searching beyond Perplexity yourself, you already have something useful.
BTW, I try not to use it a lot, but it's way better for my workflow.