this post was submitted on 07 Dec 2023
538 points (87.7% liked)
Asklemmy
I have to say no, I can't.
The best decision I can make is a guess, based on logic I've derived from my own experiences and then compared and contrasted against the current input.
I will say that "current input" for humans seems to be broader than what is achievable for AI, and the underlying mechanism that lets us assemble our training set (read: past experiences) into useful and usable models appears to be more robust than current tech. But to the best of my ability to explain it, this appears to be a comparable operation to what is happening with the current iterations of LLMs/AI.
Ninjaedit: spelling
If you can't make logical decisions then how are you a comp sci major?
Seriously though, the point is that when making decisions you as a human understand a lot of the ramifications of them and can use your own logic to make the best decision you can. You are able to make much more flexible decisions and exercise caution when you're unsure. This is actual intelligence at work.
A language processing system has to have its prompt framed in the right way, it has to have knowledge about the topic in its database, and it only responds in the ways it was built to. It doesn't understand the ramifications of what it puts out.
The two "systems" are vastly different in both their capabilities and their output. Even with image processing, AI absolutely sucks at driving a car, for instance, whereas most humans can do it safely with little thought.
I don't think that fully encapsulates a counterpoint, but I think it has the beginnings of a solid counterpoint to the argument I've laid out above (again, not one I actually devised, just one that really put me on my heels).
The ability to recognize when it's out of its depth does not appear to be something modern "AI" can handle.
As I chew on it, I can't help but wonder what it would take to have AI recognize that. It doesn't feel like it should be difficult to have a series of nodes along the information processing matrix that track "confidence levels". Though, I suppose that's kind of what is happening when the creators of these projects try to keep them from processing controversial topics. It's my understanding those instances act as something of a short circuit: when confidence "that I'm allowed to talk about this" drops below a certain level, the AI will spit out a canned response instead of actually processing the input against the model.
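To make the "short circuit" idea concrete, here's a toy sketch of a confidence gate. Everything here is hypothetical (the classifier, the threshold, the function names are all made up for illustration); real guardrails are far more sophisticated, but the control flow is the same: score the prompt, and if confidence falls below a threshold, return a canned response rather than running the model.

```python
CANNED_RESPONSE = "Sorry, I can't discuss that topic."
CONFIDENCE_THRESHOLD = 0.5

def topic_confidence(prompt: str) -> float:
    """Stand-in for a learned classifier: returns a 0.0-1.0 score of how
    confident the system is that it's allowed to talk about this prompt."""
    blocked_terms = {"controversial", "politics"}
    hits = sum(term in prompt.lower() for term in blocked_terms)
    return max(0.0, 1.0 - 0.5 * hits)

def generate(prompt: str) -> str:
    """Stand-in for actually running the model on the prompt."""
    return f"Model output for: {prompt}"

def guarded_generate(prompt: str) -> str:
    # The "short circuit": low confidence skips the model entirely.
    if topic_confidence(prompt) < CONFIDENCE_THRESHOLD:
        return CANNED_RESPONSE
    return generate(prompt)

print(guarded_generate("Tell me about gardening"))
print(guarded_generate("Discuss controversial politics"))
```

The interesting (and hard) part is the classifier itself; the gate around it is trivial.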
The above is intended as more of a brain dump than a coherent argument. You've given me something to chew on, and for that I thank you!
Well, it's an online forum and I'm responding while getting dressed and traveling to an appointment, so concise responses are what you're gonna get. In a way it's interesting that I can multitask all of these complex tasks reasonably effortlessly, something else an existing AI cannot do.