this post was submitted on 29 Jan 2025
37 points (93.0% liked)

Asklemmy

What are your thoughts on Generative Machine Learning models? Do you like them? Why? What future do you see for this technology?

What about non-generative uses for these neural networks? Do you know of any field that could use such pattern recognition technology?

I want to get a feel for the general thoughts of Lemmy users on this technology.

[–] [email protected] 3 points 23 hours ago* (last edited 3 hours ago)
  • I don’t think it’s useful for a lot of what it’s being promoted for—its pushers are exploiting the common conception of software as a process whose behavior is rigidly constrained and can be trusted to operate within those constraints, but this isn’t generally true for machine learning (a toy illustration of this follows the list).

  • I think it sheds some new light on human brain functioning, but only reproduces a specific aspect of the brain—namely, the salience network (i.e., the part of our brain that builds a predictive model of our environment and alerts us when the unexpected happens). This can be useful for picking up on subtle correlations our conscious brains would miss (see the anomaly-detection sketch after the list)—but those who think it can be incrementally enhanced into reproducing the entire brain (or even the part of the brain we would properly call consciousness) are mistaken.

  • Building on the above, I think generative models imitate the part of our subconscious that tries to “fill in the blanks” when we see or hear something ambiguous (see the fill-in-the-blank sketch after the list), not the part that deliberately creates meaningful things from scratch. So I don’t think it’s a real threat to the creative professions. I think they should be prevented from generating works that would be considered infringing if they were produced by humans, but not from training on copyrighted works that a human would be permitted to see or hear and be affected by.

  • I think the parties claiming that AI needs to be prevented from falling into “the wrong hands” are themselves the most likely parties to abuse it. I think it’s safest when it’s open, accessible, and unconcentrated.
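
On the first point, here’s a toy illustration (my own sketch, not taken from any real system) of the difference between software whose behavior is pinned down by explicit rules and a generative model whose output is sampled from a distribution:

```python
import random

# Conventional software: the same input always produces the same output,
# and the check below is guaranteed to hold every time it runs.
def parse_port(text: str) -> int:
    port = int(text)
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port

# Toy stand-in for a generative model: the output is sampled from a
# probability distribution, so repeated calls with the same prompt can
# disagree, and nothing structurally rules out a wrong or unexpected answer.
def toy_generate(prompt: str) -> str:
    candidates = ["8080", "eighty-eighty", "80 80", "whatever port you like"]
    weights = [0.7, 0.15, 0.1, 0.05]
    return random.choices(candidates, weights=weights, k=1)[0]

print(parse_port("8080"))  # always 8080
print([toy_generate("Which port should I use?") for _ in range(5)])
```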
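
On the second point, a minimal sketch of the non-generative, “flag the unexpected” kind of use; this assumes scikit-learn and made-up data, with IsolationForest standing in for any learned anomaly detector:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
routine = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # observations that fit "normal"
surprises = rng.normal(loc=6.0, scale=0.5, size=(5, 2))   # observations that don't

detector = IsolationForest(random_state=0).fit(routine)   # learn what "expected" looks like
print(detector.predict(surprises))                        # -1 means "unexpected, pay attention"
```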
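
And on the third point, “fill in the blanks” is literally what masked language models are trained to do. A minimal sketch, assuming the Hugging Face transformers library is installed and using distilbert-base-uncased as a small example model:

```python
from transformers import pipeline

# A masked language model predicts plausible completions for a blank;
# it fills in gaps, it doesn't decide that something is worth saying.
fill = pipeline("fill-mask", model="distilbert-base-uncased")

for guess in fill("The model just fills in the [MASK]."):
    print(f"{guess['token_str']!r}  (score {guess['score']:.3f})")
```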