this post was submitted on 13 Jul 2023

Technology

I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work is copyright infringement, but I think what people will need to consider is that all that's going to do is keep AI out of the hands of regular people and place it specifically in the hands of people and organizations who are wealthy and powerful enough to train it for their own use.

If this isn't actually what you want, then what's your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it's likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we're only beginning to explore) in the hands of the sorts of people who are the least likely to want to do anything good with it.

I know I'm posting this in a hostile space, and I'm sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training, and that's fine (the jury is literally still out on that). What I'm interested in is what your end game is. How do you expect things to actually work out if you get the laws that you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don't is the absolute worst possibility.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

If what you're going to give me is an oversimplified analogy that puts too much faith in what AI devs are trying to sell and not enough faith in what a human brain is doing, then don't bother because I will dismiss it as a fairy tale.

I'm curious, how do you feel about global warming? Do you pick and choose the scientists you listen to? You know that the people who develop these AIs are computer scientists and researchers, right?

If you're a global warming denier, at least you're consistent. But if out of one side of your mouth you're calling what AI researchers talk about a "fairy tale", and out of the other side of your mouth you're criticizing other people for ignoring science when it suits them, then maybe you need to take time for introspection.

You can stop reading here. The rest of this is for people who are actually curious, since you've clearly made up your mind. Until you've learned a bit about how these systems actually work, though, you have absolutely no business opining about how policies ought to apply to them, because your views are rooted in misconceptions.

In any case, curious folks, I'm sure there are fancy flowcharts around about how data flows through the human brain as well. The human brain is arranged in groups of neurons that feed back into each other, whereas an AI neural network is arranged in more ordered layers. Their structures aren't precisely the same. Notably, an AI (at least, as they are commonly structured right now) doesn't experience "time" per se, because once it's been trained its neural connections don't change anymore. As it turns out, consciousness isn't necessary for learning and reasoning, contrary to what the parent comment seems to think.

Human brains and neural networks are similar in the way that I explained in my original comment -- neither of them stores a database, neither of them does statistical analysis or takes averages, and both learn concepts by making modifications to their neural connections (a human does this all the time, whereas an AI does this only while it's being trained). The actual neural network in the diagram that OP googled and pasted here lives in the "feed-forward" boxes. That's where the actual reasoning and learning is being done. As this particular diagram shows the entire system rather than the layers of the feed-forward network, it's not even the right diagram to be comparing to the human brain (although again, the structures wouldn't match up exactly).
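To make the "modifying connections, not storing a database" point concrete, here's a deliberately tiny sketch (my own toy example, not any real AI system): a single artificial neuron with one weight that learns the concept "double the input" by gradient descent. After training, the training examples are discarded; only the adjusted connection strength remains, and it generalizes to inputs it never saw.

```python
import random

random.seed(0)

weight = random.random()                   # the "neural connection", initially random
data = [(x, 2 * x) for x in range(1, 6)]   # training examples for the concept y = 2x

# Training: repeatedly nudge the weight to reduce the prediction error.
# This weight update is the entire "learning" process here.
for _ in range(100):
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        weight -= 0.01 * error * x         # gradient-descent step on squared error

# The weight now encodes the concept; the examples themselves are gone.
print(round(weight, 3))                    # converges to ~2.0
print(round(weight * 7, 3))                # generalizes to an input never seen in training
```

Real networks have millions or billions of such weights arranged in layers, but the principle is the same: training adjusts connection strengths, and once training stops (as noted above), those connections are frozen.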