this post was submitted on 14 Feb 2024
675 points (95.5% liked)
I can't contest the first point because I'm not a Firefox junkie, so I won't.
What I will contest is the claim that the existence of AI, or deep learning, or LLMs, or neural networks, or matrix multiplication, or whatever type of shit they come up with next, isn't problematic. I kind of think it is, inherently; I think its existence is not great. Mostly because it obfuscates, even internally, the processing of data: it obfuscates the inputs from the outputs, the works from the results. You can do that with regular programming just fine, just as you can do most of the shit that AI does with normal programming, like that guy who made a program that calculates the prices of Japanese baked goods and also recognizes cancer, right? But I think AI is a step further than that; it obfuscates it more. I'm kind of skeptical of its broad implementation.
For trivial use cases it's kind of fine, but some use cases we might consider trivial are otherwise kind of fucked, maybe. An AI summary of an article? I dunno if that's good. We might think, oh, this is kind of trivial because the user shouldn't really trust what the AI says, but, as with all technology, what if the user is an idiot and a moron? They might just use it to read the article for them, and then spout off whatever talking points and headlines it gives them. I can't really think of a scenario where that's actually a good thing, and it's highly possible. It might make an article easier to parse, but I don't think that's actually a good or useful tool; it just presents a kind of illusion of utility, especially because it's redundant (we could just write a summary and put it at the top of the article, like every article on the face of the earth) and it's totally beyond our control, at least in most circumstances.
Also, the Mozilla Foundation is a nonprofit, but the Mozilla Corporation is not. The Foundation owns the Corporation, which manages Firefox development. So depending on which one you're referring to, it might be a nonprofit, or it might not be. In any case, the nonprofit is a step removed from Firefox development, which I think is an important side note, even if it's not actually that relevant to whatever conversations about AI there might be.
Perhaps, comically, it is the perfect representation of the world as it is now: “knowledge” in people’s brains is created by consuming whatever source aligns with the beliefs that they think are theirs. No source or facts are required. Only the interpretation matters.