this post was submitted on 15 Sep 2023
466 points (97.2% liked)
Technology
I really hope public opinion on AI starts to change. LLMs aren't going to make anyone's life easier, except in that they take jobs away once the corporate world decides they're in a "good-enough" state. Desensitizing people to this kind of stupid output is just one step on that trail.
The whole point is just to save the corporate world money. There will never, ever be a content advantage over a human author.
The thing is, LLMs are extremely useful at aiding humans. I use one all the time at work and it has made me faster at my job, but left unchecked they do really stupid shit.
I agree they can be useful (I've found intelligent code snippet autocompletion to be great), but it's really important that the humans using the tool are very skilled and aware of the limitations of AI.
Eg, my usage generates only very, very small amounts of code (usually a few lines). I have to read those lines very carefully to make sure they are correct. It's never generating something innovative; it simply guesses what I was going to type anyway. So it only saves me time spent typing, and the AI is by no means in charge of logic. It's also wrong a lot of the time. Anyone who lets AI generate a substantial amount of code, or lets it generate code they don't understand thoroughly, is both a fool and a danger.
It does save me time, especially on boilerplate and common constructs, but it's certainly not revolutionary and it's far too inaccurate to do the kinds of things non programmers tend to think AI can do.
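To give a concrete (made-up) example of the scale I mean: the tool fills in a few obvious lines after you type the signature, and you still have to spot the edge cases yourself. Everything here is hypothetical, just the flavor of suggestion I'm describing:

```python
# After typing the signature and docstring, an autocompleter will
# typically suggest a one-line body like this:
def invert_mapping(d):
    """Return a dict mapping each value back to its key."""
    return {v: k for k, v in d.items()}

print(invert_mapping({"a": 1, "b": 2}))  # {1: 'a', 2: 'b'}
```

Looks fine at a glance, but note the subtle trap a careful reviewer has to catch: if two keys share a value, one silently disappears (`{"x": 1, "y": 1}` becomes `{1: "y"}`). That's exactly the kind of quietly-wrong code you get if you accept suggestions without reading them.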
It's already made my life much easier.
The technology is amazing.
It's just there's a lot of stupid people using it stupidly, and people whose job it is to write happen to really like writing articles about its failures.
There's a lot more going on in how it is being used and improving than what you are going to see unless you are actually using it yourself daily and following research papers on it.
Don't buy into the anti-hype, as it's misleading to the point of bordering on misinformation.
I'm going to fight the machines for the right to keep slaving away myself
And when I'm done, capitalism will give me an off day as a treat!
You're missing the point. If you don't have a job to "slave away" at, you don't have the money to afford food and shelter. Any changes to that situation, if they ever come, are going to lag far behind whatever events cause a mass explosion of unemployment.
It's not about licking a boot, it's that we don't want to let the boot just use something that should be a net good as extra weight as they step on us.
I am not going to purposefully waste human life on tasks that machines could perform or help us be faster at just because late capitalism doesn't let me, the worker, reap the value from them.
It removes human labor
On a bigger scale we had the loom, the printing press, the steam engine, the computer. Imagine if we'd refused them.
I can't see us getting ensnared in some new dark age propelled by an "I need to keep my job" status quo just because we found ourselves with a moronic economic system that makes innovations bad news for the workers they replace
If it takes AI taking away our livelihoods to get a chance to rework this failing doctrine so be it
I'm not talking communism; I'm just hoping for an organic response to it, likely a UBI
As someone who works in content marketing, this is already untrue at the current quality of LLMs. It still requires a LOT of human oversight, which obviously it was not given in this example, but a good writer paired with knowledgeable use of LLMs is already significantly better than a good content writer alone.
One example is writing outside of a person's subject expertise at a relatively basic level. This used to take hours or days of entirely self-directed research on a given topic, even if the ultimate article was going to be written for beginners and therefore in broad strokes. With diligent fact-checking and ChatGPT alone, the whole process, including final copy, takes maybe 4 hours.
It's also an enormously useful research tool. Rather than poring over research journals, you can ask LLMs with academic plug-ins to give a list of studies that fit very specific criteria and link to full texts. Sometimes it misfires, of course, hence the need for a good writer still, but on average this can cut hours from journalistic and review pieces without harming (often improving) quality.
All the time writers save by having AI do legwork is then time they can instead spend improving the actual prose and content of an article, post, whatever it is. The folks I know who were hired as writers because they love writing and have incredible commitment to quality are actually happier now using AI and being more "productive" because it deals mostly with the shittiest parts of writing to a deadline and leaves the rest to the human.
I'm talking about the future state. The goal is clearly to avoid the need for human oversight altogether. The purpose of that is saving some rich people more money. I also disagree that LLMs improve the output of good writers, but even if they did, the cost to society is high.
I'd much rather just have the human author, and I just hope that saying "we don't use AI" becomes a plus for PR due to shifting public opinion.
No, it's not the 'goal'.
Somehow when it comes to AI it's humans who have the binary thinking.
It's not going to be "either/or" anytime soon.
Collaboration between humans and ML is going to be the paradigm for the foreseeable future.
The hundreds of clearly AI-written help articles with bad or useless info every time I've tried to look something up in the last few months say otherwise.
Because the internet was so clear of junk and spam before LLMs were released?
There once was a time, long long ago, where the interwebs had good information on it. It was even easier to find then, before the googles went hard.
But really, I have noticed a massive increase in AI junk writing popping up first in anything I try to look up.
If you want to go back to the 90s or early 2000s, sure. But 4 years ago the internet was full of blogspam, clickbait articles, and fake news. LLMs have not increased that perceptibly to me; the first 10 results on Google were often crap 4 years ago and they're often crap now
Yes, some of us are old and still remember the hope and utility.
I will agree that things have been on the downslide for a while, but maybe it's just the way Google now works, or that AI articles are free, but I get a ton of them for any "how to" or "walkthrough" type search. At least if I look up "how to make taco sauce" the article will tell me how, after the mandatory life story and other bullshit.