news
Welcome to c/news! Please read the Hexbear Code of Conduct and remember... we're all comrades here.
Rules:
-- PLEASE KEEP POST TITLES INFORMATIVE --
-- Overly editorialized titles, particularly if they link to opinion pieces, may get your post removed. --
-- All posts must include a link to their source. Screenshots are fine IF you include the link in the post body. --
-- If you are citing a twitter post as news, please include not just the twitter.com link but also a nitter.net link (or another Nitter instance). There is also a Firefox extension that can redirect Twitter links to a Nitter instance: https://addons.mozilla.org/en-US/firefox/addon/libredirect/ or archive them as you would any other reactionary source using e.g. https://archive.today/ . Twitter screenshots still need to be sourced or they will be removed --
-- Mass tagging comm moderators across multiple posts like a broken markov chain bot will result in a comm ban--
-- Repeated consecutive posting of reactionary sources, fake news, misleading / outdated news, false alarms over ghoul deaths, and/or shitposts will result in a comm ban.--
-- Posts dealing with disturbing content that neglect to use content warnings or NSFW tags will be removed until brought into compliance. Users who are repeatedly reported for failing to use content warnings or NSFW tags when commenting on or posting disturbing content will be banned. --
-- Using April 1st as an excuse to post fake headlines, like the resurrection of Kissinger while he is still fortunately dead, will result in the poster being thrown in the gamer gulag and sentenced to play and beat trashy mobile games like 'Raid: Shadow Legends' in order to be rehabilitated back into general society. --
hey maybe don't post that shit then
I don't see why. I don't care if the text is formatted by LLMs, it's the content that matters. This whole OMG LLM WAS USED TO WRITE SOMETHING hysteria is getting really old and tired.
If people can't be bothered to use actual writers then it's a pretty short hop to not being bothered to provide any real analysis. I'm always going to be skeptical of whatever those models shit out.
That doesn't follow at all. Styling has nothing to do with content. If you have an issue with what the article says, or the numbers used then feel free to point that out.
Not only does it follow, there's a causal relationship between the two. It's the same reason Boeing is experiencing design issues at the same time as software control issues and basic manufacturing QC issues: not giving a shit about one main pillar of your industry dramatically increases the likelihood that you don't give a shit about multiple others too.
No, it really doesn't follow and there is no relationship between the two. New technology exists and it's now used to automate certain tasks that couldn't be automated before. This is like arguing that when people stopped writing assembly by hand they stopped giving a shit about coding. People aren't writing articles in artisanal fashion the way they used to because automation has advanced.
Again, feel free to point out actual criticism of the content of the article. I can't help but notice that despite all the whinging, you haven't actually pointed out anything of substance wrong with the article.
It'll be a cold day in hell before I dig into the veracity of an article in Forbes, "artisanal" or not.
The economy is bad and inflation has been running rampant and people can't afford things anymore? Holy shit, someone inform the Nobel committee, this guy is really onto something special.
Right, so there's nothing actually factually wrong in the article, and it provides relevant numbers and cites a recent report showing how much car payment delinquencies went up. Some people are actually interested in the details, even if you're not.
You just can't bring yourself to admit that your complaint is vapid. The reality is that whether something was edited by an LLM or not has fuck all to do with the quality of the content. Not only that, but it would also be completely absurd to assume that an article has any veracity merely because it was written without use of LLMs.
I don't know what's factually wrong in the article, but what I can say is that every article I've read in Forbes in the last 10+ years on subject matter that overlaps with my areas of expertise has been garbage. It's a pay-for-play clickbait publication and it has been that way for a long time.
Car payment delinquency is up among young people but mortgage payment delinquency is flat even with sky high prices and rates. So what? It's not 2007 where people think the economy is going to stay hyperbolic forever, everybody on the planet knows that shit sucks right now. It's a miracle that the wheels have stayed on as long as they have and this frankly isn't an indicator that they're finally coming off.
This is a filler article to try and drum up engagement for engagement's sake, and you can bet your ass if I had to write something this meaningless I'd use chat gpt too. Now if it was something important I'd do it myself, because I would want to show off to a future employer that I have a shred of talent in my body and that I'm not a dramatically more expensive equivalent of somebody in an Indonesian cube farm.
So, we're back to LLMs have nothing to do with anything here, and Forbes has been able to produce low quality content just fine without them.
Car payment delinquency is very much an important statistic, because as the article correctly points out, you need a car to go to work in most parts of the US. If you lose your car, you lose your job and then you lose everything else. Meanwhile, mortgage delinquencies are not flat. This is the kind of stuff you'd learn if you actually followed these things instead of just hand waving https://www.investopedia.com/mortgage-delinquencies-rise-faster-than-other-loans-11765607
I guess you have provided ample evidence that you can give chatgpt a run for its money, by writing a whole thread that's grammatically correct but devoid of all substance.
My problem is with the LLM in general. I purposely avoid the AI written shit if I can because I don't want to support the use of AI and/or LLMs.
I mean you do you I guess? 🤷
AI Luddites Unite! Anyone who says "I don't care" or "I don't see the problem with LLMs" needs to learn about the environmental damage being done by huge AI systems.
You gotta update your talking points. Energy needs for both training and running models have already dropped dramatically. Models that used to require a whole data centre just a year ago now run on a desktop. https://venturebeat.com/ai/deepseek-v3-now-runs-at-20-tokens-per-second-on-mac-studio-and-thats-a-nightmare-for-openai/
I'd rather see some cool uses of LLMs in an organizational capacity than constantly bemoaning how bad they are. I don't really see how we can put the cat back in the bag at this point.
It really is just perseveration at this point.