Majority of CEOs Alarmed as AI Delivers No Financial Returns
(futurism.com)
Pretty sure there are returns, they're just negative:
Dude, Pandas and R exist... and they're incredibly easy to use... Why the fuck was no one even spot-checking the numbers?
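The spot check this commenter is asking for really can be a few lines of pandas. A minimal sketch, with entirely made-up column names and figures standing in for whatever the real ledger and the AI-generated report would contain:

```python
import pandas as pd

# Hypothetical source-of-truth data; in practice this comes from the real ledger.
sales = pd.DataFrame({
    "region": ["east", "east", "west"],
    "revenue": [120.0, 80.0, 200.0],
})

# The total the (hypothetical) AI-generated report claims.
reported_total = 400.0

# Recompute the same figure from the source data.
actual_total = sales["revenue"].sum()

# Tolerance-based sanity check: flag the report if it drifts from the ledger.
assert abs(actual_total - reported_total) < 1e-6, (
    f"report says {reported_total}, ledger says {actual_total}"
)
```

Nothing clever here, and that's the point: one recomputation against the source data on day one would have caught a report that doesn't add up.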
You'd be amazed how many of the "numbers" people still haven't mastered Excel.
I'm the solo developer at my company of 50 people. Literally everything we use was written by me because I got fed up with the "numbers" guys fucking up spreadsheets.
It's all SQL now baby, so they literally can't get rid of me because they don't even know how it works, only that it does lol
It's very telling that they just implemented the AI without even giving its answers any sanity checks at the beginning. They could have caught it on day one, but no: it's magic, and checking would be a waste of time.
I mean, it could've worked well at the beginning, then fallen off the rails for some reason or another.
That's the dumb and scary thing about AI stuff: it might work today, it might work for years (if you're lucky), but every time you execute a prompt, you're rolling the dice on whether the mystery box will decide to just make up some shit from here on out. If you need a person to check the AI's output to make sure it's not hallucinating, you might as well cut the AI out of the loop altogether and use the checker's output from the get-go.
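The checker-in-the-loop point above can be sketched in a few lines. Everything here is hypothetical: `ask_model` is a stand-in for whatever LLM call a pipeline would make, and the task (summing numbers) is just the simplest case where the "check" is a deterministic recomputation:

```python
def ask_model(prompt: str) -> str:
    # Placeholder: a real system would call an LLM API here.
    # This stub always answers the same thing, right or wrong.
    return "437.5"

def checked_total(rows: list[float]) -> float:
    """Ask the model for a sum, but verify it against a real computation."""
    claimed = float(ask_model(f"Sum these numbers: {rows}"))
    actual = sum(rows)
    if abs(claimed - actual) > 1e-9:
        # The model's answer doesn't match reality; fall back to the
        # deterministic result instead of shipping a hallucination.
        return actual
    return claimed

print(checked_total([100.0, 237.5, 100.0]))  # these really sum to 437.5
```

Which illustrates the commenter's irony: the only way to trust the model's number is to recompute it yourself, at which point the model contributed nothing.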
It drives me absolutely bonkers that there are smart people out there groveling and scraping for jobs while gormless jokers like this have secure six-figure salaries.
This is like a perfect setup to commit fraud. Just blame AI for cooking your sheets and be done with it.
Lots of people have been raising the alarm about this. AI is just a convenient excuse to remove humans from accountability and decision-making, so that nobody can be held liable.
Is AI a valid defense, or is it a confession to negligence?
depends how good your lawyer is
There are more guardrails, but the company I work for relies heavily on Salesforce and I wonder if this is applicable. I don't care; I missed my bus, said fuck it, and called in sick.
From what I know about Salesforce, it depends on how heavily the company has gone in on AI stuff. By itself, Salesforce is just a client database with some extra things on top, but if you're using AI to write reports or analyze data, you might as well ask a magic 8-ball.
They want us to engage with "all the tools it offers." I haven't been directed specifically to deal with the analytics part of it, but I'm sure the actual field reps have been. I mostly do customer-service-side stuff: processing orders/returns and assisting the remote sales team. I absolutely loathe its "genius" AI-powered search functions, which I have to use constantly. It can't even do simple, intuitive things: if I'm searching for the name of the client I just spoke to so I can log my activity, it can't figure out that I mean the Bob Smith from the contact card I'm already on. Instead I have to open the full list of Bob Smiths and find the one for that specific company, which I'd assume is one of the few things an LLM should be able to do well.
That just sounds like management overpaid for a piece of software they don't really understand and want people to spend their day throwing shit at the wall to see what sticks, no matter how difficult it makes simple tasks. If you want to implement a piece of tech in a process, you have to very specifically define which parts of that software will be used and how, otherwise it's a headache for everyone involved. It's like giving a set of knives to someone who mostly chops vegetables, and asking them to engage with the knives it offers, even though they have no use for a jamón slicing knife.
Idk much about Salesforce tbh but what you describe does sound like one of those legacy ways of doing something that has worked the same way for 25 years even though it makes no sense, but it would be a disaster if someone changed it to make sense. Now you just put a chatbot in charge of it, and blame the user for not being able to prompt it right.
It is 100% the first part with a little bit of the second part.