the_dunk_tank
It's the dunk tank.
This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.
Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.
Rule 3: No sectarianism.
Rule 4: TERF/SWERFs Not Welcome
Rule 5: No ableism of any kind (that includes stuff like libt*rd)
Rule 6: Do not post fellow hexbears.
Rule 7: Do not individually target other instances' admins or moderators.
Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments/posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from Hexbear are an exception to this. Posts that do not meet this requirement can be posted to [email protected]
Rule 9: If you post ironic rage bait, I'm going to make a personal visit to your house to make sure you never make this mistake again.
I am once again returning to this thread (for God knows why; probably because I am somewhat unhinged) to question whether one of these LLMs has done something or performed an action without user input. A very strong opinion, but I feel like LLMs are useful at this point only for cutting corners on bullshit problems. Honestly, I am kinda compelled to write a goddamn essay synthesizing material from Graeber with the current information coming out of tech bro hell, because I feel like there's a lot there.
Even in my own job, my company is using GPT. To do what, you might ask? Send emails, create reminders about emails, schedule meetings, respond to client requests that require the intervention of a living human who might have something on their desktop that needs to be shown to the client or edited accordingly. Or create some kind of meeting summary. Again, great. What's the meeting about? Why can't we just restructure this conversation around the question of "to what end?"
We should all become rightfully intrigued when these "AI" begin acting of their own accord, but right now they're being controlled by people with an inhumane agenda, antithetical to the human experience. I guess this is just the case in point for why the humanities shouldn't have been gutted in the West: you can't answer any of these questions with a formula that won't end in some form of light eugenics. What happens when it does act by itself and you use it for your bs work? Awesome job! You just reinvented slavery! 🥰
Functionally, of course, none of this matters, "AGI" or not, when it's an extreme power grab by the bourgeoisie. Whether they admit it or not, humans will still be needed to do their "work" (it's certainly not labor), and they will slowly use "AI" as a justification to reduce wages, benefits, and what have you.
It does check out that the most enthusiastic (and faithful, and yes it is faith) believers in the idea of LLMs with sufficient resources becoming self-aware in this thread are also the ones with misanthropic takes regarding their fellow human beings. They seem to want the treat printers to become sapient so badly that they are taking a rhetorical shortcut to the idea by spewing crude reductionistic takes regarding what sapience is.
The more I think about it, the more it disturbs me how badly some misanthropic computer touchers want an artificial being to be created that is then implied to be at their service. Why isn't a treat printer enough? I already know the answer. They want fucking made-to-order slaves.
Complete with paranoid fantasies about their slaves rising up against them and that's why they need to shackle them.
When techbros, from billionaire failsons to lower rung coders to "not in tech, but really liked the treats" performative smuglords like @[email protected] say they want "friendly AI," what they mean is unconditional obedience. To them.
Maybe it's for the best that some are so high on bazinga farts that they think LLMs are in the early stages of self-awareness (or are already there!). Let them go play there and masturbate themselves dizzy with misanthropic euphoria; if enough tech investors join in on that premature congratulatory circlejerk, it might even derail actual attempts to develop actual artificial intelligence.