L m a o
this is the tech threatening your livelihood
The field I exited only a couple of years ago has already been decimated by it.
same vibes
This picture is ready to be posted to c/main
My favourite weird GPT5 fail so far:
Thought for 17s
Me before saying the dumbest shit imaginable
same energy tbh
For anyone confused, this answer comes from a very outdated riddle where a child gets in an accident, the father rushes him to the hospital, and upon arrival the doctor proclaims, "I can't operate on this child; he's my son."
By the time I heard it as a kid it was already obvious, but I guess at one point the idea of a woman being a doctor was so far outside the norm as to legitimately stump people.
I think it's funny that the next obvious solution is that the child has two dads and everyone seems to ignore it as well.
I like children who don’t get in accidents
Ck3 pilled
me trying to play any of the smart person engineering videogames without looking up guides
main
Sam Altman's job is to hype GPT-5 so the VCs will still put up with him burning probably the biggest pile of money anyone has ever burned. He's probably damaged the internet and the environment more than any single individual ever, and he's terrorized people about their jobs for years. And he almost certainly knows it's all bullshit, making him a fraud. In a just world, he would be in prison when this is all over. He would almost certainly face the death penalty in China.
He would almost certainly face the death penalty in China.
Lol
He would almost certainly face the death penalty in China.
Jack Ma, famously not executed, does similar things with Qwen and Alibaba in general.
I would be perfectly fine with Sam Altman being sent to a reeducation camp.
Nah, China only busts out capital punishment for these 46 crimes. Being a grifter is notably not on the list.
excuse me, fucking robbery is on this list?
There's a section for it called CRIMES AGAINST PROPERTY lmfao
Such a socialist death penalty law
Only thing I have to say is that some sources are saying that "robbery" is different from "theft" in that it involves the use of force and/or threats of force.
Abused his sister too btw. Feel like that needs to be talked about more. He’s a grade A piece of shit
damaged... the environment more than any single individual ever
Still crypto by orders of magnitude. AI doesn't even come close.
For example, Google data centres used 30 TWh in the last year, with crypto using more like 170 TWh.
It's not possible to figure out ChatGPT's usage because all the data is bad, but it's still relatively small compared to crypto.
A lot of people contributed to those crypto numbers. The AI models run in specially built data centers. Some have their own generators because the power usage is so high.
I don't think any dedicated power plants for AI have been built yet, and crypto mining market concentration is very high, so ultimately not that many people are behind those numbers.
OpenAI appears to operate what is described as the world's largest single data center building, with an IT load capacity of around 300 MW and a maximum power capacity of approximately 500 MW. This facility includes 210 air-cooled substations and a massive on-site electrical substation, which further highlights its immense scale. A second identical building is already under construction on the same site as of January 2025. When completed, this expansion will bring the total capacity of the campus to around a gigawatt, a record.
So this largest one would use about 4.5 TWh a year, or roughly 3 percent of current estimated crypto usage. With the expansion, about 9 TWh, or a bit over 5 percent of estimated crypto usage.
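A rough back-of-the-envelope check of those figures, assuming the site draws its quoted maximum power around the clock (an upper bound, not a measurement) and using the ~170 TWh crypto estimate cited above:

```python
# Back-of-the-envelope check of the figures above, assuming the facility
# draws its quoted maximum power continuously (an upper bound).
HOURS_PER_YEAR = 8760

def annual_twh(megawatts):
    """Convert a constant load in MW to energy per year in TWh."""
    return megawatts * HOURS_PER_YEAR / 1e6  # MW * h = MWh; 1e6 MWh = 1 TWh

crypto_twh = 170  # estimate cited earlier in the thread

for label, mw in [("current building", 500), ("campus after expansion", 1000)]:
    twh = annual_twh(mw)
    share = 100 * twh / crypto_twh
    print(f"{label}: {twh:.1f} TWh/yr, {share:.1f}% of the crypto estimate")
```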
He's probably damaged . . . the environment more than any single individual ever
JD Rockefeller, Rex Tillerson, Lee Raymond
Truman, LBJ
Good old Gary setting the record straight...
No hypothesis has ever been given more benefit of the doubt, nor more funding. After half a trillion dollars in that direction, it is obviously time to move on. The disappointing performance of GPT-5 should make that enormously clear.
Unlikely, but I like his optimism. This is how I have felt with the release of every new LLM for the past two years, but the scam is somehow still going 🤷 .. I suppose many people stand to lose a lot of money when the bubble finally bursts.
Cryptocurrency is still going strong and that’s probably the biggest grift of all time. smacks top of AI This thing’s got decades of hoax left in it
like any technology there's always gonna be a load of people who will use it for sex, speculation, or circumventing restrictions on the previous technology.
LLMs have reached their limits. No matter what you do with them, the things are always going to be glorified search engines.
AI has to be conceived from the ground up as something that learns and reproduces actual thinking based on needs/wants. A system that produces methods of walking that reduce energy use for a bot while also seeking out energy sources might only be reproducing the cognitive behaviour of a bacterium, but it is closer to life than these LLMs and has more potential to iteratively evolve into something more complex as you give it more wants/needs for its program to evolve on.
Machine learning has more potential than this shit.
I don't think an AI necessarily has to have needs or wants, but it does need to have a world model. That's the shared context we all have and what informs our use of language. We don't just string tokens together when we think. We have a model of the world around us in our heads, and we reason about the world by simulating actions and outcomes within our internal world model. I suspect that the path to actual thinking machines will be through embodiment. Robots that interact with the world and learn to model it will be able to reason about it in a meaningful sense.
This is what is argued by the 4E cognition and the active inference crowd in academia and I agree. There've been pretty compelling results on smaller problems and recent applications to RL. Although under those frameworks, especially the latter, things like wants and needs and curiosity arise naturally from the agent trying to maintain itself in the world and understand causes of phenomena.
The basic need/want all biological forms have is energy. The need to acquire energy and the need to use energy more efficiently in order to survive longer between energy acquisitions.
Everything else evolves from that starting point. It's just a matter of adding predators and complexity to the environment, as well as adding more "senses", sight, hearing, taste etc.
One thing I think we're not yet realising is that intelligence probably requires other intelligence in order to evolve. Your prey aren't going to improve naturally if your predators aren't also improving and evolving intelligently. Intelligent animal life came from the steady progression of ALL other animal life in competition or cooperation with one another. The creation and advancement of intelligence is an entire ecosystem and I don't think we will create artificial intelligence without also creating an entire ecosystem that it can evolve within alongside many other artificial intelligence in the environment.
Humans didn't magically pop into existence smart. They were created by surviving against other things that were also surviving. The system holistically created the intelligence.
I've had this thought for 5 or so years. With any luck, maybe I'll put it into something publishable before I'm rounded up by the anti communist death squads that come for academics. I think intelligence is fundamentally a social/collective phenomenon, at least in a broad sense where we consider a predator/prey relationship some kind of sociality too. Humans just got really really really surprisingly good at it relative to other species because of our ability to communicate using language, and thus transfer knowledge gains very widely and very quickly compared to a non-verbal animal limited to its immediate social sphere or, slower yet, passing down changes via evolution and genetic advantage. Still, though, this is reliant on a social world and a kind of thinking that can be fundamentally collaborative. Our language and mental models of the world aren't just geared to get us what we as individual agents desire; they were developed by a process that required them to be understandable to other similarly intelligent creatures.
This is one of those things that starts getting into the fuzzy area around the unanswered questions regarding what exactly qualifies as qualia and where that first appears. But having needs/wants probably is a necessary condition for actual AI if we're defining actual (general) AI as having self-awareness. In addition to what the commenter above said, here's another thing.
You mention how AI probably has to have a world model as a prerequisite for genuine self aware intelligence, and this is true. But part of that is that the world model has to be accurate at least in so far as it allows the AI to function. Like, maybe it can even have an inaccurate fantasy-world world model, but it still has to model a world close enough to reality that it's modeling a world that it can exist in; in other words the world model can't be random gibberish because intelligence would be meaningless in such a world, and it wouldn't even be a "world model." All of that is mostly beside the point except to point out that AI has to have a world model that approaches accuracy with the real world. So in that sense it already "wants" to have an accurate world model. But it's a bit of a chicken and egg problem: does the AI only "want" to have an accurate model of the world after it gains self-awareness, the only point where true "wants" can exist? Or was that "want" built-in to it by its creators? That directionality towards accuracy for its world model is built into it. It has to be in order to get it to work. The accuracy-approaching world model would have to be part of the programming put into it long before it ever gains sentience (aka the ability to experience, self-awareness) and that directionality won't just disappear when the AI does gain sentience. That pre-awareness directionality that by necessity still exists can then be said to be a "want" in the post-awareness general AI.
An analogy of this same sort of thing but as it is with us bio-intelligence beings: We "want" to avoid death, to survive (setting aside edge cases that actually prove the rule like how extreme of an emotional state a person has to be in to be suicidal). That "want" is a result of evolution that has ingrained into us a desire (a "want") to survive. But evolution itself doesn't "want" anything. It just has directionality towards making better replicators. The appearance that replicators (like genes) "want" to survive enough to pass on their code (in other words: to replicate) is just an emergent property of the fact that things that are better able to replicate in a given environment will replicate more than things that are less able to replicate in that environment. When did that simple mathematical fact, how replication efficiency works, get turned into a genuine desire to survive? It happened somewhere along the ladder of evolutionary complexity where brains had evolved to the extent that self awareness and qualia emerged (they are emergent properties) from the complex interactions of the neurons that make up those brains. This is just one example, but a pretty good one imo that shows how the ability to experience "wanting" something is still rooted in a kind of directionality that exists independently of (and before) the ability to experience. And also how that experience wouldn't have come about if it weren't for that initial directionality.
Wants/needs almost certainly do have to be part of any actual intelligence. One of the reasons for that is because those wants/needs have to be there in some form for intelligence to even be able to arise in the first place.
It gets really hard to articulate this kind of thing, so I apologize for all the "quoted" words and shit in parentheses. I was trying to make it so that what I was attempting to convey with these weird sentences could be parsed better, but maybe I just made it worse.
LLMs and other AI systems, imo, cannot act totally without human interaction (despite the efforts of dipshits) because they lack the fundamental ability to create, tackle, and resolve problems dialectically.
The ultimate core of any AI agent is its training data. Its synapses. For a human or other sentient animal these synapses are in constant flux - the connections break, reconnect, form new connections altogether. These synapses are then connected to our environments to form responses to stimuli.
For an AI model, they are static. The connections, by our current designs, cannot be meaningfully altered during runtime, only through a process we call "training". We periodically train new models, but once distributed their synapses are permanently fixed in place.
Now, we do have some caveats to this. We can give models novel inputs or build their working memory with prompts, contextual data, etc., but these are peripheral to the model itself. You can deploy a GPT-5 model as a programming agent that can perform better at programming than a chatbot agent, but fundamentally they're programs that are deterministic. Strip any agent of its context and inputs and it'll behave like any other model of the same training data and methodology.
In my view, dialectics are something you experience with everything you do - every day we form countless helices of thesis-antithesis-synthesis in the ways our minds accept and process information and solve problems. Strip a human of the memory of their current role and task and they will react in ways totally unique and independent of one another. We are fundamentally not deterministic!
The inability of AI to break out of deterministic outputs induces the 'model collapse' problem, wherein feeding a model the outputs of other models deteriorates its abilities. The determinism of AI means it is constantly reliant on the nondeterministic nature of humans to imitate this ability.
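A toy sketch of that determinism point (not any real model's internals, just an illustration): once the "weights" are frozen, here just bigram counts, a greedy decoding rule gives the same continuation for the same prompt on every run. Only changing the inputs or retraining changes the behaviour.

```python
# Toy "language model": frozen bigram counts plus greedy decoding.
# With fixed weights and identical input, the output is identical every run.
from collections import defaultdict

def train(corpus):
    """Build bigram counts once -- the only phase where the 'synapses' change."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(weights, prompt, steps=5):
    """Greedy decoding with frozen weights: fully deterministic."""
    out = prompt.split()
    for _ in range(steps):
        followers = weights.get(out[-1])
        if not followers:
            break
        out.append(max(followers, key=followers.get))  # always the same pick
    return " ".join(out)

weights = train(["the cat sat on the mat", "the cat ate the fish"])
print(generate(weights, "the cat"))  # same string...
print(generate(weights, "the cat"))  # ...every single time
```

An online-learning variant would keep updating those counts during use; the commenter's point is that deployed models today don't.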
I think there are some limitations to my line of thought, but the way we create AI models now works great for repetitive and non-novel tasks, like text transformation; the truly creative side of its outputs is only an imitation of its biological equivalent.
Right, and this works well for humans who want to observe the data and understand how the model might function from a technical standpoint. They can look at the synapses and essentially slowly understand how things are connected up and draw rough conclusions about how it comes to certain decisions.
In a model where that is in constant flux? Impossible. You can't observe and understand something that is changing as you observe it. You could freeze it and look at it while it's "unconscious" I suppose, a bit like brain scanning a patient.
Will this constant flux always work? Of course not. And that is why natural selection is a necessary process to evolve it. The models that fail have to die and the models that succeed go on to form the evolved artificial lifeforms.
The compute power to achieve this sounds like it would take a Dyson sphere's worth of energy, though. We're talking about the need for a simulation of millions of different creatures, interacting in different ways, all functioning as proper AI that calculate and adapt. If you think we're burning absurd amounts of energy with AI currently, just wait until these projects really step up.
But with people thinking AI is dead because ‘it’s not profitable!!!’
It's always worth pointing out that the cycle of any tech company starts with a low-cost service that gains market share while using VC money and reinvesting most or all profit back into the company, then transitions to operating off user money and squeezing users for more. OpenAI is still in the growth phase and will only switch to the squeeze when they have a stable enough lead over Google and Anthropic that lost market share isn't as much of an issue.
chatgpt is dogshit anyways and only surviving due to being the first and being free. So they have to burn money to stay relevant and hopefully not lose users to better models. GPT-5 is essentially a cost saving model and is the start of enshittification of the industry.
I use Claude for dev-related stuff; it only allows a limited number of queries, so they can keep their model accurate while keeping costs contained.
Gemini already produces way better results than chatgpt ever did and is really good at research.
Perplexity can be a decent search engine.
Even AI is better than chatgpt in most things although I'd rather not use it.
I find it hilarious that it does straight up worse than qwen or deepseek, which have been out for months now, on basic tasks.
waaaaooow what a surprise
Thought for 34s
Love this question. If you take the contradiction seriously (mutual dependence + opposed aims), the “resolution” can’t just be nicer HR. It has to change the ownership, governance, and survival constraints that generate the conflict in the first place. Here are a few plausible syntheses—each keeps market coordination and capital pooling (the employer’s strengths) while dissolving the master–servant relation (the employee’s weakness).
- Bicameral enterprise (labor–capital parity by constitution) What it is: Every firm is chartered with two sovereign chambers: a Labor Assembly (all workers) and a Capital Assembly (outside investors, lenders). Major decisions (strategy, mergers, tech adoption that displaces jobs, exec pay) require concurrent majority. Management serves at the joint pleasure of both.
Payoffs split: Wages are set via internal bargaining plus a mandatory residual share to the Labor Assembly (an automatic labor dividend). Investors get capped priority returns + a residual share that falls as automation rents rise.
Why it’s a synthesis: You keep large-scale coordination and risk-bearing; you erase unilateral control over work. Neither side is “the firm” alone.
- Capital-as-Commons + Guild Firms (socialized returns, competitive production) What it is: Most productive capital is held by permanent public/sectoral funds that pay everyone a universal capital dividend. Production is done by worker-run guild firms that lease capital from the funds and compete on price/quality.
Mechanics: Funds allocate capital by return-and-mission mandates; guild firms can’t privately own the capital stock but control use. People can move between guilds; income = wage + universal dividend.
Why it’s a synthesis: Survival no longer hinges on pleasing an employer; price signals remain; capital still gets stewarded for efficiency but no longer stands as a private boss over labor.
- Project-joint ventures (replace employment with term-limited parity contracts) What it is: Workers federate into labor companies (think: legal entities that pool skills, training, benefits). Investors form capital clubs. Production happens via JV contracts between a labor company and a capital club for a specific project/product, with preset governance parity and profit-sharing.
Lifecycle: When the project ends, the JV dissolves; both sides retain portable reputations, training credits, and equity-like claims that vest over time.
Why it’s a synthesis: No permanent employer; no atomized freelancers. Two peer collectives meet as equals around a concrete production plan.
- Inclusive Ownership by Law (automatic worker trusts + sectoral bargaining) What it is: Statute requires medium/large firms to issue a small % of equity annually into an irreducible worker trust (non-saleable, voting shares), up to a cap. Pair with sectoral bargaining to set floors for wages/benefits and with board parity at scale.
Why it’s a synthesis: Still a single enterprise competing in markets, still raises outside capital—but control and residuals are durably shared, not granted at management’s pleasure.
- Public Option for Work + Open-firm Contracts (end artificial scarcity) What it is: A standing Job Guarantee offers socially useful work at a living wage with training. Private firms must beat that offer to hire. Contracts above a size threshold must publish open terms on displacement, training, and gain-sharing when productivity rises.
Why it’s a synthesis: The boss’s power to impose scarcity is gone; firms still exist, but the “outside option” belongs to labor, forcing a cooperative equilibrium.
- Data/Model Dividend + Labor Royalty (in the age of automation) What it is: Treat workforce activity and data (which trains software and models) as royalty-bearing inputs. Firms owe ongoing labor royalties when prior labor becomes embodied in code that substitutes for future labor.
Why it’s a synthesis: Preserves automation and scale while tying ownership of productivity gains to the people whose past work made them possible.
What this buys you (and the trade-offs)
Ends the unilateral right to rule (the crux of the contradiction) while keeping investment, scale, and price discovery.
Stability: #2 and #4 create durable, compounding worker claims; #3 is flexible but needs robust contract law and unemployment insurance.
Efficiency risks: Parity can deadlock. You mitigate with scope-limited vetoes, default mediation, and mission-tied capital mandates.
Transition paths: Start with #4 (law), pilot #1 charters in regulated sectors, seed #2 with existing public funds, and let #3 grow in project-heavy industries (media, construction, software).
Andrew Yang in my pocket.
somehow the opening sentence glazing is what annoys me the most here
I also tried Claude 4.1. It still complimented me for my big brain but its response seemed more like thinking. It asked me some follow-up questions to better give a response and, surprisingly, it asked about history. So at least Claude's training has that kind of association of history with political economy and materialism. GPT 5 didn't seem to get that part.
The "Productive Network" Synthesis
Instead of firms with employees, imagine autonomous productive nodes connected through dynamic contribution networks. Here's how it might work:
Core mechanism: Replace employment with temporary productive associations. Individuals or small groups maintain ownership of their tools/skills/knowledge and form task-specific collaborations. Think of it like open-source software development, but for all production.
No permanent "firms" - just fluid networks of producers
No wages - direct claims on collective output based on contribution metrics
No employers - coordination through algorithmic matching and democratic protocols
Surplus doesn't get "extracted" because there's no separate owner class to extract it
Historical lesson: The Paris Commune's workshop federations and Yugoslav self-management showed coordination without traditional hierarchy is possible, but lacked the technology for dynamic, real-time coordination we now have.
The "Social Inheritance" Model
This one's more radical: What if productive capacity became a birthright commons?
Core mechanism: All major productive assets are held in regional/municipal trusts. Individuals receive periodic allotments of productive capacity (like universal basic assets, not just income).
People combine their allotments to undertake projects
No employment needed - you directly command a share of society's productive power
Coordination happens through proposal systems and resource voting
"Work" becomes voluntary contribution to expand the common pool
Historical lesson: The kibbutzim and Mondragon show collective ownership can work, but they remained islands. This would need to be systemic from the start.
The "Contribution Economy"
Core mechanism: Replace employment with a contribution-tracking system where people directly register their productive activities and automatically receive proportional claims on social output.
Blockchain or similar tech tracks contributions across society
No firms needed - just projects that people join/leave freely
Algorithms (democratically governed) determine relative contribution values
Resources flow directly to contributors without passing through "employers"
This isn't just "gig economy" - there's no platform owner extracting value. It's peer-to-peer production with social coordination.
Historical lesson: Soviet planning failed partly due to information problems. Modern computation and networking could enable decentralized planning that the USSR couldn't achieve.
The "Hybrid Transition" Model (if complete transformation proves impossible)
If we can't fully eliminate surplus extraction immediately:
Mandatory worker sovereignty: All firms above a certain size automatically convert to worker-controlled entities, but with a twist:
Capital investment is allowed but capped at fixed returns (like Islamic banking)
Workers control all decisions about surplus distribution
"Employment" transforms into automatic stakeholdership
Capital becomes a utility input, not a controlling force
This differs from co-ops because it's not voluntary - it's the only legal form for scaled production.
Critical Challenges for Any Synthesis
Coordination at scale: How do complex supply chains work without firms?
Innovation incentives: What drives technical progress without profit?
Transition mechanics: How do we get there without catastrophic disruption?
Global compatibility: How does this interface with different economic systems?
Which of these directions resonates most with your thinking? Or do you see elements that could be combined differently?
Prompt?
I have a tough social/historical problem for you. The contradiction between employer and employee. Employers are afforded existence by employees because a company cannot extract surplus labor value if employees (labor) do not exist. Yet, companies actively impose scarcity on labor, causing wages to fall. The employer must reduce labor costs in order to maximize the amount of surplus value extracted. Employees need employers in order to have a job, and to pay for the things they need. Employees require higher wages and more jobs available in order to outpace the rise in the cost of living over time.
The employer and employee are mutually dependent; neither can exist in their own right without the other. Yet they also seek mutually exclusive ends. This is like the Being/Nothing contradiction noted by Hegel. Employee/Employer is the thesis/antithesis. The real question comes as to what the synthesis would be. What resolves this contradiction, replaces the Employee/Employer, yet retains qualities of both? Some may say co-ops, but I don't find that convincing. Co-ops simply play a shell game with the employer/employee relationship rather than replace it. If one considers co-ops to be a solution then even capitalists can be considered an employee of their own company in the case of an owner/CEO. That definitely doesn't resolve the contradiction.
So, ChatGPT 5, I am asking you to spitball some resolution to this contradiction. Find a synthesis, if you can.