[-] [email protected] 9 points 11 hours ago

wait til you find out what the ml does stand for, it’s a real trip (and it sure as fuck ain’t Mali)

[-] [email protected] 5 points 11 hours ago

holy fuck please learn when to shut the fuck up

[-] [email protected] 11 points 1 day ago

programmers learned what N means in statistics and immediately realized that “this N is too small” is a cool shortcut to sounding smart without reading the study, its goals, or its conclusions. and you can use it every time N is smaller than the human population on earth!

[-] [email protected] 11 points 1 day ago

this particular abyss just fucking hurts to gaze into

[-] [email protected] 11 points 2 days ago

the reason why we’re calling AI a bubble isn’t because we think the people illegally running gas generators to power their datacenters have suddenly grown a conscience

we’re calling it a bubble because just like with NFTs, there’s no use case for LLMs or generative AI that stands up to even mild scrutiny, but the people funneling money into this crap don’t seem to have noticed yet

[-] [email protected] 11 points 6 days ago

what the numbers show is that nobody gives a shit. nobody’s paying for LLMs and nobody’s running the models locally either, because none of it has a use case. masturbating in public about how invested you are in your special local model changes none of this.

[-] [email protected] 9 points 6 days ago

no, you fuckers wandered into an anti-AI community and started jacking off about local models

[-] [email protected] 8 points 6 days ago

Please calm down.

for some reason this has gotten people very worked up

Seriously I don’t know what I said that is so controversial or hard to understand.

I don’t know why it’s controversial here.

imagine coming into a conversation with people you don’t fucking know, taking a swing and a miss at one of them, and then telling the other parties in the conversation that they need to calm down — about racism.

the rest of your horseshit post is just you restating your original point. we fucking got it. and since you missed ours, here it is one more time:

race science isn’t real. we’re under no obligation to use terms invented by racists that describe nothing. if we’re feeling particularly categorical about our racists on a given day, or pointing out that one is using the guise of race science? sure, use the term if you want.

tone policing people who want to call a racist a racist ain’t fucking it. what in the fuck do you think you added to this conversation? what does anyone gain from your sage advice that “X is Y but Y isn’t X” when the other poster didn’t say that Y is X but instead that Y doesn’t exist?

so yeah no I’m not calm, go fuck yourself. we don’t need anyone tone policing conversations about racism in favor of the god damn racists

[-] [email protected] 123 points 3 months ago

jesus fuck

it’s not particularly gonna help or even make me feel better, but I’m probably gonna reopen that first Lemmy thread a little later and just start banning these awful fuckers from our instance. nobody attacking Asahi has a god damn thing to say to any member of our community.

17
submitted 5 months ago by [email protected] to c/[email protected]

after some extended downtime, I rolled out the following changes to our instance:

  • pict-rs was migrated to version 0.4 then 0.5. this should hopefully fix an issue where pict-rs kept leaking TCP sockets and exhausting its resources, leading to our image uploads and downloads becoming non-functional. let me know if you run into any issues along those lines!
  • NixOS was updated to 24.11.
  • the instance's storage was expanded by 100GB. this increased our instance's bill by €1.78 per month. to keep the bill low, I disabled an automated backup feature that became unnecessary when we started doing Restic backups.

I have one more thing I want to implement before our big Lemmy upgrade; I expect I should be able to fit it in tomorrow. I'll update this thread with details when I start on it.

16
submitted 5 months ago by [email protected] to c/[email protected]

since we’ve been experiencing a few image cache breakages, I’m scheduling some maintenance for January 24th at 8AM GMT to upgrade our pict-rs version, increase the total amount of storage available to our production instance, and do a handful of other maintenance tasks. this won’t include a lemmy upgrade, but I plan to do one soon after this maintenance round. I anticipate the maintenance should take around 2-4 hours, but will post updates on the instance downtime page and Mastodon if anything changes.

17
submitted 7 months ago by [email protected] to c/[email protected]

we have a WriteFreely instance now! I wrote up a guide to why it exists, why it's so fucking janky, and what we can do to fix it.

10
submitted 7 months ago by [email protected] to c/[email protected]

this is somewhat of a bigger update, and it's the product of a few things that have been in progress for a while:

email

email should be working again as of a couple months ago. good news: our old provider was, ahem, mildly inflating our usage to get us off their free plan, so this part of our infrastructure is going to cost a lot less than anticipated.

backups

we now have a restic-based system for distributed backups, thanks to a solid recommendation from @[email protected]. this will make us a lot more resilient to the possibility of having our host evaporate out from under us, and make other disaster scenarios much less lethal.

writefreely

I used some of the spare capacity on our staging instance to spin up a new WriteFreely instance where we can post long-form articles and other stuff that's more suitable for a blog. post your gibberish at gibberish.awful.systems! contact me if you'd like an invite link; WriteFreely instances are particularly vulnerable to being turned into platforms for spam and nothing else, so we're keeping this small-scale for instance regulars for now.

alongside all the ordinary WriteFreely stuff (partial federation, a ton of jank), our instance has a special feature: if you have an account, you can make a PR on this repository and once it's merged, gibberish will automatically pull its frontend files from that repo and redeploy WriteFreely. currently this is only for the frontend, but there's a lot you can do with that -- check out the templates, pages, less, and static directories on the repo to see what gets pulled. check it out if you see some jank you want to fix! (also it's the only way to get WriteFreely to host images as part of a post, no I'm not kidding)
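
if you're wondering what that auto-pull mechanism looks like in spirit, here's a minimal Python sketch of a poll-and-redeploy loop. the repo location, paths, and service name below are placeholders, and the real wiring lives in our infrastructure flake rather than a script like this:

```python
# illustrative only: a tiny poller that pulls frontend files from a git checkout
# and restarts WriteFreely when something changed. the checkout path, target
# directory, and systemd unit name are placeholders, not the actual deployment.
import subprocess
import time

REPO_DIR = "/var/lib/writefreely-frontend"   # placeholder checkout location
TARGET_DIR = "/var/lib/writefreely"          # placeholder WriteFreely data dir
SERVICE = "writefreely.service"              # placeholder unit name


def pull() -> bool:
    """git pull the checkout; return True if new commits arrived."""
    before = subprocess.run(
        ["git", "-C", REPO_DIR, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
    after = subprocess.run(
        ["git", "-C", REPO_DIR, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return before != after


def redeploy() -> None:
    # copy the tracked frontend directories into place, then bounce the service
    for d in ("templates", "pages", "less", "static"):
        subprocess.run(
            ["rsync", "-a", "--delete", f"{REPO_DIR}/{d}/", f"{TARGET_DIR}/{d}/"],
            check=True,
        )
    subprocess.run(["systemctl", "restart", SERVICE], check=True)


if __name__ == "__main__":
    while True:
        if pull():
            redeploy()
        time.sleep(300)  # poll every five minutes
```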

what's next?

next up, I plan to turn off Hetzner's backups for awful.systems and use that budget to expand the node's storage by 100GB, which should increase the monthly bill by around €2.50. I want to expand our instance's storage this way instead of using an object store like S3 or B2 for two reasons: block storage makes us more resilient to Hetzner or Backblaze evaporating or ending our service, and it's relatively easy to undo this decision if it proves not to scale, but very hard to move from object storage back to generic block storage.

after that, it'll be about time to carefully upgrade to the current version of Lemmy, and to get our fork (Philthy) in a better state for contributions.

as always, see our infrastructure deployment flake for more documentation and details on how all of the above works.

41
submitted 7 months ago by [email protected] to c/[email protected]

this post has been making the rounds on Mastodon, for good reason. it’s nominally a post about the governance and community around C++, but (without spoiling too much) it’s written as a journey packed with cathartic sneers at a number of topics and people we’ve covered here before. as a quick preview, tell me this isn’t relatable:

This is not a feel good post, and to even call it a rant would be dismissive of the absolute unending fury I am currently living through as 8+ years of absolute fucking horseshit in the C++ space comes to fruition, and if I don’t write this all as one entire post, I’m going to physically fucking explode.

fucking masterful

an important moderator note for anyone who comes here looking to tone police in the spirit of the Tech Industry Blog Social Compact: lol

59
submitted 10 months ago by [email protected] to c/[email protected]

this article is about how and why four of the world’s largest corporations are intentionally centralizing the internet and selling us horseshit. it’s a fun and depressing read about crypto, the metaverse, AI, and the pattern of behavior that led to all of those being pushed in spite of their utter worthlessness. here’s some pull quotes:

Web 3.0 probably won’t involve the blockchain or NFTs in any meaningful way. We all may or may not one day join the metaverse and wear clunky goggles on our faces for the rest of our lives. And it feels increasingly unlikely that our graphic designers, artists, and illustrators will suddenly change their job titles to “prompt artist” anytime soon.

I can’t stress this point enough. The reason why GAMM and all its little digirati minions on social media are pushing things like crypto, then the blockchain, and now virtual reality and artificial intelligence is because those technologies require a metric fuckton of computing power to operate. That fact may be devastating for the earth, indeed it is for our mental health, but it’s wonderful news for the four storefronts selling all the juice.

The presumptive beneficiaries of this new land of milk and honey are so drunk with speculative power that they'll promise us anything to win our hearts and minds. That anything includes magical virtual reality universes and robots with human-like intelligence. It's the same faux-passionate anything that proclaimed crypto as the savior of the marginalized. The utter bullshit anything that would have us believe that the meek shall inherit the earth, and the powerful won't do anything to stop it.

4
submitted 10 months ago by [email protected] to c/[email protected]

we’ve exceeded the usage tier for our email sending API today (and they kindly didn’t email me to tell me that was the case until we were 300% over), so email notifications might be a bit spotty/non-working for a little bit. I’m working on figuring out what we should migrate to — I’m leaning towards AWS SES as by far the cheapest option, though I’m no Amazon fan and I’m open to other options as long as they’ve got an option to send with SMTP
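
for the curious, sending through something like SES's SMTP interface is about as boring as plumbing gets. here's a rough Python sketch of what the notification path could look like (the host, credentials, and addresses below are placeholders, not anything we actually run):

```python
# rough sketch: send a notification through a generic SMTP relay (SES exposes
# one in this shape). host, credentials, and addresses are all placeholders.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "email-smtp.eu-central-1.amazonaws.com"  # placeholder endpoint/region
SMTP_PORT = 587
SMTP_USER = "SMTP-USERNAME"  # placeholder SMTP credentials
SMTP_PASS = "SMTP-PASSWORD"


def send_notification(to_addr: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"  # placeholder sender
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)

    # STARTTLS on port 587, then authenticate and hand the message over
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
        smtp.starttls()
        smtp.login(SMTP_USER, SMTP_PASS)
        smtp.send_message(msg)


if __name__ == "__main__":
    send_notification("someone@example.com", "test", "hello from the instance")
```

the nice part about sticking to plain SMTP is that swapping providers later is just a change of host and credentials.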

72
submitted 11 months ago by [email protected] to c/[email protected]

after the predictable failure of the Rabbit R1, it feels like we’ve heard next to nothing about the Humane AI Pin, which released first but was rapidly overshadowed by the R1’s shittiness. as it turns out, the reason why we haven’t heard much about the Humane AI Pin is because it’s fucked:

Between May and August, more AI Pins were returned than purchased, according to internal sales data obtained by The Verge. By June, only around 8,000 units hadn’t been returned, a source with direct knowledge of sales and return data told me. As of today, the number of units still in customer hands had fallen closer to 7,000, a source with direct knowledge said.

it’s fucked in ways you might not have seen coming, but Humane should have:

Once a Humane Pin is returned, the company has no way to refurbish it, sources with knowledge of the return process confirmed. The Pin becomes e-waste, and Humane doesn’t have the opportunity to reclaim the revenue by selling it again. The core issue is that there is a T-Mobile limitation that makes it impossible (for now) for Humane to reassign a Pin to a new user once it’s been assigned to someone.

92
submitted 11 months ago by [email protected] to c/[email protected]

39
submitted 11 months ago by [email protected] to c/[email protected]

as I was reading through this one, the quotes I wanted to pull kept growing in size until it was just the whole article, so fuck it, this one’s pretty damning

here’s a thin sample of what you can expect, but it gets much worse from here:

Internal conversations at Nvidia viewed by 404 Media show when employees working on the project raised questions about potential legal issues surrounding the use of datasets compiled by academics for research purposes and YouTube videos, managers told them they had clearance to use that content from the highest levels of the company.

A former Nvidia employee, whom 404 Media granted anonymity to speak about internal Nvidia processes, said that employees were asked to scrape videos from Netflix, YouTube, and other sources to train an AI model for Nvidia’s Omniverse 3D world generator, self-driving car systems, and “digital human” products. The project, internally named Cosmos (but different from the company’s existing Cosmos deep learning product), has not yet been released to the public.

48
submitted 1 year ago by [email protected] to c/[email protected]

so Andreessen Horowitz posted another manifesto just over a week ago and it’s the most banal fash shit you can imagine:

Regulatory agencies have been green lit to use brute force investigations, prosecutions, intimidation, and threats to hobble new industries, such as Blockchain.

Regulatory agencies are being green lit in real time to do the same to Artificial Intelligence.

does this shit ever get deeper than Regulation Bad? fuck no it doesn’t. is this Horowitz’s attempt to capitalize on the Supreme Court’s judiciary coup? you fucking bet.

here’s some more banal shit:

We find there are three kinds of politicians:

Those who support Little Tech. We support them.

Those who oppose Little Tech. We oppose them.

Those who are somewhere in the middle – they want to be supportive, but they have concerns. We work with them in good faith.

I find there are three kinds of politicians:

  • those who want hamburger. I give them hamburger.
  • those who abstain from hamburger. I do not give them hamburger.
  • those who have questions about hamburger. I refer them to the shift supervisor in good faith.

[-] [email protected] 118 points 1 year ago* (last edited 1 year ago)

there’s this type of reply guy on fedi lately who does the “well actually querying LLMs only happens in bursts and training is much more efficient than you’d think and nvidia says their gpus are energy-efficient” thing whenever the topic comes up

and meanwhile a bunch of major companies have violated their climate pledges and say it’s due to AI, they’re planning power plants specifically for data centers expanded for the push into AI, and large GPUs are notoriously the part of a computer that consumes the most power and emits a ton of heat (which then has to be cooled in a way that wastes and pollutes a fuckton of clean water)

but the companies don’t publish smoking gun energy usage statistics on LLMs and generative AI specifically so who can say

51
submitted 1 year ago by [email protected] to c/[email protected]

who could have seen this coming, other than everyone who told the homebrew tree inverter guy this was a bad idea they absolutely shouldn’t do
