7
submitted 2 weeks ago by [email protected] to c/[email protected]

we're finally on Lemmy 0.19.12! check out the changes here:

here's a quick summary of what changed:

  • our instance fork is now in line with the stable upstream version of Lemmy, 0.19.12. big shoutout to @[email protected] for their significant labor in documenting the upstream changes we could expect from 0.19.3 to the new version and in figuring out what the stable version of Lemmy even is (the 0.20.0 and 1.0.0 series of releases don't even talk to their own frontends), and to froztbyte, @[email protected], and @[email protected] for moral support during the upgrade process.
  • all our instance features merged fine into the new version (this, shockingly, wasn't the hard part).
  • our Lemmy Nix module configuration has been moved out of the infrastructure repo, flakeified, and brought in line with the current state of the Lemmy NixOS module. in the process, I fixed two major bugs in the Lemmy NixOS module around secret handling and federation. I will not be upstreaming these changes because the Nix people like murderbots and fascists more than they like having contributors.
  • we're now running on the latest stable NixOS, 25.05.
  • I've removed the infrastructure code for the now-unused staging instance; now we just have prod and dev.
  • we've migrated to PostgreSQL 16, the version currently in use by the Lemmy Docker container.
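
for the curious, the Postgres pin is the easy part; a minimal sketch of what it looks like in a NixOS config, using the stock `services.postgresql` module (this is illustrative, not a copy of our actual config):

```nix
{ pkgs, ... }:
{
  # pin the major version explicitly so a channel bump can't silently
  # change the on-disk cluster format out from under Lemmy; 16 matches
  # what the Lemmy Docker container currently ships
  services.postgresql = {
    enable = true;
    package = pkgs.postgresql_16;
  };
}
```

note that the actual major-version migration (dump from the old cluster, restore into the new one) still happens out of band; NixOS won't do that part for you.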

as always, post here or in the testing thread if anything seems extremely broken

17
submitted 3 weeks ago by [email protected] to c/[email protected]

the good news: we're now on the newest stable lemmy!

the bad news: federation feels a little off to me? sometimes this is a federation queue thing that resolves itself, sometimes it's an indication of a problem.

things to test if you want to help out:

  • see if you can see your posts on other lemmy and mastodon instances
  • post here from other instances
  • see if you can load communities, threads, and comments in non-local communities (this is a big one)
  • see if you can load our communities from other instances and see up-to-date threads and comments
  • make sure your own profile settings are as they should be
  • if you aren't getting email notifications and should be, let me know


I'll push all my changes and post a full changelog once we know 0.19.12's running stable!

18
submitted 3 weeks ago by [email protected] to c/[email protected]

our version of lemmy is old enough that clients like mlem are starting to break due to API drift, so I’m finally upgrading us to the latest stable version of lemmy. this will involve a bit of downtime and potentially a number of breakages; keep an eye out for anything that doesn’t look right after the upgrade and let us know!

[-] [email protected] 123 points 6 months ago

jesus fuck

it’s not particularly gonna help or even make me feel better, but I’m probably gonna reopen that first Lemmy thread a little later and just start banning these awful fuckers from our instance. nobody attacking Asahi has a god damn thing to say to any member of our community.

17
submitted 7 months ago by [email protected] to c/[email protected]

after some extended downtime, I rolled out the following changes to our instance:

  • pict-rs was migrated to version 0.4 then 0.5. this should hopefully fix an issue where pict-rs kept leaking TCP sockets and exhausting its resources, leading to our image uploads and downloads becoming non-functional. let me know if you run into any issues along those lines!
  • NixOS was updated to 24.11.
  • the instance's storage was expanded by 100GB. this increased the bill for our instance by €1.78 per month. to keep the bill low, I disabled an automated backup feature that became unnecessary when we started doing Restic backups.
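
as a sketch of the pict-rs part: pict-rs migrates its on-disk store one major version at a time, so the upgrade amounts to pinning the package at each hop and letting it migrate on startup. roughly, via the NixOS `services.pict-rs` module (option names worth double-checking against your channel; not our exact config):

```nix
{ pkgs, ... }:
{
  # pict-rs can only migrate the previous major version's store, so an
  # 0.3 deployment has to boot 0.4 once before moving on to 0.5;
  # pinning the package makes each hop an explicit, separate deploy
  services.pict-rs = {
    enable = true;
    package = pkgs.pict-rs;  # pinned per migration step
  };
}
```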

I have one more thing I want to implement before our big Lemmy upgrade; I expect I should be able to fit it in tomorrow. I'll update this thread with details when I start on it.

16
submitted 8 months ago by [email protected] to c/[email protected]

since we’ve been experiencing a few image cache breakages, I’m scheduling some maintenance for January 24th at 8AM GMT to upgrade our pict-rs version, increase the total amount of storage available to our production instance, and do a handful of other maintenance tasks. this won’t include a lemmy upgrade, but I plan to do one soon after this maintenance round. I anticipate the maintenance should take around 2-4 hours, but will post updates on the instance downtime page and Mastodon if anything changes.

[-] [email protected] 59 points 8 months ago

fucking wild you busted out a Dollar Tree word like abstruse but came here to brag about how you didn’t read the article because you couldn’t understand its extremely simply worded headline

[-] [email protected] 81 points 9 months ago* (last edited 9 months ago)

this is a gentle reminder to posters in this thread that the fediverse in general is nowhere near secure from an opsec perspective; don’t post anything that compromises yourself or us.

with that said, happy December 4th to those who celebrate. post commemorative cocktail recipes here.

e: remember, they call it the fediverse cause it’s full of feds

17
submitted 10 months ago by [email protected] to c/[email protected]

we have a WriteFreely instance now! I wrote up a guide to why it exists, why it's so fucking janky, and what we can do to fix it.

10
submitted 10 months ago by [email protected] to c/[email protected]

this is somewhat of a bigger update, and it's the product of a few things that have been in progress for a while:

email

email should be working again as of a couple months ago. good news: our old provider was, ahem, mildly inflating our usage to get us off their free plan, so this part of our infrastructure is going to cost a lot less than anticipated.

backups

we now have a restic-based system for distributed backups, thanks to a solid recommendation from @[email protected]. this will make us a lot more resilient to the possibility of having our host evaporate out from under us, and make other disaster scenarios much less lethal.
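
for reference, the stock NixOS restic module keeps this kind of setup pretty compact; a hedged sketch with placeholder repository, paths, and secret locations (not our real config):

```nix
{
  # scheduled off-host backups via the NixOS restic module; everything
  # below except the option names is a placeholder
  services.restic.backups.awful-systems = {
    initialize = true;  # create the repository if it doesn't exist yet
    repository = "sftp:backup@backup-host.example:/srv/restic";
    passwordFile = "/run/secrets/restic-password";
    paths = [ "/var/lib/lemmy" "/var/backup/postgresql" ];
    timerConfig.OnCalendar = "daily";
    pruneOpts = [ "--keep-daily 7" "--keep-weekly 4" "--keep-monthly 6" ];
  };
}
```

pointing a second backup job at a different repository is what makes it "distributed" — no single host holds the only copy.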

writefreely

I used some of the spare capacity on our staging instance to spin up a new WriteFreely instance where we can post long-form articles and other stuff that's more suitable for a blog. post your gibberish at gibberish.awful.systems! contact me if you'd like an invite link; WriteFreely instances are particularly vulnerable to being turned into platforms for spam and nothing else, so we're keeping this small-scale for instance regulars for now.

alongside all the ordinary WriteFreely stuff (partial federation, a ton of jank), our instance has a special feature: if you have an account, you can make a PR on this repository and once it's merged, gibberish will automatically pull its frontend files from that repo and redeploy WriteFreely. currently this is only for the frontend, but there's a lot you can do with that -- check out the templates, pages, less, and static directories on the repo to see what gets pulled. check it out if you see some jank you want to fix! (also it's the only way to get WriteFreely to host images as part of a post, no I'm not kidding)
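
the auto-redeploy is roughly a timer that pulls the repo and bounces WriteFreely when HEAD moves; a simplified sketch (unit names, paths, and schedule here are illustrative, not the real deployment):

```nix
{ pkgs, ... }:
{
  systemd.services.gibberish-frontend-pull = {
    description = "pull gibberish frontend repo and redeploy on change";
    path = [ pkgs.git ];
    serviceConfig.Type = "oneshot";
    script = ''
      cd /var/lib/writefreely/frontend
      before=$(git rev-parse HEAD)
      git pull --ff-only
      # only restart if the pull actually moved HEAD
      if [ "$before" != "$(git rev-parse HEAD)" ]; then
        systemctl restart writefreely.service
      fi
    '';
  };
  systemd.timers.gibberish-frontend-pull = {
    wantedBy = [ "timers.target" ];
    timerConfig.OnCalendar = "*:0/15";  # poll every 15 minutes
  };
}
```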

what's next?

next up, I plan to turn off Hetzner's backups for awful.systems and use that budget to expand the node's storage by 100GB, which should increase the monthly bill by around 2.50 euros. I want to go this route to expand our instance's storage instead of using an object store like S3 or B2 because using block storage makes us more resilient to Hetzner or Backblaze evaporating or ending our service, and because it's relatively easy to undo this decision if it proves not to scale, but very hard to go from using object storage back to generic block storage.

after that, it'll be about time to carefully upgrade to the current version of Lemmy, and to get our fork (Philthy) in a better state for contributions.

as always, see our infrastructure deployment flake for more documentation and details on how all of the above works.

41
submitted 10 months ago by [email protected] to c/[email protected]

this post has been making the rounds on Mastodon, for good reason. it’s nominally a post about the governance and community around C++, but (without spoiling too much) it’s written as a journey packed with cathartic sneers at a number of topics and people we’ve covered here before. as a quick preview, tell me this isn’t relatable:

This is not a feel good post, and to even call it a rant would be dismissive of the absolute unending fury I am currently living through as 8+ years of absolute fucking horseshit in the C++ space comes to fruition, and if I don’t write this all as one entire post, I’m going to physically fucking explode.

fucking masterful

an important moderator note for anyone who comes here looking to tone police in the spirit of the Tech Industry Blog Social Compact: lol

[-] [email protected] 45 points 10 months ago

really stretching the meaning of the word release past breaking if it’s only going to be available to companies friendly with OpenAI

Orion has been teased by an OpenAI executive as potentially up to 100 times more powerful than GPT-4; it’s separate from the o1 reasoning model OpenAI released in September. The company’s goal is to combine its LLMs over time to create an even more capable model that could eventually be called artificial general intelligence, or AGI.

so I’m calling it now, this absolute horseshit’s only purpose is desperate critihype. as with previous rounds of this exact same thing, it’ll only exist to give AI influencers a way to feel superior in conversation and grift more research funds. oh of course Strawberry fucks up that prompt but look, my advance access to Orion does so well I’m sure you’ll agree with me it’s AGI! no you can’t prompt it yourself or know how many times I ran the prompt why would I let you do that

That timing lines up with a cryptic post on X by OpenAI CEO Sam Altman, in which he said he was “excited for the winter constellations to rise soon.” If you ask ChatGPT o1-preview what Altman’s post is hiding, it will tell you that he’s hinting at the word Orion, which is the winter constellation that’s most visible in the night sky from November to February (but it also hallucinates that you can rearrange the letters to spell “ORION”).

there’s something incredibly embarrassing about the fact that Sammy announced the name like a lazy ARG based on a GPT response, which GPT proceeded to absolutely fuck up when asked about. a lot like Strawberry really — there’s so much Binance energy in naming the new version of your product after the stupid shit the last version fucked up, especially if the new version doesn’t fix the problem

[-] [email protected] 60 points 11 months ago

I love Blade Runner, but I don’t know if we want that future. I believe we want that duster he’s wearing, but not the, uh, not the bleak apocalypse.

there’s nothing more painful than when capitalists think they understand cyberpunk

59
submitted 1 year ago by [email protected] to c/[email protected]

this article is about how and why four of the world’s largest corporations are intentionally centralizing the internet and selling us horseshit. it’s a fun and depressing read about crypto, the metaverse, AI, and the pattern of behavior that led to all of those being pushed in spite of their utter worthlessness. here’s some pull quotes:

Web 3.0 probably won’t involve the blockchain or NFTs in any meaningful way. We all may or may not one day join the metaverse and wear clunky goggles on our faces for the rest of our lives. And it feels increasingly unlikely that our graphic designers, artists, and illustrators will suddenly change their job titles to “prompt artist” anytime soon.

I can’t stress this point enough. The reason why GAMM and all its little digirati minions on social media are pushing things like crypto, then the blockchain, and now virtual reality and artificial intelligence is because those technologies require a metric fuckton of computing power to operate. That fact may be devastating for the earth, indeed it is for our mental health, but it’s wonderful news for the four storefronts selling all the juice.

The presumptive beneficiaries of this new land of milk and honey are so drunk with speculative power that they'll promise us anything to win our hearts and minds. That anything includes magical virtual reality universes and robots with human-like intelligence. It's the same faux-passionate anything that proclaimed crypto as the savior of the marginalized. The utter bullshit anything that would have us believe that the meek shall inherit the earth, and the powerful won't do anything to stop it.

4
submitted 1 year ago by [email protected] to c/[email protected]

we’ve exceeded the usage tier for our email sending API today (and they kindly didn’t email me to tell me that was the case until we were 300% over), so email notifications might be a bit spotty/non-working for a little bit. I’m working on figuring out what we should migrate to — I’m leaning towards AWS SES as by far the cheapest option, though I’m no Amazon fan and I’m open to alternatives as long as they support sending over SMTP
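
whatever provider wins only has to speak SMTP as far as Lemmy is concerned; its email block looks roughly like this (key names from Lemmy's config reference, all values are placeholders, SES shown purely as an example):

```nix
{
  services.lemmy.settings.email = {
    # any SMTP-capable provider slots in here unchanged
    smtp_server = "email-smtp.eu-central-1.amazonaws.com:465";
    smtp_login = "ses-smtp-user";
    smtp_from_address = "[email protected]";
    tls_type = "tls";
    # smtp_password belongs in a secrets file, not the world-readable
    # nix store
  };
}
```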

[-] [email protected] 86 points 1 year ago

Copilot then listed a string of crimes Bernklau had supposedly committed — saying that he was an abusive undertaker exploiting widows, a child abuser, an escaped criminal mental patient. [SWR, in German]

These were stories Bernklau had written about. Copilot produced text as if he was the subject. Then Copilot returned Bernklau’s phone number and address!

and there’s fucking nothing in place to prevent this utterly obvious failure case, other than if you complain Microsoft will just lazily regex for your name in the result and refuse to return anything if it appears

[-] [email protected] 55 points 1 year ago

But don’t worry! Google’s AI summaries will soon have ads!

dear fuck, pasting ads onto the part of Google search that’s already known to be unreliable and annoying at best seems like a terrible idea. for a laugh, let’s see if there’s any justification for this awful shit in the linked citation

Ads have always been an important part of consumers’ information journeys.

oh these people are on the expensive drugs huh

72
submitted 1 year ago by [email protected] to c/[email protected]

after the predictable failure of the Rabbit R1, it feels like we’ve heard relatively nothing about the Humane AI Pin, which released first but was rapidly overshadowed by the R1’s shittiness. as it turns out, the reason why we haven’t heard much about the Humane AI pin is because it’s fucked:

Between May and August, more AI Pins were returned than purchased, according to internal sales data obtained by The Verge. By June, only around 8,000 units hadn’t been returned, a source with direct knowledge of sales and return data told me. As of today, the number of units still in customer hands had fallen closer to 7,000, a source with direct knowledge said.

it’s fucked in ways you might not have seen coming, but Humane should have:

Once a Humane Pin is returned, the company has no way to refurbish it, sources with knowledge of the return process confirmed. The Pin becomes e-waste, and Humane doesn’t have the opportunity to reclaim the revenue by selling it again. The core issue is that there is a T-Mobile limitation that makes it impossible (for now) for Humane to reassign a Pin to a new user once it’s been assigned to someone.

92
submitted 1 year ago by [email protected] to c/[email protected]
[-] [email protected] 61 points 1 year ago

it’s time for you to fuck off back to your self-hosted services that surely aren’t just a stack of constantly broken docker containers running on an old Dell in your closet

but wait, what’s this?

@[email protected]

oh you poor fucking baby, you couldn’t figure out how to self-host lemmy! and it’s so easy compared with mail too! so much for common sense!

[-] [email protected] 60 points 1 year ago

At the same time, most participants felt the LLMs did not succeed as a creativity support tool, by producing bland and biased comedy tropes, akin to “cruise ship comedy material from the 1950s, but a bit less racist”.

holy shit that’s a direct quote from the paper

[-] [email protected] 118 points 1 year ago* (last edited 1 year ago)

there’s this type of reply guy on fedi lately who does the “well actually querying LLMs only happens in bursts and training is much more efficient than you’d think and nvidia says their gpus are energy-efficient” thing whenever the topic comes up

and meanwhile a bunch of major companies have violated their climate pledges and say it’s due to AI, they’re planning power plants specifically for data centers expanded for the push into AI, and large GPUs are notoriously the part of a computer that consumes the most power and emits a ton of heat (which notoriously has to be cooled in a way that wastes and pollutes a fuckton of clean water)

but the companies don’t publish smoking gun energy usage statistics on LLMs and generative AI specifically so who can say

[-] [email protected] 86 points 1 year ago

the secret sauce is always hiding labor exploitation behind a thick layer of bad ideas

[-] [email protected] 63 points 2 years ago

for anyone who wants to increase Amazon’s GPT bill by generating dildo limericks, it looks like this is only enabled for Amazon’s app, not their website
