this post was submitted on 19 Jan 2025
190 points (99.5% liked)

Lemmy.ca's Main Community

3012 readers


Welcome to the lemmy.ca/c/main community!

All new users on lemmy.ca are automatically subscribed to this community, so this is the place to read announcements, make suggestions, and chat about the goings-on of lemmy.ca.

For support requests specific to lemmy.ca, you can use [email protected].


founded 4 years ago

Hello everyone, we're long overdue for an update on how things have been going!

Finances

Since we started accepting donations back in July, we've received a total of $1350, as well as $1707 in older donations from smorks. We haven't had any expenses other than OVH (approx. $155/mo) since then, leaving us $2152 in the bank.

We still owe TruckBC $1980 for the period he was covering hosting, and I've contributed $525 as well (mostly non-profit registration costs, plus domain renewals). We haven't yet discussed reimbursing either of us; we're both happy to build up a contingency fund for a while.

New Server

A few weeks ago, we experienced a ~26-hour outage due to a failed power supply and extremely slow response times from OVH support. This was followed by an unexplained outage the next morning at the same time. To ensure Lemmy’s growth remains sustainable for the long term and to support other federated applications, I’ve donated a new physical server. This will give us a significant boost in resources while keeping the monthly cost increase minimal.

Our system specs today:

  • Undoubtedly the cheapest hardware OVH could buy
  • Intel Xeon E-2386G (6 cores @ 3.5GHz)
  • 32GB of RAM
  • 2x 512GB Samsung NVMe in RAID 1
  • 1Gbit network
  • $155/month

The new system:

  • Dell R7525
  • AMD EPYC 7763 (64 cores @ 2.45GHz)
  • 1TB of RAM
  • 3x 120GB SATA SSD (hardware RAID 1 with a hot spare, for Proxmox)
  • 4x 6.4TB NVMe (ZFS mirrored + striped, for data)
  • 1Gbit network with a 50Mbit commit (see 95th-percentile billing)
  • Redundant power supplies
  • Next-day hardware support until Aug 2027
  • $166/month + tax
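For the curious, the "mirrored + striped" layout above (two mirrored pairs striped together, RAID 10-style) would be created with something like the following sketch; the pool and device names are placeholders, not the actual configuration:

```shell
# Hypothetical device names: a pool striped across two mirrored NVMe pairs.
# Losing one drive in either mirror costs no data.
zpool create data \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1
```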

This means that instead of renting an entire server and having OVH be responsible for the hardware, we'll be renting co-location space at a Vancouver datacenter via a 3rd-party service provider I know.
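The bandwidth line in the specs above bills at the 95th percentile: usage is sampled every few minutes over the month, the top 5% of samples are discarded, and the highest remaining sample is what gets billed, so short bursts above the 50Mbit commit cost nothing. A rough sketch (the provider's exact sampling interval and method are assumptions here):

```python
# Sketch of 95th-percentile bandwidth billing (not the provider's exact method).
def billable_mbit(samples_mbit):
    """Return the 95th-percentile value of a month of throughput samples."""
    ordered = sorted(samples_mbit)
    # Drop the top 5% of samples; bill the highest of the rest.
    index = int(len(ordered) * 0.95) - 1
    return ordered[max(index, 0)]

# Example: mostly idle traffic with five large bursts out of 100 samples.
samples = [10] * 95 + [400] * 5
print(billable_mbit(samples))  # bursts fall in the discarded 5%, so -> 10
```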

These servers are extremely reliable, but if there is a failure, either Otter or I will be able to get access reasonably quickly. We also have full out-of-band access via iDRAC, so it's pretty unlikely we'll ever need to go on site.

Server Migration

Phase 1 is currently planned for Jan 29th or 30th and will completely move us out of OVH and onto our own hardware. I'm expecting probably a 2-3 hour outage, followed by a 6-8 hour window where some images may be missing as the object store resyncs. I'll make another follow-up post in a week with specifics.
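As an aside on the resync window: one common way to mirror an S3-compatible object store is a tool like rclone (I don't know the exact tooling used here, and the remote and bucket names below are placeholders):

```shell
# Copies only missing/changed objects, so images reappear
# progressively over the resync window rather than all at once.
rclone sync oldstore:lemmy-images newstore:lemmy-images --progress
```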

Phases 2+ I'm not 100% decided on yet and have not planned a timeline around. It would get us into a fully redundant (excluding hardware) setup that's easier to scale and manage down the road, but it does add a little bit of complexity.

Let me know if you have any questions or comments, or feedback on the architecture!

top 42 comments
[–] [email protected] 29 points 1 month ago

That's quite a beefy machine.

Nice to know things are chugging along nicely.

[–] [email protected] 24 points 1 month ago

I'm always so glad that this is the instance I chose to join in the Great Migration, and equally glad that I've been welcome here. Keep up the awesome work and thank you for keeping the communication so open.

[–] [email protected] 20 points 1 month ago

Wow that's one fat machine. I like it!

Thank you for the continued support! I can see our monthly donations are used well. ☺️

[–] [email protected] 19 points 1 month ago

What a machine to donate! Thanks!

I donated for a couple of months but had to stop due to personal finances; hope to start again soon.

[–] [email protected] 17 points 1 month ago

Thanks for all that you do, and the great communication!

[–] [email protected] 13 points 1 month ago (1 children)

I'm seeing a net debt of about $400 hanging over you guys after all is said and done. I don't think anyone is clamoring for that money to be paid back, but I'd be happier if you guys didn't have to worry about it. I'll send some money for that reason, as well as all the good work you all do in supporting and maintaining this. Thank you.

[–] [email protected] 8 points 1 month ago

Thanks, appreciate this and all the other donations I've seen in the past day!

[–] [email protected] 11 points 1 month ago (1 children)

I don't know enough about how Lemmy works. Can you have an active/passive cluster (physical or virtual) to help with uptime for the instance?

[–] [email protected] 11 points 1 month ago (1 children)

Yes, we can effectively do active/active. That's basically stage 3, but it's all on one box to keep costs down, so we get software-failure redundancy but not hardware redundancy.
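For context, active/active on one box usually means a load balancer spreading requests across several backend processes. A hedged sketch of what that could look like with HAProxy (ports and names are hypothetical, not the actual setup):

```
backend lemmy
    balance roundrobin
    # Several lemmy_server processes on one host; if one stutters,
    # health checks route traffic to the others (software redundancy only).
    server lemmy1 127.0.0.1:8536 check
    server lemmy2 127.0.0.1:8537 check
    server lemmy3 127.0.0.1:8538 check
```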

[–] [email protected] 8 points 1 month ago (1 children)

The reason I ask is: can I host a server for you as a passive node to help out?

[–] [email protected] 14 points 1 month ago (1 children)

Ah thanks for the offer but there's PII concerns there.

[–] [email protected] 4 points 1 month ago

Ah, I didn't know that. No problem, just trying to help. Love the Lemmy community.

[–] [email protected] 11 points 1 month ago (1 children)

If you are ok with saying, how much did that server cost?

[–] [email protected] 15 points 1 month ago (1 children)

When purchased brand new by a now-dying tech company, it was about $20-25k. I put dibs on it as part of my commission for managing the shutdown & sale of their datacenters; this was just one of many such servers they owned.

[–] [email protected] 7 points 1 month ago (1 children)

Is that something you do often, getting cheap gear like that? Smart move!

[–] [email protected] 8 points 1 month ago

Not usually to this extent.

[–] [email protected] 10 points 1 month ago (1 children)

Probably out of context, but do you have any plans of adding other networks under Fedecan? Like mstdn.ca?

And are there any plans for other services like Pixelfed, Friendica, or Peertube?

[–] [email protected] 7 points 1 month ago (1 children)

Probably out of context, but do you have any plans of adding other networks under Fedecan? Like mstdn.ca?

We're open to it, and it has a number of benefits, but we haven't formally discussed with their team what that might look like.

And are there any plans for other services like Pixelfed, Friendica, or Peertube?

Yes, we definitely want to spin up more services once we're settled. Pixelfed is near the front of that list, as is Friendica.

We haven't said no to any of them, but, for example, there isn't as much of a need for us to spin up Mastodon since mstdn.ca exists. A lot of us have accounts there too.

[–] [email protected] 4 points 1 month ago

That's awesome! I can't wait for it to grow. I'd love to help in any way I can.

[–] [email protected] 8 points 1 month ago (1 children)

What would be an ideal monthly contribution for lemmy.ca users? I've only done one-time donations so far, but I'm willing to set up a monthly donation.

[–] [email protected] 7 points 1 month ago

I think that's just a personal decision.

[–] [email protected] 8 points 1 month ago (1 children)

What’s the average cost per user per year at this time?

[–] [email protected] 10 points 1 month ago (2 children)

Good question. Just going off the numbers on the main page side bar and doing some quick math, maybe about $1.10 per user per year.
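That figure is just annual hosting cost divided by active users. With the new server's $166/month and a hypothetical user count (the real number comes from the sidebar stats), the math looks like:

```python
# Hosting cost is from the post; the user count is assumed for illustration.
monthly_cost = 166           # $/month for the new colocated server
active_users = 1800          # hypothetical; substitute the sidebar's figure
cost_per_user_year = monthly_cost * 12 / active_users
print(round(cost_per_user_year, 2))  # -> 1.11
```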

[–] [email protected] 8 points 1 month ago (1 children)

That’s so much lower than I was expecting.

Do we have something like a week of fundraising or a pledge drive to get the word out on donations? $2 or $5 a year is very reasonable.

[–] [email protected] 9 points 1 month ago (1 children)

I'm not worried about it yet; we're covering our costs as it is today with existing donations. I don't want us to be like Wikipedia, begging for money all the time.

[–] [email protected] 6 points 1 month ago

lol that was the case study I had in mind to avoid

[–] [email protected] 2 points 1 month ago

Wow that’s amazingly cheap! It doesn’t cost much to have digital independence.

[–] bdonvr 7 points 1 month ago (1 children)

Do you really feel you're lacking in compute resources?

How does Lemmy scale with more active users?

[–] [email protected] 11 points 1 month ago

Compute, no, but memory, yes. Lemmy is actually pretty lean and efficient, but 32GB is a bit tight for a few instances of it plus Postgres. We run multiple instances to reduce the impact when one stutters (not uncommon).

Upgrading to 64GB probably would have let us scale on the existing box for the next year, but I had this totally overkill hardware, so might as well use it!

[–] [email protected] 7 points 1 month ago (2 children)

I've actually been investigating Postgres cluster configurations the past 2 weeks at work (though we're considering CloudNativePG+Kubernetes on 3 nodes spanning two physical locations).

One thing I might recommend is to investigate adding a proxy like PgBouncer in front of the databases. This will manage request differences where write-queries must go to the primary, but read-queries may go to any of the replicas as well.

It should also better handle the recycling of short-lived and orphaned connections, which may become more of a concern in your stage 3, and especially in a stage 4.

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago)

PgBouncer is more of a connection pooler than a proxy.

Outside of the k8s ecosystem, Patroni with HAProxy is probably a better option.

PgBouncer is still good practice though, because Postgres just can't handle a high number of connections.
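For anyone unfamiliar, a minimal pgbouncer.ini illustrating transaction pooling might look like this; all names and values are placeholders, not a recommendation for this instance:

```ini
[databases]
lemmy = host=127.0.0.1 port=5432 dbname=lemmy

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; Transaction pooling: a server connection is held only for the
; duration of each transaction, so many clients share few connections.
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```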

[–] [email protected] 4 points 1 month ago

Ah yeah good call, thanks.

[–] [email protected] 6 points 1 month ago

Thank you for all your work.

[–] [email protected] 6 points 1 month ago

The new server is 😻

[–] [email protected] 5 points 1 month ago (1 children)

Sounds good!

Architecture question: why are pictures stored in two locations? Why not just use Postgres and be done with it? BLOB is a supported type, after all.

[–] [email protected] 21 points 1 month ago (1 children)

Metadata is in the DB; files are in object storage. This is pictrs's design and the correct way to build it.

We're at 1TB of images today; no way I'd want to deal with scaling Postgres to multiple TB. Object storage is cheap, scalable, easy to distribute and manage across multiple providers, etc.
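The split can be sketched like this (an illustrative toy, not pictrs's actual code): the database row holds only metadata plus a key, while the bytes live in the object store and are fetched by that key:

```python
# Toy stand-ins for the two storage tiers.
db_rows = {}        # metadata only: small, easy to back up and query
object_store = {}   # raw bytes: cheap, scalable, provider-agnostic

def upload_image(key, data, uploader):
    object_store[key] = data                 # bytes go to object storage
    db_rows[key] = {"uploader": uploader,    # DB keeps metadata + a pointer
                    "size": len(data),
                    "store_key": key}

def fetch_image(key):
    meta = db_rows[key]                      # look up metadata in the DB
    return meta, object_store[meta["store_key"]]  # then fetch bytes by key

upload_image("cat.webp", b"\x00" * 1024, "alice")
meta, data = fetch_image("cat.webp")
print(meta["size"])  # -> 1024
```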

[–] [email protected] 11 points 1 month ago

So if I'm understanding correctly, the current architecture is the easier and cheaper of the options. Makes sense, thank you.

[–] [email protected] 5 points 1 month ago

32GB -> 1TB is not an upgrade you can sneeze at. Plus, 6C -> 64C. Nice. And no mention of the HT jump either, which might be even more impressive if the first was a low-end chip with the same number of threads as cores.

Looks like you’ll be set for some time to come.

This was followed by an unexplained outage the next morning at the same time.

Very sus. You should demand an RCA (Root Cause Analysis).

[–] [email protected] 5 points 1 month ago (1 children)

Can you remind me how/where to donate?

[–] [email protected] 4 points 1 month ago

Great news!
