[-] ticoombs@reddthat.com 1 points 6 days ago* (last edited 6 days ago)

Pictrs Test:

Looks all good now!

5
submitted 3 weeks ago by ticoombs@reddthat.com to c/antim@reddthat.com

Hey #AntiMeme

Feel like becoming a moderator for a next-to-no-content community? Ever felt like your calling was to resurrect a community, only to eventually leave it, fulfilling the community's inherent nature?

Let Me Know

Tiff

7

Hey #WebComics ,

We'd like to have a moderator who can keep the webcomics community alive and give it a nice refresh.

Please comment here (&/or send me a PM directly) if you wish to become a mod :) .

Thanks,

Tiff

32
submitted 1 month ago* (last edited 1 month ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

I found some time. In 15 minutes from this post we will go down for ~1 hour to ensure we have complete data consistency.

  • 09:00 UTC to 10:00 UTC

~~See you soon!~~

HI!!!!!!!!

47

I've made my position pretty clear on how this was the wrong move by our government...

Services that eSafety considers will be age-restricted social media platforms

Facebook
Instagram
Kick
Reddit
Snapchat
Threads
TikTok
Twitch
X (formerly Twitter)
YouTube

Services that eSafety considers will not be age-restricted social media platforms

Discord
GitHub
Google Classroom
LEGO Play
Messenger
Pinterest
Roblox
Steam and Steam Chat
WhatsApp
YouTube Kids
30
submitted 1 month ago* (last edited 1 month ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

The best time for an update: while hopped up on candy and coffee, trick-or-treating with the neighbours, enjoying family and friends, or just having a relaxing day. I hope wherever you are, you are enjoying the holiday season(s).

Reddthat has been super stable and nothing major to report except for pictrs crashing recently (as my lack of updates may have proven).

This sat in my drafts for way too long! But now's the time to finally update you all as there are actually items to update you on!

Moderation

The other admins haven't been online as much lately and I'm doing most of the moderation, so I may end up asking people if they would like to become administrators of Reddthat. Mainly I need some people in US and EU time zones (which should be nearly all of you 😅!). The majority of the work is actioning reports and ensuring we don't let in bots, or people who don't even try to read the application process.

If you would like a position, please PM me with:

  • Your timezone/Active Hours
  • What makes you a good candidate for an Admin. (Prev experience?)

Storage storage Storage. New Server!?

Edit: we completed it in https://reddthat.com/post/57007804

As I went to purchase the new server in June/July, right after posting our last update, the company IMMEDIATELY jacked up their prices. It must have been a typo, because it's now $135/m. That would give us 2x500GB and 32GB of RAM. The processor is a Xeon E-2136, which only comes with 12 threads, but with a single-thread rating of 2700, which is about double what OVH is giving us. So hopefully our database will be even faster once we make the change.

On OVH, we managed to finally get the storage increase and it has worked without any problems, which was perfect timing really. We now sit at around 75% usage of our 200GB drive, so I'm hoping to get 2-3 more months out of it and pray to the tech gods that there are some Black Friday dedicated server deals, or even Christmas ones, which will allow us to get a better deal.

And that is what happened. We managed to get a deal on a dedicated server at OVH for around $90/m, so only a little bit more than what we are paying now. Which is great. From a brief benchmark we get at least 1.5x faster CPU, which equates to shorter scheduled tasks, faster queries, and more resources available for showing everyone their own subscriptions and memes. That's not inclusive of the 500GB SSDs, which will allow our database to grow to 2-3x its current total. This means it should last us around 4-6 years (at the current growth of Lemmy as a whole).

I've been getting it working over the past 2 weeks. We're taking this time to also upgrade to Debian Trixie, which is currently not supported by the Lemmy ansible repo, so I will be upstreaming that change once I've fixed it. This includes keeping it up to date so that when we do the swap-over in a week's time it will take between 5 and 15 minutes. I'm pretty sure it will be closer to 15 minutes, as even at 1Gb/s our 100GB database will take around 800s to completely migrate at the theoretical maximum. (And a brief 15 minutes is better than mucking around with Postgres HA.)
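That 800s figure is just size ÷ bandwidth; a quick sketch (using the post's figures of a 100GB database over a 1Gb/s link, with a hypothetical helper name) to double-check it:

```python
# Back-of-the-envelope: best-case time to move a database over a network link.

def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Theoretical minimum transfer time in seconds (ignores all overhead)."""
    size_gigabits = size_gb * 8  # 1 gigabyte = 8 gigabits
    return size_gigabits / link_gbps

secs = transfer_seconds(100, 1.0)
print(f"{secs:.0f}s (~{secs / 60:.0f} minutes)")  # → 800s (~13 minutes)
```

In practice disk I/O, TCP overhead, and dump/restore time push this toward the 15-minute end of the estimate.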

This server change is probably going to happen on the 27th/28th. I'll add another post outlining the timeline on the day.

Cheers,
Tiff


Note: On Liberapay, donations are paid in advance, but you are more than welcome to make it recurring monthly instead of paying yearly. Don't worry too much about the "fees". It's just the cost of doing business via the credit card monopoly.

💸 "Expenses":

  • August Costs: ~A$116
  • September Costs: ~A$117
  • October Costs: ~A$121 (Increased storage costs and falling AUD/USD)
  • Nov Costs: ~A$125
  • Dec Costs: ~A$125

⭐ Donation "Statistics":

  • New Donators in Aug, Sept, Oct: 2
    • Thank you again! September and October was dedicated to both of you!
  • New Donators in Nov, Dec: 0
  • Total Weekly: ~$24.54
  • ("Monthly": 24.54×52÷12 = ~$106.34)
  • Our Public Donators:
    • AppleStrudel
    • ~1890351
    • Matthew Fennell

🥅 Goal: 24.54 / 60.00

Want a month dedicated to you? -> https://liberapay.com/reddthat

PS: don't like fees? Use Crypto (Litecoin/Monero) for even better transaction fees than credit cards for your donation. (See the main sidebar for addresses). And validate them again on liberapay too if you want to ensure I get those dollary doos.

19
6
Mr TIFF (inventingthefuture.ghost.io)
submitted 3 months ago by ticoombs@reddthat.com to c/random@reddthat.com

Spoiler about the article: A sad story, but a compelling one.

11

Via the Wayback Machine, as their site got hugged to death by Hacker News. Some lovely pictures people made on their internet-connected thermal printer.

1

While not technically YouTube drama it certainly has become drama.

Minio goes source-only right before a nice CVE comes out for it, leaving people who paid for licences high and dry (if the comments can be believed).

7
4

Note by PHRACK STAFF:

A responsible disclosure was attempted to warn South Korea that China/North Korea has hacked them. The full article, dump and release schedule was shared with South Korea before it was published:

16th of June 2025, Informed Defense Counterintelligence Command.
26th of June 2025, Anonymous response from clearbear001 (dcc.mil.kr?).
16th of July 2025, Anonymous response from operation-dl (who is this?).
17th of July 2025, Informed KISA.
17th of July 2025, Informed Ministry of Unification.
17th of July 2025, Informed LG Uplus Corp.
18th of July 2025, Informed KrCERT.
1st of August 2025, Communication then ended abruptly.
14th of August 2025, The author received an ominous message via Signal, advising him that Proton is not secure (using a burner-phone). (Noticeable: The contact knew about “notfox”, a handle only shared with the South Korean government. Why the contact???).
15th of August 2025, Proton disables the whistleblower’s email account.
16th of August 2025, Proton disables the author’s email account.
18th of August 2025, The complaint fails. The appeal fails. Proton’s response: “your account will cause further damage to our service, therefore we will keep the account suspended”.
22nd of August 2025, Phrack vigorously tries to contact Proton-legal, Andy, and others at Proton. No response.
6th of September 2025, Phrack gives Proton 48h notice: Please respond, or we will try to reach you via social media.
9th of September 2025, Phrack reaches out to Proton on social media.
10th of September 2025, Proton re-enables both email accounts.

The email account was only used to communicate with South Korea. No ToS was violated. No crime was committed.

We trust that Proton will fix the appeal process and become more transparent. Don’t give up on Proton just yet.

So far Proton has not answered our emails or taken our calls. We wish to communicate and work this out together. This is not our first rodeo.

We thank the community for their support and courage.


Very interesting story.

Considering the timelines alongside the current ongoing S. Korean fires.

4
[-] ticoombs@reddthat.com 107 points 6 months ago

Hey! Sorry for the joke, I didn't expect it to be seen by a real user!

As we are one of the very few instances with a no-email policy, there are very few ways in which we can determine whether a person signing up is a bot or a regular user.

Recently a very very specific person or group of people have been abusing Reddthat to create accounts, then ask interesting questions (let's just say that), and then proceed to delete their account (which deletes all of their posts and comments). This makes it impossible to figure out what they have done unless someone quotes the reply or reports it before they delete it.

I'm sorry you got caught up in the little bit of fun us admins have with writing little anecdotes or fun catchphrases!

You are welcome to come say hi on Reddthat any time!

[-] ticoombs@reddthat.com 13 points 7 months ago

Good news! We managed to get all of the donation money! So none of it is lost and we're back in business! 🎉🎉

It seems our host decided to come back online, or saw our messages? Still no communication from them, but now that we have completely managed to get all our money back we are good to migrate to Liberapay without any issues!

[-] ticoombs@reddthat.com 19 points 9 months ago* (last edited 9 months ago)

Looks easy: https://www.ifixit.com/Guide/Steam+Deck+SSD+Replacement/148989

Edit: Is it worth 30-60 minutes of your time, the screwdrivers, maybe the spatula, and reinstalling SteamOS onto the drive?

[-] ticoombs@reddthat.com 16 points 1 year ago

This is SSO support as the client, so you could use any backend that supports OAuth (I assume; I haven't looked at it yet).

So you could use a Forgejo instance, immediately making your git hosting instance a social platform, if you wanted.
Or use something self-hostable like Hydra.

Or you can use the social platforms that already exist, such as Google or Microsoft, allowing faster onboarding into the fediverse, while the issues that come with user creation are passed on to a bigger player who already does verification. All of these features are up to your instance to decide on.
The best part: if you don't agree with what your instance decides, you can migrate to one whose policy coincides with your values.

Hope that gives you an idea behind why this feature is warranted.

[-] ticoombs@reddthat.com 18 points 1 year ago

We enabled the Cloudflare AI bots and crawlers mode around 0:00 UTC (20/Sept).

This was because we had a huge number of AI scrapers that were attempting to scan the whole lemmyverse.

It successfully blocked them... While also blocking federation 😴

I've disabled the block. Within the next hour we should see federation traffic come through.

Sorry for the unfortunate delay in new posts!

Tiff

[-] ticoombs@reddthat.com 36 points 2 years ago* (last edited 2 years ago)
[-] ticoombs@reddthat.com 20 points 2 years ago* (last edited 2 years ago)

That's a big decision I won't make without community input as it would affect all of us.

If we purely treated it as just another instance with no history then I believe our stance on it would be to allow them, as we are an allow-first type of instance. While there are plenty of people we might not want to interact with, that doesn't mean we should immediately hit that defederate button.

When taking history into account it becomes a whole different story. One may lean towards just saying no without thought.

All of our content (Lemmy/Fediverse) is public by default (at the present time), searchable by anyone, and even if I were to block all of the robots and crawlers, it wouldn't stop anyone from crawling one of the many other sites where all of that content is shared.

A recent feature being worked on is the private/local only communities. If a new Lemmy instance was created and they only used their local only communities, would we enact the same open first policy when their communities are closed for us to use? Or would we still allow them because they can still interact, view comments, vote and generate content for our communities etc?

What if someone created instances purely for profit? They create an instance that becomes a cornerstone of the "market" and then run ads? Or made their instance subscription-only, where you have to pay per month for access?

What if there are instances right now federating with us and will use the comments and posts you make to create a shit-posting-post or to enhance their classification AI? (Obviously I would be personally annoyed, but we can't stop them)

An analogy for Threads would be a local-only fediverse instance like Mastodon, with a block on replies. It restricts federation to their users in the USA, Canada, and Japan; users cannot see when you comment/reply to their posts and will only see votes. They cannot see your posts either, and they only allow other fediverse users to follow Threads users.

With all of that in mind if we were to continue with our open policy, you would be able to follow threads users and get information from them, but any comments would stay local to the instance that comments on the post (and wouldn't make it back to threads).

While writing up to this point I was going to stay impartial... but I think the lack of two-way communication is what tips the scales towards our next instance block. It might be worthwhile for keeping up to date with people on Threads who don't understand what the fediverse is but still enabled the feature because it gives their content a "wider reach", so to speak. But in the context of Reddthat and people expressing views and opinions, having one-sided communication doesn't match what we are trying to achieve here.

Tiff

Source(s): https://help.instagram.com/169559812696339/about-threads-and-the-fediverse/

PS: As we have started the discussion I'll leave what I've said for the next week to allow everyone to reply and see what the rest of the community thinks before acting/ blocking them.

Edit1 (30/Mar): PPS: we are currently not federated with them, as no one has bothered to initiate following a Threads account.

[-] ticoombs@reddthat.com 21 points 2 years ago

It's a sad day when something like this happens. Unfortunately, with how Lemmy's All feed works, it's possible a huge amount of the initial downvotes are regular people not wanting to see the content, as downvotes are federated. This was part of my original reasoning for disabling downvotes when I started my instance. We had the gripes people are displaying here, and it probably contributed to a lack in Reddthat's growth potential.

There needs to be work done not only on flairs, which I like the idea of, but on a curated All/Frontpage (per-instance). Too many times I see people unable to find communities or new content that piques their interest. Having to "wade through" All-New to find content might contribute to the current detriment, as instead of a general niche they might want to enjoy, they are bombarded with things they dislike.

Tough problem to solve in a federated space. Hell... we can't even get every instance to update to 0.18.5 so that federated moderation actions happen. If we can't all decide on a common Lemmy instance version, I doubt we can ask our users not to use the tools at their disposal (up/down/report).

Keep on Keeping on!

Tiff - A fellow admin.

[-] ticoombs@reddthat.com 23 points 2 years ago

Don't forget & in community names and sidebars.

Constantly getting trolled by &

[-] ticoombs@reddthat.com 20 points 2 years ago

Updates hiding in the comments again!

We are now using v0.18.3!

There was extended downtime because docker wouldn't cooperate AT ALL.

The nginx proxy container would not resolve the DNS. So after rebuilding the containers twice and investigating the docker network settings, a "simple" reboot of the server fixed it!

  1. Our database on the filesystem went from 33GB to 5GB! They were not kidding about the 80% reduction!
  2. The compressed database backups went from 4GB to ~0.7GB! Even bigger space savings.
  3. The changes to backend/frontend has resulted in less downtime when performing big queries on the database so far.
  4. The "proxy" container is nginx, and its configuration uses upstream lemmy-ui & upstream lemmy. These are DNS entries which are cached for a period of time, so if a new container comes online the proxy doesn't actually find it, because it cached all the IPs that lemmy-ui resolved to at startup. (In this example it would have been only 1, so when we add more containers the proxy would never find them.) 4.1 You can read more here: http://forum.nginx.org/read.php?2,215830,215832#msg-215832
  5. The good news is that https://serverfault.com/a/593003 is the answer to the question. I'll look at implementing this over the next day(s).
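The serverfault answer boils down to making nginx re-resolve the upstream name at request time instead of once at startup. A minimal sketch of the idea, assuming Docker's embedded DNS at 127.0.0.11 and a hypothetical lemmy-ui service on port 1234 (not Reddthat's actual config):

```nginx
server {
    listen 80;

    # Docker's embedded DNS; re-check names every 30s instead of
    # holding onto the IPs resolved at startup forever.
    resolver 127.0.0.11 valid=30s;

    location / {
        # proxy_pass with a variable forces a runtime DNS lookup,
        # so newly scaled lemmy-ui containers get picked up.
        set $ui http://lemmy-ui:1234;
        proxy_pass $ui;
    }
}
```

Note the variable form resolves a single name per request rather than using an `upstream` block, so some load-balancing options are traded away for that location.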

I get notified whenever Reddthat goes down; most of the time it coincided with me banning users and removing content, so I didn't look into it much. But honestly the uptime isn't great. (Red is <95% uptime, which means we were down for 1 hour!)

Actually, it is terrible.

With the changes we've made I'll be monitoring it over the next 48 hours to confirm that we no longer have any real issues. Then I'll make a real announcement.

Thanks all for joining our little adventure!
Tiff

[-] ticoombs@reddthat.com 27 points 2 years ago

These were because of recent spam bots.

I made some changes today. We now have 4 containers for the UI (we only had 1 before) and 4 for the backend (we only had 2)

It seems that when you delete a user, and you tell lemmy to also remove the content (the spam) it tells the database to mark all of the content as deleted.

Kbin.social had about 30 users who posted 20-30 posts each, which I told Lemmy to delete.
This only marks it as deleted for Reddthat users until the mods mark the post as deleted and it federates out.

The problem

The UPDATE in the database (marking the spam content as deleted) takes a while and the backend waits(?) for the database to finish.

Even though the backend has 20 different connections to the database it uses 1 connection for the UPDATE, and then waits/gets stuck.

This is what is causing the outages unfortunately and it's really pissing me off to be honest. I can't remove content / action reports without someone seeing an error.

I don't see any solutions on the 0.18.3 release notes that would solve this.

Temp Solution

So to combat this a little I've increased our backend processes from 2 to 4 and our front-end from 1 to 4.

My idea is that if 1 of the backend processes gets "locked" up while performing tasks, the other 3 processes should take care of it.

This unfortunately is an assumption, because if the "removal" performs an UPDATE on the database and the /other/ backend processes are aware of this and wait as well... this would count as "locking" up the database, and it won't matter how many processes I scale out to; the applications will lock up and cause us downtime.

Next Steps

  • Upgrade to 0.18.3, as it apparently has some database fixes.
  • Look at the Lemmy API and see if there is a way I can push certain API commands (user removal) off to its own container.
  • Fix up / figure out how to make the nginx proxy container know when a "backend container" is down, and try the other ones instead.

Note: we are kinda doing point #3 already; it does a round-robin (tries each sequentially). But from what I've seen in the logs it can't differentiate between one that is down and one that is up. (From the nginx documentation, that feature is a paid one.)
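For what it's worth, the paid feature is the *active* health checks (`health_check`); passive failure detection is available in open-source nginx via `max_fails`/`fail_timeout`. A sketch with hypothetical container names, not Reddthat's actual config:

```nginx
upstream lemmy_backend {
    # Passive checks: after 3 failed attempts, skip this server for 30s.
    server lemmy-1:8536 max_fails=3 fail_timeout=30s;
    server lemmy-2:8536 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://lemmy_backend;
        # Retry the next upstream on connection errors, timeouts,
        # or 502/503 responses from a locked-up backend.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

This only notices a backend is down when a request to it fails, but combined with `proxy_next_upstream` the client usually never sees the error.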

Cheers, Tiff

