this post was submitted on 25 Apr 2025

More info: https://lemm.ee/post/62300637

This instance's nodeinfo (https://awful.systems/nodeinfo/2.0.json) is reporting 351810 local posts; that's posts on this server, not federated ones.

[–] [email protected] 4 points 3 days ago

I've found a root cause and issued a very temporary fix. here's what's up:

nodeinfo and the sidebar statistics are both pulled from a database table named site_aggregates. some of the columns in that table, like the ones for active users per time period, are calculated by the Lemmy backend on a schedule. the posts and comments columns are calculated live, but not by the backend; those columns are driven purely by database triggers calling stored procedures when the post and comment tables update. this is an awful pattern.
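
for reference, this is roughly the kind of query that backs those numbers -- a sketch only, since the exact columns in site_aggregates vary a bit between lemmy versions:

-- site_aggregates holds one row of precomputed statistics per site;
-- on the versions I'm describing there's effectively one row for the local site
SELECT
    posts,
    comments,
    users,
    users_active_month
FROM
    site_aggregates;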

anyway, to illustrate the bug in lemmy, the database migration that established the site_aggregates table correctly initializes the posts column like so:

(SELECT coalesce(count(*), 0) FROM post WHERE local = TRUE) AS posts

but that's not what keeps the column up to date. on inserts into the post table, the database calls the site_aggregates_post_insert() stored procedure, which has the following body -- see if you can spot the mistake:

 CREATE OR REPLACE FUNCTION public.site_aggregates_post_insert()
  RETURNS trigger
  LANGUAGE plpgsql
 AS $function$
 BEGIN
     UPDATE
         site_aggregates sa
     SET
         posts = posts + (
             SELECT 
                 count(*)
             FROM
                 new_post)
     FROM
         site s
     WHERE 
         sa.site_id = s.id;
     RETURN NULL;
 END
 $function$

(and for completeness, this is called by this database trigger:)

CREATE OR REPLACE TRIGGER site_aggregates_post_insert
    AFTER INSERT ON post REFERENCING NEW TABLE AS new_post
    FOR EACH STATEMENT
    EXECUTE PROCEDURE site_aggregates_post_insert ();

did you spot the mistake? no shame if you didn't, stored procedures can be hard to follow. here it is: the statement that initializes the posts column has a WHERE local = TRUE clause that correctly filters non-local posts out of the statistics it pulls. the stored procedure doesn't have that; there's no mechanism in place that filters non-local posts out of the count, so our database was incrementing the count every time anything was inserted into the post table, including posts discovered via federation.
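
(for what it's worth, the obvious patch is to apply that same filter inside the procedure's subquery. this is just a sketch of what that could look like, not what upstream actually shipped:)

CREATE OR REPLACE FUNCTION public.site_aggregates_post_insert()
 RETURNS trigger
 LANGUAGE plpgsql
AS $function$
BEGIN
    UPDATE
        site_aggregates sa
    SET
        posts = posts + (
            SELECT
                count(*)
            FROM
                new_post
            WHERE
                -- only count posts that originated on this instance
                local = TRUE)
    FROM
        site s
    WHERE
        sa.site_id = s.id;
    RETURN NULL;
END
$function$;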

I have temporarily corrected our posts column, and our nodeinfo along with it, by setting the value of that column to the value of the initialization statement above. the stored procedure and trigger in our database are still incorrect; I will need to carefully fix them in a way that won't break future migrations when we upgrade Lemmy.
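
(concretely, that one-off correction is along these lines -- a sketch rather than the exact statement I ran, reusing the procedure's own join against site:)

UPDATE
    site_aggregates sa
SET
    posts = (
        SELECT
            coalesce(count(*), 0)
        FROM
            post
        WHERE
            local = TRUE)
FROM
    site s
WHERE
    sa.site_id = s.id;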

as far as I can tell from trying to piece together the SQL across a year of migrations (another reason why you don't use stored procedures if you can help it), this bug was never fixed. a migration dated 2024-02-24 dropped all of the procedures and triggers that used to update site_aggregates. I don't know what mechanism replaced them, and I won't find out until I evaluate the newest "stable" version of lemmy for suitability to be deployed into production.

someone should probably inform db0 that nodeinfo statistics for lemmy instances running anything before the commit with that migration are incorrect; this likely affects small instances much more than large ones. also tell him the following:

  • I'm still suspicious as fuck
  • I'll fuckin do it again
[–] [email protected] 4 points 4 days ago

thank you for following up on that! I’m surprised it’s returning an invalid value — we’re not botting (to my knowledge at least, I’ll check the DB later and make sure nobody’s doing anything weird) and I haven’t written any code that touches nodeinfo directly. to be honest, if I were to modify our stats in any way I’d do it to make our instance look smaller, as larger instances tend to attract more bad actors.

I see that @[email protected] tried to initiate contact in that thread, but unfortunately I’m unable to reply directly because posts between our instances don’t seem to be federating — this could be due to a federation queue delay, or possibly an automated quarantine due to us being on the suspicious instances list.

my hunch is that our instance may be misreporting its nodeinfo due to a bug in our (now rather old) version of lemmy. I’ve been meaning to upgrade us for a while, but there is a bit of outstanding infrastructural work I’d like to do as part of that. since it seems to be impacting the health of our federation, I’ll prioritize an upgrade to the newest stable version.