submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]

Reddit currently has a feature titled:

“Someone is considering suicide or serious self-harm”

which allows users to flag posts or comments when they are genuinely concerned about someone’s mental health and safety.

When such a report is submitted, Reddit’s system sends an automated private message to the reported user containing mental health support resources, such as contact information for crisis helplines (e.g., the Suicide & Crisis Lifeline, text and chat services, etc.).

In some cases, subreddit moderators are also alerted, although Reddit does not provide a consistent framework for moderator intervention.
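
To make that flow concrete, here is a minimal sketch of what such report handling might look like. It is purely illustrative: the names, the resources text, and the moderator-alert step are assumptions about how a handler like this could be structured, not Reddit's actual implementation.

```python
# Illustrative sketch of the flow described above: a concern report triggers
# an automated DM with crisis resources and, optionally, a moderator alert.
# Names and message text are assumptions, not Reddit's actual code.
from dataclasses import dataclass

CRISIS_RESOURCES = (
    "You're not alone. In the US you can call or text 988 "
    "(Suicide & Crisis Lifeline); https://findahelpline.com lists "
    "services in other countries."
)

@dataclass
class ConcernReport:
    reported_user: str
    reporter: str
    post_id: int

def send_dm(user: str, body: str) -> None:
    """Stand-in for the platform's private-message call."""
    print(f"DM to {user}: {body}")

def notify_mods(post_id: int) -> None:
    """Stand-in for an optional moderator notification."""
    print(f"Moderators notified about post {post_id}")

def handle_concern_report(report: ConcernReport, alert_mods: bool = False) -> None:
    """Send the templated resources message; optionally alert moderators."""
    send_dm(report.reported_user, CRISIS_RESOURCES)
    if alert_mods:
        notify_mods(report.post_id)

handle_concern_report(ConcernReport("reported_user", "concerned_user", 42), alert_mods=True)
```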


The goal of the feature is to offer timely support to users in distress and reduce the likelihood of harm.

However, there have been valid concerns about misuse—such as false reporting to harass users, or a lack of moderation tools or guidance for handling these sensitive situations.


Given Lemmy's decentralized, federated structure and commitment to privacy and free expression, would implementing a similar self-harm concern feature be feasible or desirable on Lemmy?


Some specific questions for the community:

Would this feature be beneficial for Lemmy communities/instances, particularly those dealing with sensitive or personal topics (e.g., mental health, LGBTQ+ support, addiction)?

How could the feature be designed to minimize misuse or trolling, while still reaching people who genuinely need help?

Should moderation teams be involved in these reports? If so, how should that process be managed given the decentralized nature of Lemmy instances?

Could this be opt-in at the instance or community level to preserve autonomy?

Are there existing free, decentralized, or open-source tools/services Lemmy could potentially integrate for providing support resources?


Looking forward to your thoughts—especially from developers, mods, and mental health advocates on the platform.


https://support.reddithelp.com/hc/en-us/articles/360043513931-What-do-I-do-if-someone-talks-about-seriously-hurting-themselves-or-is-considering-suicide

top 13 comments
[-] [email protected] 2 points 10 hours ago

From a software feature perspective, I'm not sure this is something the platform itself needs to support.

But it seems like an opportunity for a separate bot, triggered either by keyword or sentiment analysis and/or by other users reporting to the bot.
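
For example, a rough sketch of such a bot in Python (the instance URL, endpoint paths, and response field names are placeholders I'm assuming, so they'd need to be checked against the actual Lemmy API; a sentiment model could replace the keyword check):

```python
# Rough sketch of a stand-alone support-resources bot. The instance URL,
# endpoint paths, and response field names are placeholders/assumptions
# and would need to be adapted to the real Lemmy API.
import re
import requests

INSTANCE = "https://example-instance.tld"  # hypothetical instance
KEYWORDS = re.compile(r"\b(suicid\w*|self[- ]harm)\b", re.IGNORECASE)
RESOURCES = (
    "If you're struggling, you can call or text 988 (US Suicide & Crisis "
    "Lifeline) or find local services at https://findahelpline.com."
)

def fetch_recent_comments(token: str) -> list[dict]:
    """Pull recent comments from the instance (illustrative endpoint)."""
    resp = requests.get(
        f"{INSTANCE}/api/v3/comment/list",
        params={"sort": "New", "limit": 50},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("comments", [])

def maybe_offer_resources(comment: dict, token: str) -> None:
    """DM resources to the author if the comment matches the keyword check."""
    if not KEYWORDS.search(comment.get("content", "")):
        return
    requests.post(
        f"{INSTANCE}/api/v3/private_message",
        json={"recipient_id": comment["creator_id"], "content": RESOURCES},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )

if __name__ == "__main__":
    auth_token = "TOKEN"  # placeholder; a real bot would authenticate first
    for c in fetch_recent_comments(auth_token):
        maybe_offer_resources(c, auth_token)
```

The report-driven variant would be the same idea, except the bot only acts when another user mentions or messages it about a post instead of scanning everything itself.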

[-] [email protected] 2 points 10 hours ago

No way. If anything, that kind of thing just discourages people from expressing themselves honestly in a way that might help them.

Real human connection and compassion might make a difference. A cookie-cutter template message is (genuinely) a "we don't want you to talk about this here" response.

We aren't beholden to advertisers, we don't need this

[-] [email protected] 7 points 17 hours ago

In my experience as a subreddit mod, that feature was nearly exclusively used for harassment, usually transphobic harassment. In the one or two cases where there was a report for someone who actually had suicidal or self-harm ideation, there was still zilch I could have done; I would just approve the post so the user could get support and speak to others (the subreddit was a support group for a sensitive subject, so it wouldn't be out of place for a post to say that the stress of certain things was making them suicidal).

[-] [email protected] 7 points 20 hours ago

I'm inclined to believe not a single actually suicidal person received one of these messages.

You can't automate concern for fellow humans.

[-] [email protected] 11 points 1 day ago

The one on reddit is used almost exclusively for harassment. Don't be more like reddit.

[-] [email protected] 20 points 1 day ago

So people can send it to others to harass them? It doesn't work on Reddit, so why implement it here? Talking about suicide could actually increase the likelihood of it happening, so beyond the fact that it will be used to harass people, it might make things worse.

[-] [email protected] 27 points 1 day ago

The only time I saw one of these on Reddit was when some asshole sent me one after a heated thread.

[-] [email protected] 4 points 1 day ago

I got them fairly regularly before I caught my ban, and I never even argue; I say my piece and gtfo. I don't respond to people who respond to my comments… it serves no real purpose.

[-] [email protected] 26 points 1 day ago

In my experience this feature was abused by malicious parties.

[-] [email protected] 13 points 1 day ago

Pretty often. I remember when I first came out and was exploring my gender identity, getting active on the Trans subs, I got hit by at least a couple. It felt really shitty, and it wasn't an uncommon issue, judging by the complaints I saw surrounding it.

[-] [email protected] 5 points 1 day ago

I was critical of the rhetoric of a volunteer sniper in the Ukrainian war. His bozo flagged me as « having a death wish ». It's one of the rare moments I've felt really bad/angry/scared on the internet.

This was so shitty, abusing a system just because your friend/hero said some Nazi shit? What the fuck?

[-] [email protected] 18 points 1 day ago* (last edited 1 day ago)

The best help you can give someone in distress is hearing them, whilst you redirect them to a place that can help with empathy and compassion.

Any form of automated message comes across as the exact opposite of empathy and compassion.

In addition, speaking as the admin of a trans and queer community, I don't have any special tools or abilities to help people. Sending the report to me doesn't let me help them, because they're almost certainly not in my country, and I don't have any special access that enables me to contact them or reach out to them. The tool I do have, is the instance itself that we host, that allows people to connect with their community and their peers, that allows them to struggle, and that shuts down anyone who would try and add to the hurt of someone on the edge.

Which is to say, I don't think a reddit style feature has a place here. It will let people think they're helping, without actually doing so, as well as providing a new vector for abuse (though that would be less of an issue than on reddit). In theory, an automated list of resources that could be called on could be useful, but again, if someone is struggling, they need to feel heard, and automated replies can come across as dispassionate and uncaring.

[-] [email protected] 15 points 1 day ago

The existing reporting framework already works for this. Report those so that they can be removed ASAP.

Mods/admins should not be expected to be mental health professionals, and internet volunteers shouldn't have to shoulder that burden.
