this post was submitted on 23 Jun 2023
29 points (91.4% liked)
Asklemmy
you are viewing a single comment's thread
Because there is no karma system on Lemmy (thank goodness, I'm against karma), you can easily create thousands of bots that will upvote your post and bring it to the front page.
The solution is not some custom anti-abuse system which can be gamed (stuff like "you can't vote because of the age of your account", ...). IMHO, the solution is bot detection. Since everything on an instance is public, somebody at some point will start scraping instances to detect bot behavior and inform instance owners. It will come with maturity.
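A crude version of that scraping idea: with an instance's public vote data in hand, coordinated bot rings show up as accounts whose upvote sets are nearly identical. A minimal sketch (the input format, thresholds, and function name are assumptions for illustration, not an actual Lemmy API shape):

```python
from collections import defaultdict

def flag_coordinated_voters(votes, min_overlap=0.9, min_votes=5):
    """Flag accounts whose upvote sets are nearly identical.

    votes: iterable of (account, post_id) upvote pairs, assumed to have
    been scraped from public instance data. Accounts with fewer than
    min_votes votes are ignored; pairs whose vote sets have a Jaccard
    similarity >= min_overlap are flagged as a possible bot ring.
    """
    sets = defaultdict(set)
    for account, post in votes:
        sets[account].add(post)
    accounts = [a for a, s in sets.items() if len(s) >= min_votes]
    flagged = set()
    for i, a in enumerate(accounts):
        for b in accounts[i + 1:]:
            inter = len(sets[a] & sets[b])
            union = len(sets[a] | sets[b])
            if union and inter / union >= min_overlap:
                flagged.update({a, b})
    return flagged
```

The output would go to the instance owner for review rather than triggering automatic bans, which keeps a human in the loop.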
Not actually accurate: there IS a karma system. I can look up your overall post karma, both positive and negative. I can look up your comment karma, separated by positive and negative.
It's just not exposed through the Lemmy UI currently. I will note, kbin does show who is upvoting and downvoting posts as well.
Congratulations! You just false-positively marked 10% of humans as bots.
This is a horrible idea.
If humans have bot-like behavior, it's okay to mark them as bots. If a human is only posting to promote products or astroturf, who cares if they're misclassified? It doesn't add anything to the discourse. IMHO, that's good riddance.
And in my solution, the instance owner takes action in the end, so it's not like there is no human recourse.
The bot-like behaviour usually isn't what they post but what they look like.
If it targets their account actions and not what they look like, that's fine. But that's not how most anti-bot systems work: mostly they discriminate based on how you look before you do anything.