Hi guys, I’m new to Lemmy (using lemmy.fmhy.ml). I’m curious if there is a way to automod (to prevent spam, swearing, etc., similar to Reddit’s AutoMod).
As others have said, the closest thing to Reddit’s AutoMod is the word-filtering regex, which is configured at the instance level by your instance’s admins.
My team and I are planning to build an AutoMod bot of our own for our modded Lemmy instance, but since we don’t consider it a priority it will come in due time; right now we are all busy working on other projects (all Lemmy related).
There’s a slur-filter regex where you can ban words; it’s available to admins via the admin page.
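For illustration, here’s a rough Python sketch of what that kind of filter does. The pattern and the placeholder text below are made-up examples, not Lemmy’s actual defaults; the real filter is just a regex the admin supplies in the instance settings.

```python
import re

# Hypothetical banned-word pattern -- an instance admin would supply their own.
SLUR_FILTER_REGEX = re.compile(r"badword1|badword2", re.IGNORECASE)

def filter_text(text: str) -> str:
    # Replace any match with a placeholder before the content is stored.
    return SLUR_FILTER_REGEX.sub("*removed*", text)

print(filter_text("this contains badword1 somewhere"))
# -> "this contains *removed* somewhere"
```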
If 4chan sets one up, you can be sure they’ll ban a random thing per week, like “no ‘the’ this week, that is a slur”.
Are there specific words you’re trying to prevent? You may DM me about them :)
Not particularly.
Here is my current point of view (I may be extremely wrong):
On Reddit, say someone posts spam and/or racism, sexism, etc. Usually subreddit automods would catch and ban offenders. Only in extreme cases do Reddit admins automatically or manually block unwanted content (with the worst case usually being that a poorly moderated subreddit gets shut down).
On a Lemmy instance, by contrast, would the admin of the instance be responsible for setting up a global automod to attempt to curb spam and hate speech, with smaller community mods (i.e. the equivalent of subreddit mods, not sure of the proper term) supplementing that?
Just curious how moderation dynamics work on Lemmy in general.
Sorry if I’m being a bit broad.
Yes, admins set a global auto-ban for words for the instance; admins can remove, lock, and purge content across the instance, and mods can remove/lock things within their community. Seeing as the 12th is approaching across the globe, we’ll add more filters for slurs etc. just in case trolls or spam bots attack :)
Gotcha thanks for the clarification 😊
Are there any slurs banned right now?
Currently no; I haven’t gotten the chance to do so yet, but I’ll definitely get it done soon.
I would be careful with using regex for content moderation (see the Scunthorpe problem).
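For example, a naive banned-word regex will flag innocent words that happen to contain the banned string; the snippet below demonstrates the pitfall with the town name the problem is named after:

```python
import re

# A naive filter that bans a substring anywhere in the text.
naive = re.compile(r"cunt", re.IGNORECASE)
print(bool(naive.search("Scunthorpe United won today")))  # True -- false positive!

# Word boundaries avoid flagging innocent words that merely contain a
# banned substring, at the cost of missing deliberate obfuscations.
bounded = re.compile(r"\bcunt\b", re.IGNORECASE)
print(bool(bounded.search("Scunthorpe United won today")))  # False
```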
There are open-source machine learning models that could be used to detect toxic content. See
I don’t know how much compute power you would need to process the content of a Lemmy instance with one of these, but it would be more reliable than regexes.
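As one illustration (picking Detoxify purely as a well-known open-source example, not necessarily what the comment above was linking to), here’s a minimal sketch, assuming the `detoxify` Python package’s `Detoxify("original").predict()` API, which returns a dict of per-label probability scores:

```python
# pip install detoxify
from detoxify import Detoxify

# Load the pretrained model once at startup; inference on short comments
# is feasible on CPU, though a GPU helps at instance scale.
model = Detoxify("original")

def is_toxic(comment: str, threshold: float = 0.8) -> bool:
    scores = model.predict(comment)
    # Only the overall "toxicity" score is used here; the dict also
    # contains finer-grained labels.
    return scores["toxicity"] > threshold

print(is_toxic("You are all wonderful people"))  # expected: False
```

The threshold is a tuning knob: set it high and the bot only auto-removes obvious cases, leaving borderline content for human mods to review.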