The devs state a clear intention here: making this change difficult is meant to at least partially prevent the migration of some communities they don’t want to support and/or give a platform to.
I’m happy it’s becoming harder for neonazis to find a home online; however, I’m not happy that this makes Lemmy English-centric, and I’m not happy that honest discussion of some topics (including thoughtful criticism) will become harder.
Related example: on another message board a few weeks back, I couldn’t post a message containing my criticism of “bitcoin”, because bitcoin was part of the slur filter used to keep out the crypto-capitalist clique… I understand and appreciate why it was put in place, but as a user I felt really powerless that a machine with no understanding of the context in which I was using this word decided I had no right to post it. I appreciate strong moderation, but I don’t trust machines to police/judge our activities.
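To be concrete about what bothers me, a naive keyword filter works roughly like this (a hypothetical sketch in Rust, not Lemmy’s actual implementation or the other board’s): it only sees whether a blocked term appears, never the intent around it.

```rust
// Hypothetical sketch of a context-blind keyword filter (not any real site's code):
// it rejects any post containing a blocked term, whatever the author's intent.

fn is_blocked(blocked_terms: &[&str], post: &str) -> bool {
    let post = post.to_lowercase();
    blocked_terms
        .iter()
        .any(|term| post.contains(term.to_lowercase().as_str()))
}

fn main() {
    let blocked = ["bitcoin"];

    // A critical post is rejected exactly like a promotional one:
    // the filter matches the word, never the context around it.
    let criticism = "Here is why I think bitcoin is a harmful technology...";
    let shilling = "Buy bitcoin now, to the moon!";

    assert!(is_blocked(&blocked, criticism));
    assert!(is_blocked(&blocked, shilling));
    println!("Both posts blocked, regardless of intent.");
}
```

That’s the whole problem in a few lines: the machine can’t tell criticism from promotion, so both get silenced.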
That’s also the case for me, in case that was not clear :)
I don’t think it’s that easy, because of context. Should all usage of the n***** word by black people be prevented? Should all usage of the w****/b**** words by queer/femme folks in a sex-positive context be prevented? etc… I agree with you that using these words is inappropriate most of the time and that we can find better words, but white male technologists have a long history of dictating how software can be used (and who it’s for), and I believe there’s something wrong with that power dynamic in and of itself. It’s not uncommon for measures of control introduced “to protect the oppressed” to turn into serious repression of the people.
Still, as I said, I like this filter in practice, and it’s part of the reason I’m here (no-fascism policy). As a militant antifascist AFK, I need to reflect on this and ponder whether automatic censorship is OK in the name of antifascism: it seems pretty effective so far, if only as a psychological barrier. And I strongly believe we should moderate speech and openly explain why we consider certain words/concepts to be barriers, but on an ethical level I’m really bothered by content being rejected without any human involvement. Isn’t that precisely what we critique in YouTube/Facebook/etc.? I’m not exactly placing those examples on the same level as a slur filter though ;)