Make messages collected by the bots reportable #6552
-
I am glad someone brought this up. After the new system was in place I thought, "Oh, so Discord Inc./Microsoft wants to keep its users LESS SAFE, I see." This is a huge flaw. We use bots to auto-delete harmful messages and content; we can ban the user from the server, pre-ban them from other servers we are on, and maybe even report them to the mod group we work with so they can pre-ban too, but that's it. These attackers are free to roam to other servers and victimize and terrorize other users. Well done, Discord Inc./Microsoft. The system proposed here seems to make sense. They need to do something. Even adding a 'Report User' option to every user would help, with the report only taking the context it came from: look at the user's recent DMs if the report came from a DM, or at their recent messages in a specific server if 'Report User' was used in that server. Do something, Discord. By doing nothing you are acknowledging that you have made your platform LESS SAFE by allowing attackers to go server to server, victimizing the same users you should be protecting.
-
Discord's reporting system has a huge flaw: it doesn't account for how immediate moderation action needs to be. For example, if malicious users join your game community server, which has lots of minors in it, and start spamming explicit or violent content that Discord's systems unfortunately still cannot filter, you as a moderator have to ban them and delete those messages as soon as possible to ensure the safety of your members. That means skipping reporting, because it takes a lot of precious time that you don't have.
This, in turn, makes those explicit messages unreportable because they're deleted, which means those malicious users' actions effectively slide without consequences. After contacting Discord several times I'm fairly certain that Discord is unwilling to address this issue, so I'm suggesting a compromise here.
Many moderation bots nowadays are equipped with temporary message archives that record the last n days of messages, so when a user deletes a single message, or messages are deleted in bulk, they can always be recovered, logged, and used as evidence in the moderation tracking system. Even when bots don't specifically collect messages, they can still have bulk-deleted messages in their cache and log them. Unfortunately, there is no way to report this evidence to Discord's Trust and Safety, and even if there were, it couldn't be trusted: once a message is deleted, Discord has no way to verify that it actually existed and was not maliciously fabricated to false-flag a user.
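For anyone unfamiliar with how bots do this, here is a minimal sketch of such an archive using discord.py. The seven-day retention window and the in-memory list are assumptions for illustration only; real bots typically persist this to a database under their own retention policies.

```python
# Minimal sketch of a deleted-message archive, assuming discord.py 2.x.
# ARCHIVE_DAYS and the in-memory list are illustrative, not how any
# particular moderation bot actually stores its logs.
import datetime
import discord

ARCHIVE_DAYS = 7

intents = discord.Intents.default()
intents.message_content = True  # privileged intent, needed to read message text

client = discord.Client(intents=intents)
archive: list[dict] = []  # newest entries appended at the end


def _archive_entry(message: discord.Message) -> dict:
    return {
        "message_id": message.id,
        "author_id": message.author.id,
        "channel_id": message.channel.id,
        "content": message.content,
        "attachments": [a.url for a in message.attachments],
        "deleted_at": datetime.datetime.now(datetime.timezone.utc),
    }


@client.event
async def on_message_delete(message: discord.Message):
    # Fires for single deletions of messages still in the bot's cache.
    archive.append(_archive_entry(message))


@client.event
async def on_bulk_message_delete(messages: list[discord.Message]):
    # Fires when cached messages are purged in bulk.
    archive.extend(_archive_entry(m) for m in messages)


def prune_archive():
    # Drop entries older than the retention window.
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=ARCHIVE_DAYS)
    archive[:] = [e for e in archive if e["deleted_at"] >= cutoff]

# client.run(BOT_TOKEN)  # token omitted in this sketch
```

The point is only that the raw material for a report already exists in these bots; what's missing is any trusted way to hand it to Discord.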
I propose Discord create a special intent for moderation bots through which the bots would receive a cryptographic signature for every message. These signatures could only be created by Discord, but could be verified by anyone using Discord's public key (including Discord's own Trust and Safety team, if it still exists at this point).
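To make the signature idea concrete, here is a rough sketch of how signing and verification could work with Ed25519 (via PyNaCl). The canonical payload format and the key handling are invented for this example; no such intent or signature exists in Discord's API today.

```python
# Hypothetical scheme: Discord signs a canonical form of each message with
# Ed25519, and anyone holding the published public key can verify it later.
import json
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError


def canonical_payload(message_id: int, author_id: int, channel_id: int, content: str) -> bytes:
    # A stable serialization both sides would have to agree on.
    return json.dumps(
        {"id": message_id, "author": author_id, "channel": channel_id, "content": content},
        sort_keys=True,
    ).encode()


# --- On Discord's side (hypothetical): sign the message when it is created ---
discord_signing_key = SigningKey.generate()           # Discord's private key
discord_public_key = discord_signing_key.verify_key   # published for verification

payload = canonical_payload(1234, 5678, 91011, "offending message text")
signature = discord_signing_key.sign(payload).signature  # 64-byte detached signature

# --- On anyone else's side: verify an archived message against the public key ---
verify_key = VerifyKey(discord_public_key.encode())
try:
    verify_key.verify(payload, signature)
    print("signature valid: the message existed as archived")
except BadSignatureError:
    print("signature invalid: this archive entry cannot be trusted")
```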
Those trusted bots could then attach special metadata to their log messages or messages referencing archives, which would make it possible for the Discord client to alter the behaviour of the reporting action and prompt the moderator to report the log entry or archive. When reporting a message containing a log or archive, the Discord client would use the metadata from the message and display that you are reporting a specific user rather than the bot. It would allow the moderator to select a category and file the report against that user, very similar to AutoMod alerts.
The metadata should contain the message objects being reported, along with their signatures, and the IDs of the users being reported.
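As a purely hypothetical illustration, such metadata might look something like this; every field name here is made up for the example, and nothing like it exists in the Discord API today.

```python
# Hypothetical reportable-archive metadata a moderation bot could attach to a
# log message. Field names and values are invented for illustration only.
reportable_metadata = {
    "reported_user_ids": [111111111111111111],   # users the report targets
    "messages": [
        {
            "id": 222222222222222222,
            "channel_id": 333333333333333333,
            "author_id": 111111111111111111,
            "content": "offending message text",
            "attachments": ["https://cdn.discordapp.com/attachments/<placeholder>"],
            "signature": "<Ed25519 signature issued by Discord, base64-encoded>",
        }
    ],
}
```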
As for CDN content, Discord already seems to preserve attachments for some time after deletion. Since moderators would tend to report logs or archives soon after taking action, this should not be a huge concern. Alternatively, if Discord notices a log or archive posted by a bot with reportable metadata, it could automatically lock out CDN content deletion for a longer period.
Why not make a reporting endpoint for bots? A reporting endpoint could be abused if a bot were hacked, letting an attacker create false reports on behalf of other users. By creating a flow where a user manually reports the archive through a trusted environment, this kind of attack is avoided.
This system would make post-factum reporting possible, as well as reporting for actions performed through moderation bots (which some servers require for accountability, both to the users, who receive a DM with the ban reason, and to administrators, who can review moderation actions).
I believe it's a good compromise if Discord is unwilling to address the core issue, and I hope the most popular moderation bots would be on board with it.
Full disclosure: I am not a bot developer, so perhaps I'm being way too naive here. If actual bot developers see this, I encourage you to weigh in with any concerns and suggest alternatives. It would be ideal, of course, if bots didn't have to make up for Discord's lacking safety tooling, but they already do (with logging, automodding, and infraction tracking), so I think this would be one more thing to improve through automation.