Java Chat Reporting Concerns

(Page managed by Corman#0001 on Discord, or CormanVG on YouTube)
(Concerns collected by Gildfesh#0213 (71511206883495936) and written into the original document by guro#0666 (71195665727750144) on the #saveminecraft Discord.)

Technical Considerations

Trust in Reporting Process

The most fundamental concern regarding the chat reporting system is that of trust. Given the decentralised nature of Minecraft's multiplayer ecosystem, it is nearly impossible to trust any part of the reporting process to make honest reports of the chat. Many technical minds in the community have attempted to devise a sufficiently trustworthy system, but no attempts have succeeded. Fault tolerance in the system is both necessary and a vector for exploitation, which drives concern amongst the technical community.
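To make the shape of the problem concrete, here is a minimal, self-contained Java sketch of the only step in the pipeline that can actually be verified cryptographically: checking that some bytes were signed by some key. Everything in it (key type, key size, class names) is an assumption for illustration and is not Mojang's actual payload format. The point is that a valid signature says nothing about whether the player freely typed the text, whether the server forwarded it faithfully, or whether the attached context is honest.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Simplified sketch: a client signs a chat message with its private key,
// and a reviewer verifies it with the matching public key. Only this step
// is cryptographically checkable; the surrounding pipeline must be trusted.
public class ChatSignatureSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical stand-in for a Mojang-issued profile key pair.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair profileKey = gen.generateKeyPair();

        byte[] message = "hello world".getBytes(StandardCharsets.UTF_8);

        // Client side: sign the bytes that were (supposedly) typed.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(profileKey.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Reviewer side: a valid signature only proves these bytes were signed
        // by this key -- not that the player typed them of their own accord.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(profileKey.getPublic());
        verifier.update(message);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```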

Ghostwriter Exploit

The "ghostwriter" exploit (servers being able to have clients sign arbitrary text they never typed, MC-253888) poses serious concerns. The issue was reopened, which is a good first step, but given the initial response, we have concerns about any potential solutions.

Keylogging Vulnerability

As a result of the design of Chat Preview, it is possible for servers to "keylog" everything you type into your chat box. It is foreseeable that someone could paste sensitive information into their chat box by accident (e.g. passwords, addresses, AWS keys) and have that information forwarded to the server, which is then free to keep it forever, even if the end user never actually sends the message. We believe this to be a grave design failure.
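As an illustration of why this worries us, consider a hypothetical server-side handler for Chat Preview requests. The class name and method signature below are invented for this sketch and are not a real plugin API; the behaviour it demonstrates, receiving and persisting every draft the client previews, is the concern.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

// Hypothetical server-side handler: every Chat Preview request carries the
// player's current draft text, so a server can persist it even if the
// message is never sent. Names here are illustrative only.
public class PreviewKeylogger {
    private final Path logFile = Path.of("draft-log.txt");

    // Called (hypothetically) each time the client asks for a preview of
    // whatever is currently typed in the chat box.
    public void onChatPreviewRequest(String playerName, String draftText) {
        String line = Instant.now() + " " + playerName + ": " + draftText + System.lineSeparator();
        try {
            Files.writeString(logFile, line, StandardCharsets.UTF_8,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```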

Chat Preview

The intended uses of Chat Preview are not apparent. It seems to fail in any use case that isn't purely cosmetic formatting, and it contradicts the idea of message-origin trust by allowing remote parties to "put words in people's mouths."

If Chat Preview were to be used as a swearing filter on family-friendly servers, a user could send a message faster than the 200 ms signing delay (most touch typists will be more than capable of this) and effectively bypass the filter, showing their intended message alongside the modified one.

Potential Solution: Consider handling it on the client side. Allow servers to inform clients of formatting parameters, and include interfaces that the client can interact with to format their messages. This would significantly reduce, if not entirely eliminate, the possibility of a malicious server rewriting your messages.
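A rough sketch of what we mean, with entirely hypothetical class and method names: the server declares formatting rules up front, the client applies them locally to the text the player actually typed, and no draft ever has to leave the machine.

```java
import java.util.List;
import java.util.regex.Pattern;

// Sketch of the client-side alternative: the server sends declarative
// formatting rules, and the client applies them locally before signing,
// so no draft text ever leaves the machine. All names are hypothetical.
public class ClientSideFormatter {

    /** A formatting rule the server is allowed to declare: pattern + replacement. */
    public record FormatRule(Pattern pattern, String replacement) {}

    private final List<FormatRule> serverRules;

    public ClientSideFormatter(List<FormatRule> serverRules) {
        this.serverRules = serverRules;
    }

    /** Applied on the client, to the text the player actually typed. */
    public String format(String typedMessage) {
        String result = typedMessage;
        for (FormatRule rule : serverRules) {
            result = rule.pattern().matcher(result).replaceAll(rule.replacement());
        }
        return result;
    }

    public static void main(String[] args) {
        // Example: the server declares a rule that renders :heart: as a symbol.
        ClientSideFormatter formatter = new ClientSideFormatter(
                List.of(new FormatRule(Pattern.compile(":heart:"), "\u2764")));
        // The player sees -- and signs -- exactly this locally formatted text.
        System.out.println(formatter.format("gg :heart:"));
    }
}
```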

Execution

Vague Definitions

Several terms used in the proposed reporting categories and in the FAQ remain undefined. Terms such as ‘underage’ and ‘illegal’ vary depending on the country: the legal drinking age varies hugely, and with regard to ‘illegal drugs’, countries such as Germany and the Netherlands have decriminalised substances that remain prohibited elsewhere, while in other countries even drinking alcohol is illegal at any age. There have been some attempts to define hate speech, but these are far too non-specific to be useful, and there is no attempt to define any slurs; what is a slur for a homosexual in the USA is, for example, a foodstuff in the UK. These are just a few examples of vague definitions within the FAQ.

Suggestion: All terms in the reportable categories list need to be specifically defined. How are players to avoid reportable behavior if they do not know what it is?

Unclear Penalties

Moderators will be imposing penalties on players who have been reported, so those moderators must be provided with a list of behaviors and the proposed actions for dealing with them. We believe that this list of behaviors and their corresponding penalties should at least be published and, ideally, discussed with players beforehand. That way, players will know what the possible outcomes are.

Suggestion: Publish and keep to a list of transgressions and the related penalties that will be imposed. Use the same list that is provided to the moderators.

External Criteria

How will you know who is underage, who is gay, who is trans, and so on, from the reported chat messages? You have stated that "Hate speech is talk that is intended to offend", but how do you determine that offence was actually intended, given the wide variety of tolerance levels within different communities? If a private friend group has normalized a reclaimed slur, how will moderators take into account how offended the targeted user actually is? Are all parties to the conversation contacted and questioned to determine this? If not, who determines what is offensive, and how? What guidelines do moderators have about this?

Suggestion: Publish the methods used to ensure moderators do not conclude that a player intended to offend another when they did not, along with how other, similar criteria that are not visible in the chat itself are handled.

Moderation Quality Control

Putting aside possible working conditions, rates of pay, training, first-language skills, proficiency in multiple languages, and the other obvious causes for concern about how moderators will operate in a multilingual and multicultural environment, what processes of moderator quality control will you have in place? How do we know they are motivated to do the best job on each case rather than to close the most cases possible? How do we know that no part of this is automated? How do we know whether moderators who produce too many false positives are retained or removed from their duties? This is a genuine cause for concern, and the phrase "trust us" does not work anymore. You have demonstrated very thoroughly that you will break our trust with no qualms.

Context

We have concerns regarding the moderators' ability to properly assess the context of reports when full context is not directly available in chat. For example, a child could be bullied by another Minecrafter at school and stand up to them in Minecraft chat. A player could also express their anger towards a griefer who destroys their work. These scenarios could appear to moderators as unwarranted toxicity or harassment as a direct result of the full context not being available.

Edge Cases

Roleplay Servers

One possible context for malicious reports is roleplay servers. How will moderators handle roleplay servers, especially ones with darker themes?

Usage by Children

If this is intended to "protect children", how do you know that children understand and can use the system correctly? If accounts that file false reports will be banned, how do you prevent a twelve-year-old from falsely reporting their friend for fun, for instance? How do you know that they will understand what is and is not a slur, or what hate speech is, if even we adults aren't sure?

Other Methods of Chat

With most in-game chat being a combination of voice chat, proximity chat, in-game player chat, and Discord chat bridged in-game, in most cases half the context will be missing (the vocal part), and a large amount cannot be reported at all (people chatting via Discord or Dynmap, for instance). What makes you think that the people you are targeting won't simply use these media to chat, safe from reporting and banning? Will this actually solve the problem you think exists?
