Twitter has slowly adapted to a world in which hoax posts and hate speech can have real-world consequences – ranging from foreign interference in US elections to parents of murdered children being threatened and harassed.
But in evolving its rules, it’s doing one thing it has never done before …
The company is for the first time asking for user feedback on proposed new rules.
The Twitter Rules apply to everyone who uses Twitter. In the past, we’ve created our rules with a rigorous policy development process; it involves in-depth research and partnership with the members of our Trust and Safety Council and other experts to ensure these policies best serve every person on the service. Now, we’re trying something new by asking everyone for feedback on a policy before it’s part of the Twitter Rules.
Twitter is beginning the process with a draft version of a new policy on what it calls ‘dehumanizing language.’
For the last three months, we have been developing a new policy to address dehumanizing language on Twitter. Language that makes someone less than human can have repercussions off the service, including normalizing serious violence […]
With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target. Many scholars have examined the relationship between dehumanization and violence. For example, Susan Benesch has described dehumanizing language as a hallmark of dangerous speech, because it can make violence seem acceptable, and Herbert Kelman has posited that dehumanization can reduce the strength of restraining forces against violence.
You can read the full wording of the proposed policy, and submit your feedback, on the Twitter blog. The comment fields are limited to tweet-length responses of 280 characters each.