A report by a reputable source suggests that parent company Facebook is somehow able to view the content of WhatsApp messages. This should not be possible with end-to-end encryption, where only the participants in a conversation can decrypt the content, so it would be an explosive revelation if it turned out to be true.
The report references metadata analysis – a method the company is known to use to try to detect problematic messages without reading their content – but it also directly claims that moderators are able to “examine users’ messages, images and videos,” citing both moderators and engineers within the company …
WhatsApp has been a source of significant and dangerous disinformation, from false claims of child abuse in India to hoax coronavirus messages worldwide. The company is known to try to address these by limiting how many times a message can be blindly forwarded, and by using other forms of metadata to identify messages likely to be spam.
However, the company has been adamant that WhatsApp uses end-to-end encryption, meaning that Facebook has no ability to see the private content of messages. A lengthy ProPublica piece claims that this isn’t true:
[An] assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”
Those assurances are not true. WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore, where they examine millions of pieces of users’ content. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute […]
Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.
The report says that WhatsApp moderators work under strict secrecy conditions.
The job listings advertise “Content Review” positions and make no mention of Facebook or WhatsApp. Employment documents list the workers’ initial title as “content moderation associate.” Pay starts around $16.50 an hour. Moderators are instructed to tell anyone who asks that they work for Accenture, and are required to sign sweeping non-disclosure agreements.
However, the report’s description of what actually happens is confusing.
Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.
Again, E2E encryption should mean that WhatsApp does not have the ability to decrypt messages in this way.
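One way the described reporting flow could coexist with end-to-end encryption is client-side reporting: the recipient’s own device already holds the decrypted messages, so it can upload them without WhatsApp ever holding the keys. The sketch below illustrates that idea under purely hypothetical assumptions – the `Client` class and `report` method are illustrative, not WhatsApp’s documented implementation:

```python
# Hypothetical sketch: how user-initiated reporting can coexist with
# end-to-end encryption. The server only ever relays ciphertext it
# cannot read; plaintext exists only on participants' devices. When a
# user reports a message, their *own device* uploads the plaintext it
# already holds. Names here (Client, report) are illustrative only.

class Client:
    def __init__(self):
        self.history = []  # decrypted messages held locally on-device

    def receive(self, plaintext):
        # In a real E2E app, ciphertext arriving from the server is
        # decrypted here with keys only this device possesses.
        self.history.append(plaintext)

    def report(self, index):
        # Per ProPublica's description: the flagged message plus the
        # four preceding ones are sent by the reporting device, in
        # unscrambled form, to the moderation pipeline.
        start = max(0, index - 4)
        return self.history[start:index + 1]

client = Client()
for i in range(6):
    client.receive(f"msg {i}")

bundle = client.report(5)  # report the most recent message
print(bundle)              # the flagged message plus the four before it
```

If this is roughly what happens, the encryption itself is never broken – the report is simply a deliberate disclosure by one participant, which any E2E messaging design permits.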
If the report were to turn out to be accurate, it would mean that one of the largest companies in the world has been lying about using end-to-end encryption. While Facebook doesn’t have a stellar reputation, it still seems hard to imagine that this could be true.
Strangely, a statement by Facebook didn’t directly address the issue of end-to-end encryption.
In written responses for this article, the company spokesperson said: “We build WhatsApp in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive. This work takes extraordinary effort from security experts and a valued trust and safety team that works tirelessly to help provide the world with private communication.” The spokesperson noted that WhatsApp has released new privacy features, including “more controls about how people’s messages can disappear” or be viewed only once. He added, “Based on the feedback we’ve received from users, we’re confident people understand when they make reports to WhatsApp we receive the content they send us.”
ProPublica is a nonprofit investigative journalism organization with a solid reputation. All the same, the most likely explanation here is that there is a misunderstanding somewhere along the way, with moderators actually reviewing Facebook Messenger messages (which are not end-to-end encrypted by default) rather than WhatsApp ones. If so, the company ought to be able to quickly correct the record; we have reached out for comment.