When our chats are sent via WhatsApp, nobody can read them. Facebook places great importance on this fact. A recent report now shows: there are exceptions.
When it comes to privacy, Facebook does not have the best reputation, as even founder and CEO Mark Zuckerberg has had to admit. Because users need to feel comfortable with a messenger, the company placed great importance on the fact that messages on WhatsApp cannot be read, thanks to end-to-end encryption. “Not even by Facebook,” the chats themselves declare. And yet the company employs over 1,000 people who do just that.
This is reported by the US news site ProPublica. According to the report, these people, commissioned by Facebook, sit around the world checking millions of messages, photos and videos sent via WhatsApp. The goal: to find “inappropriate content”.
Employed readers
Content can be reported on WhatsApp just as on Facebook or Instagram. If a user flags a message, photo or video as “inappropriate”, the content and the surrounding chat are forwarded to Facebook. External employees then assess whether it constitutes attempted fraud, child pornography or possible terrorist plans, according to ProPublica.
According to the site’s research, the job closely resembles moderation at other online services. For a wage of $16.50 or more, the 29 moderators surveyed process complaints in offices and, during the pandemic, from home as well. They handle up to 600 such tickets a day, which corresponds to an average processing time of less than a minute. Their working speed is monitored.
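The throughput figure can be checked with simple arithmetic. Assuming a standard eight-hour shift (the report does not state shift lengths), 600 tickets work out to well under a minute each:

```python
# Assumption: an eight-hour shift; ProPublica's report does not state shift lengths.
shift_minutes = 8 * 60              # 480 minutes per shift
tickets_per_day = 600               # figure from the report
seconds_per_ticket = shift_minutes * 60 / tickets_per_day
print(seconds_per_ticket)  # 48.0 seconds per ticket on average
```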
This is how WhatsApp checks messages
The process is always the same. If a user flags a message as inappropriate, that message and the four preceding ones are sent to Facebook in unencrypted form, including photos and videos. They are then assigned to individual moderators.
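As an illustration only (this is not WhatsApp’s actual code), the reporting step described above can be sketched in a few lines of Python. The key point is that the chat history is already in plaintext on the reporting device, so the flagged message and its four predecessors can be forwarded without touching the encryption of the transport:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    plaintext: str  # already decrypted on the device for display

def build_report(chat: list[Message], flagged_index: int, context: int = 4) -> list[Message]:
    """Collect the flagged message plus up to `context` preceding messages.

    Hypothetical sketch of client-side reporting: the report is assembled
    from plaintext the device already holds, then sent to the moderation
    backend outside the end-to-end encrypted channel.
    """
    start = max(0, flagged_index - context)
    return chat[start:flagged_index + 1]

chat = [Message("alice", f"message {i}") for i in range(10)]
report = build_report(chat, flagged_index=7)
print(len(report))  # 5: the flagged message plus the four before it
```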
“Proactive” flags appear somewhat more problematic. Here, the message is not reported by a user but by artificial intelligence, which automatically evaluates chats and content and compares them with known problematic material. The AI evaluates not only user reports but also the frequency of messages sent, the terms used and media already known to be problematic. Fraud attempts or the sharing of illegal recordings are conceivable triggers. Apple faced massive PR problems over precisely such an automatic scan on the iPhone, even though its scope was considerably smaller than what is now described at Facebook.
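One plausible way to compare media against material “already known to be problematic” is fingerprint matching. The sketch below is a deliberate simplification: production systems use perceptual hashes (such as PhotoDNA) that survive re-encoding and cropping, whereas this hypothetical example uses an exact SHA-256 match just to show the lookup structure:

```python
import hashlib

# Hypothetical database of fingerprints of media already known to be
# problematic. Real systems use perceptual hashes; exact SHA-256 is
# used here only to keep the sketch simple.
known_bad = {hashlib.sha256(b"known illegal image bytes").hexdigest()}

def matches_known_media(media_bytes: bytes) -> bool:
    """Return True if the media's fingerprint is in the known-bad set."""
    return hashlib.sha256(media_bytes).hexdigest() in known_bad

print(matches_known_media(b"known illegal image bytes"))   # True
print(matches_known_media(b"a harmless vacation photo"))   # False
```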
Once a message is in the hands of the moderators, they have to evaluate it and decide whether the user must be observed or even banned. This is not easy work. ProPublica reports that when child pornography is suspected, the age of the person shown has to be estimated; in the case of a possible decapitation video, the authenticity of the corpse has to be assessed. On top of that come smaller hurdles such as language barriers, which the tools provided do not always overcome to the necessary extent. One moderator reported, for example, that the program tried to translate Arabic text as if it were Spanish.
Confirmed by WhatsApp
Faced with the allegations, Facebook’s PR chief Carl Woog confirmed the existence of the teams that examine the content. The aim is to catch and lock out “the worst” perpetrators, he explained. However, Woog emphasized that the company does not see this as content moderation. “We don’t normally use that term on WhatsApp,” he said. According to the company, the point is to operate the service reliably and prevent abuse while protecting privacy.
In fact, this moderation does not necessarily contradict the statements about end-to-end encryption. The encryption only promises that messages cannot be read during transmission. They must be decrypted on the device at the latest; otherwise they could not be displayed to users at all. If only the content reported as inappropriate is passed on, the rest of the communication remains protected. But that cannot be verified.
Data collector
In any case, the company seems well aware that users could question this account. While the transparency report provides clear figures on content moderation for Instagram, no such report exists for WhatsApp. The signal that the company can read along, even in very limited cases, will hardly help the messenger’s fragile reputation.
Compared to other messengers, WhatsApp is already considered a data octopus. By evaluating the so-called metadata that accumulates around the chats, Facebook can infer far-reaching relationships between chat partners. Someone who only occasionally exchanges a message during working hours ultimately means something completely different to a person than someone who is regularly in the same apartment in the evening and occasionally receives a video call at night.
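The kind of inference described here can be illustrated with a small, entirely hypothetical example: using nothing but sender, recipient and timestamp, one can already distinguish a daytime contact from a likely close relationship, without knowing a single word of the messages:

```python
from datetime import datetime

# Hypothetical metadata records: who messaged whom and when.
# No message content is needed for this kind of inference.
events = [
    ("alice", "bob", datetime(2021, 9, 1, 23, 40)),
    ("alice", "bob", datetime(2021, 9, 2, 22, 15)),
    ("alice", "bob", datetime(2021, 9, 3, 0, 5)),
    ("alice", "carol", datetime(2021, 9, 2, 10, 30)),
]

def late_night_share(a: str, b: str) -> float:
    """Fraction of messages between a and b sent late at night (22:00-06:00)."""
    between = [t for s, r, t in events if {s, r} == {a, b}]
    if not between:
        return 0.0
    late = [t for t in between if t.hour >= 22 or t.hour < 6]
    return len(late) / len(between)

print(late_night_share("alice", "bob"))    # 1.0: plausibly a close relationship
print(late_night_share("alice", "carol"))  # 0.0: looks like a daytime contact
```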
In addition, there is data such as the profile picture, group names and access to the entire address book. “Another aspect is forwarded messages and images. Here, too, Facebook can track who shared them and when. Even if the content of the messages themselves is not known, that reveals a lot about the users,” explains security researcher Paul Rösler. “If you have enough metadata, you don’t need the content,” a former NSA chief adviser once said. Former CIA and NSA director General Michael Hayden put it even more drastically: “We kill on the basis of metadata.”
Skeptical users
It became clear only this spring that users do not appreciate even the current level of data collection. As part of a change to its terms and conditions, Facebook wanted to regulate communication with companies. Word quickly spread that it was actually about merging customer data across WhatsApp, Facebook and Instagram. Users ran away in such droves that Facebook postponed the consent deadline twice and ultimately made agreeing voluntary. No wonder the company wants to avoid the impression that it can read the chats at any time.
However, there are two things Facebook could do to easily increase confidence in WhatsApp. Competitors such as Signal, which take data protection seriously, collect significantly less data and have also disclosed their source code. That way, they can credibly demonstrate that they cannot see users’ chats.

David William is a talented author who has made a name for himself in the world of writing. He is a professional author who writes on a wide range of topics, from general interest to opinion news. David is currently working as a writer at 24 hours worlds where he brings his unique perspective and in-depth research to his articles, making them both informative and engaging.