Apple automatically searches iPhones for child pornography

To help authorities fight child pornography, iPhones in the US will soon scan the photos stored on them for known abuse images. The company emphasizes that users’ privacy remains protected. Critics fear that could change quickly.

For years, Apple has advertised the special privacy protections of its devices, a stance that has not won it friends everywhere. Critics now fear that a new measure could mark a paradigm shift. At first glance, it sounds like something the vast majority of people would welcome: Apple wants to automatically find child abuse images on its devices.

The company announced the move on Thursday (local time). The functions, presented as “extended child protection measures,” will arrive as part of iOS 15 in the fall. Parents will be warned when their children send potentially problematic photos via iMessage, abuse images stored on the iPhone will be recognized, and searches for problematic terms will be blocked, the announcement explains, accompanied by detailed technical documentation.

Matching without image evaluation

It is the second measure in particular that makes people sit up and take notice: To detect possible depictions of abuse, the iPhone automatically compares the images on the device against the corresponding databases as soon as photo backup via iCloud is switched on, Apple explains in a document. The iPhone then checks all pictures and videos stored on the device.

If you look at the measure in detail, however, it becomes clear that Apple does take its users’ privacy into account. Instead of evaluating the images themselves, a so-called hash value is compared against a database of known abuse images. Such a hash is a kind of digital fingerprint calculated individually for each image.
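How such a comparison works can be pictured with a small sketch. It is deliberately simplified: Apple’s system uses a perceptual hash it calls NeuralHash, which also recognizes resized or re-encoded copies, and wraps the comparison in additional cryptography, while the sketch below assumes a plain exact-match hash and a made-up database entry.

```swift
import CryptoKit
import Foundation

// Database of hashes of known abuse images; in Apple's system this list
// comes from NCMEC. The entry below is a made-up placeholder.
let knownAbuseHashes: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
]

// Fingerprint of the image bytes. Apple's NeuralHash is perceptual and
// survives re-encoding; SHA-256 here is only an exact-match stand-in.
func hashValue(of imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

// The comparison step: only hash values are checked, never image content.
func matchesKnownMaterial(_ imageData: Data) -> Bool {
    knownAbuseHashes.contains(hashValue(of: imageData))
}
```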

Protection against false suspicion

The procedure has two advantages for Apple: On the one hand, the company does not have to evaluate its users’ images itself, only compare values; the content of the images remains secret and privacy is preserved. On the other hand, Apple does not have to store a collection of child pornographic material on its own servers for comparison. Instead, the database is supplied by the American child protection organization National Center for Missing and Exploited Children (NCMEC).

To avoid false reports against innocent users due to coincidentally identical hash values (Apple puts the probability at one in a trillion), the company has built in a further safeguard. The system does not sound the alarm after a single hit, but only once an unspecified minimum number of matches is reached. Only then is the account flagged for manual review by employees. If the suspicion of abuse is confirmed, the account is blocked and a report is made to law enforcement authorities.
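The threshold logic can be illustrated with a short sketch. It is only an approximation of the idea described above: the real minimum number of matches is not public, so the value used here is an arbitrary placeholder, and Apple’s actual design additionally relies on cryptographic safety vouchers so that matches below the threshold cannot even be counted on the server, a detail the sketch omits.

```swift
// Minimal sketch of the threshold idea described above.
struct MatchTracker {
    var matchCount = 0
    let threshold: Int  // placeholder; Apple has not published the real value

    // Record one scan result and report whether the account should be
    // flagged for manual review.
    mutating func record(match: Bool) -> Bool {
        if match { matchCount += 1 }
        return matchCount >= threshold
    }
}

var tracker = MatchTracker(matchCount: 0, threshold: 2)  // arbitrary threshold
for isMatch in [false, true, false, true] {
    if tracker.record(match: isMatch) {
        print("Account flagged for manual review")
    }
}
```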

Fear of the dam breaking

Nevertheless, privacy advocates are sharply critical of the measure. “You are sending a (very powerful) signal that it is okay to search users’ phones for prohibited content,” wrote computer science professor Matthew Green of Johns Hopkins University on Twitter. “This is the message they are sending to the competition, to China and to you.”

The most common concern revolves around the hash database. At present it is provided by a non-governmental organization for a clearly defined purpose. From a technical point of view, however, it would take little effort to feed the infrastructure with other data, such as pictures of political protests or depictions of homosexuality, which is illegal in many countries. Once the technical prerequisites are in place, the common argument goes, Apple will find it harder to fend off corresponding demands from individual countries. “That will break the dam; governments will ask everyone to do this,” Green told the Financial Times.

“This is a back door”

There are also critical questions about the protective function for nude pictures sent by minors. According to Apple, it is activated automatically when an iPhone has been set up with parental controls as part of the family feature. If a child then sends a picture, it is scanned for possible nudity and the child is warned that the parents will be informed if it is sent anyway. The idea behind it is understandable: Sending nude pictures of minors is illegal in the USA even if both sender and recipient are underage, which has already landed numerous young people in serious legal trouble.
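Roughly sketched, the flow described above could look like this. All names are assumptions: Apple has not published an API for the on-device classifier, so looksLikeNudity and the exact notification rules are hypothetical simplifications for illustration only.

```swift
import Foundation

// Hypothetical sketch of the iMessage flow described above.
struct OutgoingMessage {
    let imageData: Data
    let senderIsChildAccount: Bool  // device set up with parental controls
}

func looksLikeNudity(_ imageData: Data) -> Bool {
    // Placeholder for an on-device image classifier.
    return false
}

enum SendDecision {
    case sendNormally
    case warnChild(notifyParentsIfSentAnyway: Bool)
}

func evaluate(_ message: OutgoingMessage) -> SendDecision {
    // Scanning only applies to accounts configured as child accounts.
    guard message.senderIsChildAccount, looksLikeNudity(message.imageData) else {
        return .sendNormally
    }
    // The child is warned; if the picture is sent anyway, the parents are informed.
    return .warnChild(notifyParentsIfSentAnyway: true)
}

print(evaluate(OutgoingMessage(imageData: Data(), senderIsChildAccount: true)))
```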

But here, too, critics see potential for abuse. “It is impossible to build a scanning system that can only be used for sexually explicit material involving children,” the well-known Electronic Frontier Foundation (EFF) criticized on Twitter. “Even a well-intentioned implementation undermines key features of the encryption and opens a back door to other misuse of the system.”

Not everyone is as hard on Apple. Because the scanning happens directly on the device, the system is less invasive than if the images were first scanned on servers, computer security expert Alan Woodward of the University of Surrey explains to the Financial Times. Other services such as Facebook only look for problematic content once it has been uploaded. Woodward therefore considers Apple’s approach preferable: “This decentralized approach is the best imaginable if you want to go in that direction.”
