Last updated on March 9, 2021
The Berlin startup Brighter AI has developed software designed to thwart facial recognition in images. Part of it is now freely available.
Biometric facial recognition software is increasingly being used to erode the privacy of internet users. That became evident not only when the technology was deployed against demonstrators in Hamburg, Hong Kong or the USA.
The business models of companies such as Clearview and PimEyes also made headlines this year. They collect digital portrait photos in databases without the consent of those concerned, measure them biometrically and can thus build profiles of the people depicted.
Algorithms can be used not only to undermine privacy on the internet, but also to do the opposite. The Berlin startup Brighter AI, for example, has specialized in using the technology to anonymize faces and license plates in image data. The company has now made its solution, previously aimed only at business customers, available to the general public in a limited version under the name ProtectPhoto.
It's easy to use: via the website www.protect.photo, anyone can upload and anonymize a group photo they would like to post on social networks. An app for iOS and Android will soon be available. Within a few seconds, the software behind it alters the unique attributes used for facial recognition while retaining skin and eye color, gender and age. Ideally, humans will notice only slight differences from the original in the adjusted photos, while machines should barely be able to match the faces to a person.
"With our Deep Natural Anonymization, artificial faces are generated and irreversibly superimposed on the original", explains Marian Gläser, founder of Brighter AI, the principle of Golem.de. The identity of people on recordings is protected by an artificial mask. The process works independently of camera settings, resolution and lighting conditions, and data protection is paramount.
The entrepreneur assures that the images reside in a protected Docker container on a server of Microsoft's Azure cloud service in the EU. The storage is rented exclusively for ProtectPhoto and is not shared. According to Gläser, logging into the server is only possible with an SSH key, and the hard disk is encrypted with 256-bit AES (Advanced Encryption Standard). There is no way to simply access the image data. No user data is collected, and the application contains no tracking services.
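The article gives no implementation details. As a rough illustration of what 256-bit AES protection of image data involves, here is a minimal Python sketch using the `cryptography` library's AES-GCM mode; file names and key handling are assumptions, and real disk encryption works at the block-device level rather than per file:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical illustration: encrypt an uploaded image with AES-256.
key = AESGCM.generate_key(bit_length=256)  # 32-byte key
aesgcm = AESGCM(key)

plaintext = open("upload.jpg", "rb").read()
nonce = os.urandom(12)  # 96-bit nonce, must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption requires the key and nonce; without the key the image data
# is computationally inaccessible.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```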
"The original images are deleted as soon as the faces have been replaced and only the protected image remains," says glasses. This will also be deleted after the download. If a user does not download the converted photo, it will be automatically removed after a maximum of 24 hours.
According to Gläser, the company's fee-based offering for corporate customers in principle also works when only a single person or a license plate is visible. The ProtectPhoto showcase was deliberately restricted to group photos: the service is meant to protect protesters, for example, not to make specific identities interchangeable.
deepfakes ", the company boss justifies the limited functionality in the public free version. Similar basic technologies would be misused to deliberately manipulate political content, for example. At the same time, Brighter AI apparently also wants to prevent passport photos from being produced in which the artificially generated image looks similar to that of the owner of the ID document, but automatic face recognition dupes the light.
The federal government is already worried that more and more such "morphed" images will find their way into official documents. At the beginning of June it therefore presented a draft law under which photos for identity cards and passports must in future be taken in a tamper-proof manner.
In tests with photos of protest rallies carried out by Golem.de, the people in the foreground in particular no longer looked entirely natural to the human eye: eyes are noticeably smaller, facial features somewhat distorted, and noses are sometimes flattened. In any case, nobody will win a beauty contest with these pictures.
Solutions for " sprucing up " or manipulating how many users know them from Snapchat or Instagram, for example, "are designed to only work on individuals and high-resolution images," explains Glasses. "We have a completely different level of complexity due to the requirements. Nevertheless, the protected faces still look so good that emotions can be perceived in photos."
In industrial applications based on artificial intelligence (AI), such as autonomous driving or data analysis in smart cities, "thanks to its high quality, the naturally anonymized image data has no negative impact on the development or operation of intelligent systems," explains Gläser. It is important, for example, to determine whether a person is near a car and looking at the road. Such smooth integration of anonymization technology was previously not possible with "black bars" or other forms of pixelation: an AI trained with such data would no longer be able to recognize a real person in the wild. Many providers have therefore simply ignored data protection.
On request via the company's website, Brighter AI also wants to give interested users access to the ProtectPhoto solution through a programming interface (API). The underlying software is already used in a similar way in the automotive and rail industries, where image data of people in public spaces is also generated and has to be anonymized.
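The article gives no details of this interface. Purely as an illustration, a call to such an anonymization API could look roughly like the following sketch; the endpoint, parameters and authentication scheme are assumptions, not documented behavior:

```python
import requests

# Hypothetical endpoint and key; real API access is granted on request
# via the company's website.
API_URL = "https://api.example.com/v1/anonymize"
API_KEY = "..."  # issued on request

with open("group_photo.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
    )
response.raise_for_status()

# The service would return the photo with artificially generated faces
# superimposed on the originals.
with open("group_photo_protected.jpg", "wb") as out:
    out.write(response.content)
```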
"Completely new, artificially generated objects on the faces"
The algorithms of Fawkes from the USA and D-ID from Israel create a kind of second face with similar obfuscation techniques. The technical foundation is provided by Generative Adversarial Networks (GANs): two neural networks that play a zero-sum game and try to outsmart each other, with one striving to produce results that the other can no longer distinguish from real data. GANs are often used to render objects as photorealistic images, to generate movement patterns in videos or to create 3D models of objects.
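As a minimal sketch of this adversarial principle (not Brighter AI's actual model), here is a toy GAN in PyTorch that learns a one-dimensional data distribution; all architecture and training choices are illustrative:

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to imitate samples from N(4, 1.25),
# the discriminator learns to tell real samples from generated ones.
real_data = lambda n: torch.randn(n, 1) * 1.25 + 4.0
noise = lambda n: torch.randn(n, 8)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: produce samples the discriminator labels as real.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(1000)).mean().item())  # should approach 4.0 as training converges
```

The zero-sum dynamic is visible in the two loss functions: the discriminator is rewarded for exactly the classifications the generator is penalized for.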
In contrast to competing systems, "we don't make any micro-changes to the faces, but rather place completely new, artificially generated objects on the faces," says Gläser, pointing to an important difference. As a result, there is no systematic pattern that could be "cracked" by improving facial recognition.
Evidence through audits by data protection officers
" It's even the other way around for us, " explains the founder. " The better the facial recognition, the less there are unsafe 'matches' of people ." Algorithms can also identify people in some cases using other properties such as gait. However, the state of the art is still far behind that of face recognition. Nevertheless, Brighter AI has such developments on its radar and is already preparing " further research ".
According to Gläser, evidence that the procedure can actually outsmart common facial recognition systems has been provided in particular through audits by the data protection officers of large customers, authorities and investors. In addition, the technology is continuously validated against solutions such as Microsoft Azure's facial recognition and relevant open source systems.
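The article does not describe these validation runs in detail. Conceptually, a check against a service like Azure's Face API could look like the following sketch; the endpoint, key and file names are assumptions:

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

# Hypothetical credentials; an Azure Cognitive Services account is required.
client = FaceClient("https://<region>.api.cognitive.microsoft.com",
                    CognitiveServicesCredentials("<api-key>"))

def first_face_id(path):
    """Detect the first face in an image and return its transient face ID."""
    with open(path, "rb") as f:
        faces = client.face.detect_with_stream(f, return_face_id=True)
    return faces[0].face_id

# Compare the original face with its anonymized counterpart: if the
# anonymization works, the service should report them as different people.
result = client.face.verify_face_to_face(
    first_face_id("original.jpg"), first_face_id("protected.jpg"))
print(result.is_identical, result.confidence)  # expected: False, low confidence
```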