Privacy and Protection are not mutually exclusive

Hannah Mercer
4 min read · May 26, 2020


With debate raging over the benefits of end-to-end encryption (E2EE) versus the need for law enforcement to access data, finding a way to satisfy privacy advocates while keeping technology users safe from heinous content is one of the biggest questions the industry needs to answer.

At the end of last year, Matthew Green, a cryptographer and professor at Johns Hopkins University, wrote an article discussing this debate as it was primed to move into Congress. This blog focuses on the ideas that article raises about how content detection might continue even under E2EE, sets out DragonflAI’s thoughts on the importance of end-to-end encryption, and argues that preserving users’ privacy does not mean tech companies can’t protect them.

An open letter to Facebook, co-signed by US Attorney General William Barr, makes clear that law enforcement officials are demanding to retain the power to detect and remove illegal content, including child sexual abuse imagery (CSAI). A key phrase noted by both Matthew Green and DragonflAI is the request for Facebook to:

“embed the safety of the public in system designs, thereby enabling you to continue to act against illegal content effectively with no reduction to safety, and facilitating the prosecution of offenders and safeguarding of victims.”[1]

This highlights the central idea that embedding safety policies and software into communication and social media platforms could allow moderation to continue without reducing users’ privacy or requiring loopholes in E2EE for content to be found and removed. However, Matthew Green pushes back somewhat on this idea, arguing that current technology might not manage it under E2EE: programs like PhotoDNA require the image being detected to already be in a database of known abuse images, and even Google’s newer machine-learning process will still “only work if providers have access to the plaintext of the images for scanning,”[2] which E2EE prevents.
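To make that limitation concrete, here is a minimal sketch of how database-driven detection works in principle. PhotoDNA’s algorithm is proprietary, so the open `imagehash` library stands in for it here, and the hash value and distance threshold below are purely illustrative:

```python
# Illustrative sketch only: PhotoDNA itself is proprietary, so this uses
# an open perceptual hash (pHash, via the "imagehash" library) as a
# stand-in to show the general shape of database-driven detection.
from PIL import Image
import imagehash

# A provider-side database of hashes of already-known abuse images.
# Brand-new content has no hash here, so it passes undetected.
KNOWN_HASHES = {imagehash.hex_to_hash("c3969b9b1b1b9b93")}  # placeholder entry

MAX_DISTANCE = 8  # Hamming-distance threshold (illustrative value)

def matches_known_content(path: str) -> bool:
    # Crucially, this needs the *plaintext* image. Under E2EE the
    # provider only ever holds ciphertext, so a server-side check
    # like this one simply cannot run.
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

Both weaknesses Green points to are visible in the sketch: the database only covers content that is already known, and the comparison itself requires access to the unencrypted image.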

So it might seem this debate ends in deadlock, with either user privacy or safety possible at any one time, but not both. Fortunately it doesn’t, which brings us to the crux of this blog: on-device moderation.

Following up on the limitations of E2EE, Green introduces the idea that scanning an image on the user’s device, before it is sent and encrypted, could maintain content detection without needing external servers to scan and check content. However, he raises concerns about the nature of the algorithms involved and the secrecy needed to stop users circumventing this moderation model.
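In sketch form, the ordering Green describes is straightforward: run moderation against the plaintext on the sender’s device, and only encrypt and send what passes. Everything below (`scan_on_device`, `e2e_encrypt`) is a hypothetical stand-in, not any real vendor or library API:

```python
# Hedged sketch of "scan first, encrypt second" on the client device.
# scan_on_device() and e2e_encrypt() are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Verdict:
    is_flagged: bool

def scan_on_device(image_bytes: bytes) -> Verdict:
    # Stand-in for an embedded classifier; a real app would run a
    # small model shipped inside the app binary, entirely offline.
    return Verdict(is_flagged=False)

def e2e_encrypt(plaintext: bytes, recipient_key: bytes) -> bytes:
    # Stand-in for a vetted E2EE scheme (e.g. the Signal protocol);
    # never roll your own encryption in production.
    raise NotImplementedError

def prepare_to_send(image_bytes: bytes, recipient_key: bytes) -> Optional[bytes]:
    verdict = scan_on_device(image_bytes)  # 1. moderate the plaintext locally
    if verdict.is_flagged:
        return None                        # 2. flagged content never leaves the device
    return e2e_encrypt(image_bytes, recipient_key)  # 3. the server sees only ciphertext
```

The point of the ordering is that the moderation step sees exactly what the user sees, while the network and the provider still see only ciphertext.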

This is where DragonflAI has pushed forward technologically, as we don’t believe there is ever a need to compromise either user privacy or safety online. With a number of tech companies and groups like OTSIA working towards a safer, more secure online environment, creative solutions are available to help companies keep their users safe and stop CSAI from being created and spread.

DragonflAI has been working to bring private on-device moderation to companies, similar to what Green touches on in his article, and we firmly believe that embedding AI content-detection software into app platforms will allow quicker and more targeted moderation without affecting users’ privacy. Placing this on the user’s device eliminates the need for any third party to see the content, and can even help alleviate some of the psychological harm suffered by human moderators who deal with horrific content daily.[3]

The overview of how DragonflAI integrates with apps is outlined below:
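As a rough stand-in for that overview, the flow can be sketched as a moderation check embedded in an app’s upload path. The class and function names here are hypothetical, illustrating the pattern rather than DragonflAI’s actual SDK:

```python
# Hypothetical sketch of wiring an on-device moderation SDK into an
# app's upload path; names are illustrative, not a real API.
class OnDeviceModerator:
    """Runs entirely on the handset: the image never leaves the device
    for moderation, so no third party has to view the content."""

    def __init__(self, model_path: str):
        self.model_path = model_path  # model file bundled with the app

    def is_safe(self, image_bytes: bytes) -> bool:
        # Local inference against the bundled model would go here.
        return True

def handle_upload(image_bytes: bytes, moderator: OnDeviceModerator) -> bool:
    if not moderator.is_safe(image_bytes):
        # Flagged content is stopped at the source, before upload,
        # instead of being reviewed by human moderators afterwards.
        return False
    # ...hand off to the app's normal upload/publish pipeline...
    return True
```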

As a final thought, and a teaser for our follow-up blog: there is a clear belief (especially in the United States) that encryption is incompatible with effective content moderation and the prosecution of offenders. This is abundantly clear from the efforts of several members of Congress to quietly pass a bill that would massively expand the government’s communication-surveillance powers at the expense of user privacy. Hidden within legislation aimed at reducing the spread of child abuse imagery, the EARN IT Act provides a ‘backdoor’ to E2EE, and is a step in the wrong direction for privacy even though it could help tackle illegal content on the web.

As you can see, the privacy and protection debate is a difficult one, but hopefully our advances, and those of many other tech companies, can prove that privacy and protection are compatible, and that there is no need to sneak encryption-limiting bills through Congress during a global pandemic when attention is certainly focused elsewhere.


Written by Hannah Mercer

Founder of DragonflAI — On-Device Nudity Moderation. My mission is to protect children by reducing the volume of child abuse online. www.dragonflai.co
