Hosting CSAM on Social Media: revealing the truth behind the giants

Hannah Mercer
Sep 2, 2021

Problems created by the online world are the reason we are developing our tech, and we believe that education on these matters is the best way for people to understand why we do what we do. Our previous blogs have covered a variety of subjects, including moderation, new technology and even the billionaire space race, but in this piece we want to outline the extent of one of the biggest issues we are working so hard to stop: Child Sexual Abuse Material, or CSAM. Our intention with this blog is to raise awareness of the staggering amount of abuse material being spread across social media platforms by discussing some of the most alarming stories to have emerged about the platforms we use to communicate with one another.

The phenomenon of social media has skyrocketed in the last decade and, more than ever, influences every aspect of our lives. Its impact is most visible on younger generations, with most kids having some exposure to the Internet and social media. For most children, social media is simply a means of communication, but for many it has become a necessity: a way to stay in touch with peers and keep on top of school events and projects. It can also cause mental, physical and psychological suffering, especially when CSAM is involved.

— —

Putting Twitter in hot water recently, Delhi police filed a case against the social media platform through its Cyber Crime Cell for allegedly displaying child sexual abuse material, acting on a complaint from the National Commission for Protection of Child Rights, a federal government watchdog in India. The watchdog claimed that Twitter had failed to report cases of child abuse and related content, which is mandatory under Indian law, after the micro-blogging site had to suspend over 22,500 accounts between May and June of 2021.

Complaints received by Twitter’s Grievance Officer led to a statement that really shouldn’t need to be made, confirming that Twitter does not tolerate this type of content on its platform. However, as the number of suspended accounts shows, its moderation, whether human or machine, is not working effectively or efficiently against this issue, which raises the question of whether user safety is something the company cares about at all.

Unfortunately, the case filed against Twitter in India is not the first time allegations of hosting CSAM have been made against the platform. Back in January 2020, Twitter was accused of aiding child abuse after allegations that the platform was allowing pedophiles to openly organise and form ‘associations’ using the service across many countries. Then, almost exactly a year later, Twitter once again came under fire, accused of hosting inappropriate content in the form of pornographic images and videos that were shared widely across the platform.

These pictures and videos depicted a 13-year-old child.

It later emerged that he was a victim of sex trafficking as a teenager, and that Twitter did not remove the images from its platform because they were not believed to violate the company’s policies. The boy’s mother filed a lawsuit accusing Twitter of monetising the images of her now 17-year-old child. The traffickers, posing as a classmate of the boy, exchanged several nude images with him before the ‘relationship’ took a more sinister turn, using blackmail and the threat of sharing the images with family and friends to extract more images and videos from the child. Nearly two years after contact between the parties had stopped, the CSAM, which also included other children, was uploaded again to two Twitter accounts that were already known for posting this type of content.

This once again highlights not only the dangers involved in sharing nude images and videos online, as we discussed in one of our latest blogs, but also the dangers that arise when platforms with such large user bases, like Twitter, show little real concern for victims and do not appropriately moderate what gets shared. Despite the clear emphasis in their terms of service, which state zero tolerance for illegal content such as child abuse and pornographic material, the stories above make it obvious that this remains a real problem for Twitter. Unfortunately, the money, influence and sheer number of users on large social sites allow them to avoid making the dramatic changes to how they operate that are so badly needed. The reality is that pornographic content involving children is child abuse, and whatever claims these companies make to the world about zero tolerance for CSAM, it is a case of ‘we will believe it when we see it’.

— —

There are numerous cases like the ones discussed here, only a quick Google search away, making it blatantly obvious that more needs to be done to deal with instances of pedophilia and child abuse on social media and related channels. With other companies, like us, battling to prevent these types of images and videos from being uploaded to the Internet, we hope that in the future there will be no more stories like those mentioned in this blog, which are far too abundant across the Internet. But surely there is more that multi-million-pound companies like Twitter can do in the meantime? While policies, terms and conditions, and Grievance Officers all state that they are fighting the ever-growing problem of child exploitation on the Internet, it is apparent that a lot more needs to be done for this claim to become a reality.

In one of our previous blogs we discussed how social media companies responded after we posted nudity on their platforms, and how long it took them to remove the content. Some of the results were shocking, and a clear indication that moderators, like Facebook’s 35,000-strong moderation team, are either being bombarded with more content than they can cope with, or that the moderation technology in place is simply not as effective as it could be. And as frustrating as it is to see large companies unable, or unwilling, to moderate effectively, the ramifications are widespread and lifelong for the many victims of child abuse content being shared online.

Business Insider revealed that, during the global Coronavirus pandemic, the amount of online child sexual abuse imagery posted and shared on social media platforms rose by a shocking 31%. The figure comes from the National Center for Missing and Exploited Children (NCMEC), which reported on the increase in child sexual abuse imagery reported to it, most of it hosted on Facebook and Instagram. The increase amounts to around 5 million reports, from roughly 16 million made to NCMEC in 2019 to around 21 million in 2020. In 2019 Facebook was found to record more CSAM than any other technology platform, responsible for around 94% of reports to NCMEC. With the global pandemic constraining tech companies’ moderation efforts, distributors and creators of child sexual abuse material have been using major platforms to grow their audiences. Some platforms, however, warn users that when they report questionable, inappropriate or illegal content, they may not be able to respond quickly.

Facebook claims that between July and September 2020 it detected 13 million such images on Facebook and Instagram alone, clearly highlighting the severity of this issue and the increasing danger it poses. NCMEC’s figures come from its CyberTipline, which collects reports both from members of the public and from tech companies which, like DragonflAI, are working to remove this type of content from the Internet. It is often claimed that CSAM is only found on ‘the dark web’ and that a regular person would never come across it, but that is simply not the case. In 2019 the UK’s NSPCC reported that Instagram was the most popular social media platform for those seeking to groom children, predominantly children aged between 12 and 15, although some victims targeted by abusers are as young as five.

In 2019, NCMEC’s CyberTipline received 16.9 million reports relating to suspected child sexual exploitation. These reports, mostly flagged by users of social media platforms or search engines, went on to reveal 69.1 million images, videos and files: 15,884,511 were found on Facebook, 449,283 on Google, 82,030 on Snapchat and 45,726 on Twitter. This shows that reporting even a single image or video seen online can be traced back to reveal repeat offenders, uncovering far more abuse than first imagined. These are just some of the statistics for the ‘big name’ companies reported for inappropriate and illegal content involving child sexual abuse and exploitation. The figures are hard to comprehend, and I am sure they will shock many, but it is important that awareness is raised about this issue and the many others like it. Without shock or surprise it is often hard for individuals to grasp the full extent of the problem we are trying to solve, and that understanding is the only way for proper education and change to begin. The other thing to bear in mind is that these figures are only for America. Can you imagine the statistics if the remaining 95% of the world’s population were considered?

— —

But this type of abuse is not limited to popular social media platforms. Nicholas Kristof, a New York Times writer, reported in December 2020 that the pornography website PornHub was monetising content that included minors being sexually abused. After this was revealed, several credit card companies parted ways with the company, and PornHub took action, undertaking a large overhaul of its content to remove the illegal videos and to improve verification for future uploads. Currently, in the United States there is no legal requirement for companies such as PornHub and Instagram to proactively seek out CSAM on their platforms; however, when it is found they are legally obliged to report it to NCMEC and remove the content immediately.

Zoom has also been victimised by hackers who have ‘Zoom-bombed’ work video calls with images and videos of CSAM, traumatising the innocent individuals on them. Michael Oghia, an Advocacy and Engagement Manager at the Global Forum for Media Development, was on a business Zoom call when the presenter’s screen was hacked to display an explicit pornographic video involving an infant. After the experience, Oghia was left unable to sleep and fully understood why content moderators are left with psychological trauma such as PTSD. The incident occurred during the height of the pandemic; Zoom stated that the act was “devastating and appalling”, and it pushed the company to make significant changes to its policies and practices in order to safeguard users from this type of unwanted and illegal content. Passwords to access meetings became mandatory to prevent this type of incident from happening again; at the time, Michael Oghia’s meeting was not password protected.

But if businesses are not legally obliged to seek out CSAM, what incentive do they have to highlight that they may have a serious issue on their platform? Other than the obvious moral duty to protect children, of course.

We have only touched on a couple of companies that play a part in allowing CSAM to be readily available, through neglect or ineffective moderation measures, but there are many more. It is vital that social media platforms and others like them are held accountable for their lack of response to illegal or harmful material, especially those like Facebook, who claim that their 35,000-strong safety and security team is working well to stop CSAM, despite more than 15 million images being reported in 2019. Yes, reporting may lead to removal, but at that point it is already too late. If you have seen it, so have others, and with a simple click they may have shared it, contributing to the spread of CSAM and to the ruin of a child’s life. Worldwide legislation and effective, preemptive moderation integrated into social media sites are the only way to keep this type of abuse off the Internet, and the way to a healthy and safe online world.

--

Hannah Mercer

Founder of DragonflAI — On-Device Nudity Moderation. My mission is to protect children by reducing the volume of child abuse online. www.dragonflai.co