Detecting Online Harms

There is widespread concern about harmful content in online imagery, audio, and text. Such content may be deliberately sought out, but users can also be exposed to it accidentally.

The largest challenge for online platforms is the massive volume of content to be moderated. Text and audio are essentially linear and are more amenable to rapid, bulk review. But the message carried by imagery, and particularly by video, is deeply embedded and requires complex interpretation by the viewer or by an automated system. This combination of volume and complexity makes human discovery and moderation of harmful content in online still and video imagery wholly impractical.

Human moderators need help.

Pimloc believes it is possible to develop and deploy systems that automate and augment content moderation processes at scale, enabling all companies, even those that are smaller and resource-constrained, to comply with future regulation of online harms.

To be practical, a system needs to combine: a) automated monitoring and notifications, which highlight known illegal, illicit, or suspect content for review; and b) search and discovery tools, which allow moderators to quickly search platform content for specific actions, objects, and scenes associated with identified online harms.
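The two components above can be sketched in outline. This is an illustrative sketch only, not Pimloc's actual system or API: the label set, data model, and function names are all hypothetical, and real deployments would use trained classifiers rather than a fixed label list.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """A piece of platform content with labels, e.g. from an image classifier."""
    item_id: str
    labels: set = field(default_factory=set)

# Hypothetical set of labels treated as known harms.
KNOWN_HARM_LABELS = {"weapon", "graphic_violence"}

def flag_for_review(items):
    """(a) Automated monitoring: surface items whose labels match known harms."""
    return [item for item in items if item.labels & KNOWN_HARM_LABELS]

def search(items, query_labels):
    """(b) Search and discovery: find items carrying all moderator-specified labels."""
    query = set(query_labels)
    return [item for item in items if query <= item.labels]
```

For example, an item labelled `{"weapon", "outdoor"}` would be flagged automatically by (a), while a moderator could later use (b) to pull up every item labelled `"outdoor"` when reviewing a specific scene.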

Pimloc’s Pholio system can be used to support online harm detection and moderation.


Interested in finding out more?