Filtering Techniques for Twitter and Other User-Generated Content

As more social media content is shown in public media such as digital billboards, TV, and corporate websites, brands and enterprises face increasing pressure to display only appropriate User-Generated Content (UGC).

UGC Is Sticky, but Get Ready for Moderation
While UGC can be very sticky and engaging, brands and enterprises certainly don't want to be associated with inappropriate content. So the natural question we get is, "How is the content filtered?" Filtering, or moderation, of any UGC, including social media submissions, is an important aspect that must be taken seriously. Some systems automate moderation of text and tweets, but even those need careful human review as a second stage. Images and videos should definitely be human-moderated to prevent mishaps. Because this area is quite new, many companies have no moderation interface at all, while others offer sophisticated moderation of text, images, and videos that is web-based, authenticated, and logged for audit purposes. All serious brands, media buyers, networks, and enterprises, and perhaps even individuals, will require moderation features.

Systems can Assist Where Practical
Computer systems are good at processing large volumes of text-based UGC and can apply many rules, such as excluding submissions that match an "expletive dictionary" or highlighting questionable words or phrases. A minimal sketch of this kind of rule is shown below.
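
The following is a minimal sketch of rule-based text screening, assuming a hypothetical campaign with its own word lists; the word lists, function name, and verdict labels here are purely illustrative, and real dictionaries are far larger.

```python
import re

BLOCKED_WORDS = {"badword1", "badword2"}         # hard rejects (placeholder entries)
QUESTIONABLE_WORDS = {"beer", "fight", "hate"}   # flagged for human review

def screen_text(text):
    """Return 'reject', 'flag', or 'pass' for a piece of text-based UGC."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & BLOCKED_WORDS:
        return "reject"
    if words & QUESTIONABLE_WORDS:
        return "flag"     # goes to a human moderator, not straight to the screen
    return "pass"

print(screen_text("Grabbing a beer before the game!"))   # -> flag
```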

However, automatic detection of questionable content in images or videos yields less-than-perfect results, so for projects that must allow zero leakage of inappropriate content, system-assist alone does not work well. For text, system-assist with human override works best; here at Aerva, we have used it successfully for many Twitter-based and SMS campaigns. For images and videos, we typically recommend human moderation. One way such an override might be wired up is sketched below.
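
Here is one possible sketch of system-assist with human override for text UGC. The system verdict could come from a rule-based screener like the one above; the function name and decision logic are assumptions for illustration, not a description of Aerva's product.

```python
def may_display(system_verdict, human_decision=None):
    """Decide whether a text submission may be shown on screen."""
    if system_verdict == "reject":
        return False                       # hard reject by the system
    if system_verdict == "flag":
        return human_decision is True      # shown only after explicit human approval
    # Even a system "pass" can be vetoed by a human moderator.
    return human_decision is not False

print(may_display("pass"))                        # True: displayed unless vetoed
print(may_display("flag", human_decision=True))   # True: flagged, then approved
```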

Design for Scale
Where computer systems are used, design for scale on both the back end and the human-moderation end. When back-end systems are overloaded, fail-safe mechanisms should take over so that unreviewed content never slips onto the screen; a sketch of such a policy follows.
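
The sketch below shows one way a "fail closed" policy might look when the moderation back end is overloaded. The queue depth limit and return values are illustrative assumptions, not settings from any particular system.

```python
import queue

MAX_PENDING = 5000                     # illustrative cap on the moderation backlog
pending = queue.Queue(maxsize=MAX_PENDING)

def ingest(post):
    """Accept a submission if capacity allows; otherwise hold it safely."""
    try:
        pending.put_nowait(post)
        return "queued"
    except queue.Full:
        # Fail safe: never push unmoderated content to the screen just
        # because moderation capacity is exhausted.
        return "deferred"
```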

How do you design the human-moderation side for scale? A multi-moderator system, such as a web-based interface with many (or unlimited) moderators, can solve some of the scale issues. The system can also highlight or flag questionable content and page through material to be reviewed (for images and videos); a sketch of such a shared queue follows this paragraph. For URLs and videos, there may be no easy way other than inspecting the content itself, which is why many campaigns simply reject content containing URLs.
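
Here is a small sketch of a shared review queue for multiple moderators, where flagged items surface first and media is served in pages so reviewers can work in parallel. The field names and page size are assumptions for illustration.

```python
from itertools import islice

submissions = [
    {"id": 1, "kind": "text",  "flagged": True},
    {"id": 2, "kind": "image", "flagged": False},
    {"id": 3, "kind": "video", "flagged": True},
]

def review_page(page, size=20):
    """Return one page of items for a moderator, flagged content first."""
    ordered = sorted(submissions, key=lambda s: not s["flagged"])
    return list(islice(ordered, page * size, (page + 1) * size))

print(review_page(0))   # flagged items 1 and 3 appear before item 2
```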

Post‐Moderation Recourse
A good moderation system must also have a way to recall already-moderated UGC, which makes several levels of moderation possible. For instance, super admins can reconsider material already approved by high-speed or junior moderators. In addition, quirks of the platforms may let users change portions of their content after the fact, such as swapping a Twitter avatar from something acceptable to something offensive. And when URLs are involved, the user can change what the URL points to at any time. A sketch of such a multi-level workflow follows.
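
The sketch below illustrates one way multi-level moderation with recall could be modeled. The roles, states, and class shape are assumptions made for the example, not a description of any particular product.

```python
class Submission:
    def __init__(self, text):
        self.text = text
        self.state = "pending"            # pending -> approved/rejected -> recalled
        self.history = []                 # audit trail of (role, decision)

    def decide(self, role, decision):
        # Junior moderators act on pending items; super admins may
        # reconsider (recall) anything that was already let through.
        if role == "super_admin" or self.state == "pending":
            self.state = decision
            self.history.append((role, decision))

post = Submission("Check out my pic!")
post.decide("junior_moderator", "approved")
post.decide("super_admin", "recalled")    # pulled back after the avatar or URL changed
```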

The trend is toward more user-generated and social media content, and it adds a dimension of engagement that produced content cannot replicate. However, tread with your eyes wide open: a PR snafu, or worse, is never far away when you spice things up with UGC. A well-designed system in the hands of experts can deliver all the benefits of social media and UGC without the headaches.