Google's announcement Monday (Dec. 4) came the same day Facebook said it was launching an app to help young kids communicate via social media, but with their parents' permission and under their supervision.
Google said that it was adding more staffers to monitor content, saying it was applying the lessons it learned from trying to ferret out extremist content to tackling other problematic content.
It said the goal was to have over 10,000 people reviewing videos by 2018. It also said it would give users notice of how it was vetting and flagging content, and would apply stricter criteria for where it places advertising.
"We are planning to apply stricter criteria, conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should. This will also help vetted creators see more stability around their revenue. It’s important we get this right for both advertisers and creators, and over the next few weeks, we’ll be speaking with both to hone this approach," Susan Wojcicki, CEO of YouTube, blogged.
That sounded like good news to the Parents Television Council (PTC).
“We applaud Google’s decision to increase monitoring of violent and extreme content – both the videos and comments – on YouTube. This is a great first step towards not only protecting advertisers, but also protecting the health and safety of young viewers who may be watching," said PTC President Tim Winter. "Our past research on YouTube found that children entering ‘child-friendly’ search terms were confronted with highly offensive content in the text commentary posted by other site users. YouTube continues to be a site that needs constant monitoring, and today’s announcement should assist with that goal.
“Additionally, with YouTube Kids most recently announcing that it is adding safeguards on this site geared specifically for children, we urge the company to extend increased monitoring to this site as well. Safety of kid-targeted content should also be an extremely high priority for Google.”
Edge providers are on a bit of a political and policy knife edge with content monitoring. On one side, there are those who say that it is online discrimination on the basis of content, which sounds like a violation of net neutrality. On the other side, there is pressure from the government not to facilitate terrorist recruiting, hate speech, or sex trafficking, and to take more ownership of the content allowed on social media.
Edge providers say that if they become liable for all the speech on their platforms, the social media model will be an immediate casualty.