Inconsistencies in how the company polices advertising on controversial content are coming to light
Google’s decision-making process over which YouTube videos are deemed “advertiser friendly” faces scrutiny from both brands and creators, highlighting once again the challenge of large-scale moderation.
The company last week pledged to change its advertising policies after several big brands pulled their budgets from YouTube following an investigation that revealed their ads were shown alongside extremist content, such as videos promoting terrorism or antisemitism.
Havas, the world’s sixth largest advertising and marketing company, pulled all of its UK clients’ ads, including O2, BBC and Domino’s Pizza, from Google and YouTube on Friday, following similar moves from the UK government, the Guardian, Transport for London and L’Oréal.
Google responded with a blog post promising to update its ad policies, stating that with 400 hours of video uploaded to YouTube each minute “we don’t always get it right”.
However, the inconsistencies in how the company polices advertising on controversial content are coming to light – and it’s not just advertisers who are complaining. Some YouTube creators argue their videos are being unfairly and inconsistently “demonetized” by the platform, cutting off the income they earn from a share of the revenue on ads placed against their videos.
Matan Uziel runs a YouTube channel called Real Women, Real Stories that features interviews with women about hardship, including sex trafficking, abuse and racism. The videos are not graphic, and Uziel relied on the advertising revenue to fund their production. However, after a year, Google has pulled the plug.
“It’s a nightmare,” he said. “I can’t trust YouTube any more.”
“It’s staggering because YouTube has a CEO [Susan Wojcicki] who is a feminist and a big champion for gender equality,” he said, pointing out that there were other far more extreme videos such as those promoting anorexia and self-harm that continued to be monetized. He also referenced PewDiePie’s videos featuring antisemitic “jokes” that were allowed on the platform for months.
“It’s bad that YouTube attempts to censor this very important topic and is not putting its efforts into censoring white supremacy, antisemitism, Islamophobia, racism, jihadists and stuff like that,” Uziel said.
He wants Google to be more open about exactly how it moderates content. “I want them to be transparent about what they think to be advertiser friendly,” he said.
Google currently uses a mixture of automated screening and human moderation to police its video sharing platform and to ensure that ads are only placed against appropriate content. Videos considered “not advertiser-friendly” include those that are sexually suggestive, violent, contain foul language, promote drug use or deal with controversial topics such as war, political conflict and natural disasters.
Transgender activist Quinby Stewart agrees there needs to be more transparency. He complained after YouTube demonetized a video about disordered eating habits. “I definitely don’t think the video was even close to the least advertiser-friendly content I’ve posted,” he said.
lmao of course the first video i had marked as not advertiser-friendly was the one about my disordered eating habits
He complained to the platform and the company has since approved the video for monetization.
“YouTube’s policy is just very vague, which makes sense because I think demonetization needs to be handled on a case-by-case basis. Their policies seem more reasonable when you ask a human to check it, but the algorithm that catches videos originally is really unfair,” he said.
Sarah T Roberts, an information studies professor at UCLA who studies large-scale moderation of online platforms, said that large technology companies need to be more honest about their shortcomings when it comes to policing content.
“I’m not sure they fully apprehend the extent to which this is a social issue and not just a technical one,” she said.
Companies such as Google and Facebook need to carefully think through their cultural values and then make sure they are applied consistently, taking into account local laws and social norms. Roberts said the drive to blame either humans or algorithms for decisions was based on a false dichotomy as human values are embedded into the algorithms. “The truth is they are both engaged in almost every case,” she said.
The fact that the issue is now hitting Google’s bottom line should be a wake-up call, she said. “Now it’s financial and is going to hit them where it hurts. That should create some kind of impetus.”
The Guardian asked Google for more clarification over how the moderation process works, but the company did not respond.