YouTube Slammed in Senate Hearing on Online Engagement

It was kind of like an episode of "Algorithms Gone Wild" on Capitol Hill Tuesday (June 25).

The Senate Commerce Communications Subcommittee took a long look at the use of persuasive technologies on internet platforms that optimize engagement--AI, algorithms, web design--and whether mandating algorithmic transparency is a policy option. YouTube was cited by both sides for conduct they suggested was unbecoming a consumer-friendly platform.

That hearing was followed by a briefing on "dark patterns," which are ways edge providers get their users to make choices they might not ordinarily make as a way to keep them on a site, collecting data or money or both.

Keeping users engaged--on a site longer so they can be marketed to directly or their data harvested for use later--is how edge provider giants make their billions.

Sen. John Thune (R-S.D.), chairman of the subcommittee, took aim at Google-owned YouTube early in the June 25 hearing, citing a Bloomberg report that the social media platform had been chasing such engagement for years while ignoring calls from its own employees to address "toxic video" like vaccination conspiracies and "disturbing content" aimed at children.

He also cited a New York Times story about YouTube automatically recommending video of children playing in a pool to people who had viewed sexually themed content. He called that "truly troubling" and a consequence of using algorithms and AI to optimize engagement.

He also invoked Facebook filter bubbles as a possible contributor to political polarization because they allow users to remain in their own comfort zones and echo chambers.

Thune said Congress has a role in ensuring those companies keep consumers at the forefront as they innovate.

Thune said that while there should be a degree of personal responsibility when users take advantage of supposedly free services, platforms have a responsibility to be transparent about "how the content we see is being filtered."

Thune said users should not be subject to manipulation by opaque algorithms.

Sen. Brian Schatz (D-Hawaii), ranking member of the full committee, took aim at YouTube as well, citing it as one of those sites trying to keep users engaged in a stream of increasingly inflammatory content "pushed out with very little transparency or oversight by humans." He suggested that would need to change. Schatz cited a Wall Street Journal article that found YouTube's recommendation engine often recommended conspiracy theories, misleading videos and partisan viewpoints, even when users weren't seeking that content out.

Schatz also pointed to the online distribution of the Christchurch, New Zealand, mass shooting and the viral video of House Speaker Nancy Pelosi that was slowed down to make her appear drunk, saying: "Something is really wrong here."

He said he thought the problem was that Silicon Valley operates on the premise that society would be better, more efficient, smarter and more "frictionless" if "we would just eliminate steps that include human judgment."

But he said that if YouTube, Facebook or Twitter employees were making the recommendations, he questioned whether they would have recommended those "awful videos."

He said he wasn't saying all decisions should be made by humans, but that Silicon Valley was letting "algorithms run wild" and only using their humans to "clean up the mess."

John Eggerton

Contributing editor John Eggerton has been an editor and/or writer on media regulation, legislation and policy for over four decades, including covering the FCC, FTC, Congress, the major media trade associations, and the federal courts. In addition to Multichannel News and Broadcasting + Cable, his work has appeared in Radio World, TV Technology, TV Fax, This Week in Consumer Electronics, Variety and the Encyclopedia Britannica.