South San Francisco, CA. April 30, 2021. Rebroadcast from Moz://a News Byte
U.S. Senate vs. Algorithms
Executives from big tech companies like Facebook, Twitter and YouTube gathered this week before the U.S. Senate Judiciary subcommittee on privacy and technology to discuss their platforms’ use of algorithms. The spotlight on algorithms grows brighter as society feels the effects of social feeds and suggested videos that (sometimes) help spread misinformation and radicalizing content.
The hearing centered on social media algorithms and the role they play in incentivizing extreme content like hate speech and disinformation. During the hearing, the U.S. government leaned on the knowledge of experts like Harvard University’s Joan Donovan (who’s appeared on Dialogues & Debates) and the Center for Humane Technology’s Tristan Harris. The hearing makes for an interesting watch; you can tune into it below.
CSPAN: Video — Senate Hearing On Social Media Algorithms
Facebook, Twitter, and YouTube executives testified before a Senate Judiciary subcommittee on social media companies’ use of algorithms in their platforms.
Ars Technica: Algorithms Were Under Fire At A Senate Hearing On Social Media
“Algorithms have great potential for good,” said Sen. Ben Sasse (R-Neb.). “They can also be misused, and we the American people need to be reflective and thoughtful about that.” …
… [Joan Donovan] pointed out that the main problem with social media is the way it’s built to reward human interaction. Bad actors on a platform can and often do use this to their advantage. “Misinformation at scale is a feature of social media, not a bug,” she said. “Social media products amplify novel and outrageous statements to millions of people faster than timely, local, relevant, and accurate information can reach them.”
We were heartened to see U.S. Senators asking these important questions about algorithmic amplification. Since 2019, we have been urging YouTube to increase transparency about the scale and impact of its content recommendation algorithms. We called on YouTube to work with third-party researchers to verify its claim that it reduced ‘borderline content’ on YouTube by 50%. And we began working directly with people to document their own experiences with ‘regretful content’ on YouTube by collecting and analyzing data from our RegretsReporter tool.
After the hearing, here’s what we had to say:
Mozilla Foundation: Senate Hearing Confirms YouTube Won’t Fully Release Recommendations Data Without More Pressure from Public and Congress
“We urgently need to understand how algorithmic amplification is impacting the content we are recommended and consume. We also need to empower independent, third-party research and analysis into their algorithms in order to identify and disclose crucial problems.
Through its silence, YouTube has made it clear that it won’t share this crucial information without additional pressure from lawmakers and the public.”
– Ashley Boyd, Mozilla
YouTube has made it clear that it does not intend to release information about the scale and impact of its content recommendation algorithms globally. To address this information gap, Mozilla built a browser extension, RegretsReporter, that allows YouTube users to report a ‘regrettable’ video when it’s recommended. More than 30,000 people have already downloaded the extension to help us pressure YouTube to act.
The News Byte
Written By Xavier Harding
Edited By Audrey Hingle, Will Easton
Art Direction Nancy Tran
Email Production Alexander Zimmerman, Will Easton
Mozilla is a non-profit organization, so our campaigns for your privacy and security online, and to keep the web open, healthy and accessible to all as a global public resource, depend on contributions from subscribers like you. If you haven’t already contributed this year, could you please chip in a small donation today? Thanks!