Facebook now has about 30,000 content moderators, and Twitter, a smaller company, doubled its content moderation workforce to 1,500 in recent years, according to The Washington Post. In 2017, Susan Wojcicki, the CEO of YouTube, announced in a blog post that Google, in response to public outcry about offensive and violent content on YouTube, would expand its workforce responsible for reviewing content to more than 10,000 people.

When asked about the hourly quota, Graham said the company prioritizes accuracy and measures productivity along with “a variety of dimensions to evaluate an employee’s overall job performance.” Graham also declined to comment on whether the company supported the Santa Clara Principles, a set of voluntary content moderation practices that include publishing the number of post removals and account suspensions, notifying and providing explanations to impacted users, and implementing an appeal system.

Last year, the Electronic Frontier Foundation reported that Facebook, LinkedIn, Medium, Snap, Tumblr, and YouTube all support the initiative. In addition, Facebook, Twitter, and YouTube all issue transparency reports, though, as the advocacy group New America noted in its assessment of those reports last year, the companies failed to give a full picture of takedown figures.

“If products that are against our policies are found on our site, we immediately remove the listing, take action on the bad actor, and further improve our systems,” Graham said.

“Their very business model is flawed … allowing anyone with very little vetting to sell pretty much anything—that’s the fundamental problem,” said Natasha Tusikov, an assistant professor at York University in Toronto and author of “Chokepoints: Global Private Regulation of the Internet.” “It’s impossible to weed out all the people who are trying to get around government regulations or defraud people, because you require so very little of them when they sign up,” she said.

The article reported that several titles mentioned were removed from Kindle Direct Publishing. “As a bookseller, we believe that providing access to the written word is important,” Graham said in an email to The Markup, repeating the company’s response in the article. “We invest significant time and resources to ensure our guidelines are followed, and remove products that do not adhere to our guidelines.”

“Curation algorithms are largely amoral,” DiResta wrote in a 2019 piece for Wired about her findings. “They’re engineered to show us things we are statistically likely to want to see, content that people similar to us have found engaging—even if it’s stuff that’s factually unreliable or potentially harmful.”

Adelin Cai, formerly of Pinterest and Twitter, said both human review and machine detection are critical in making platforms their best selves. These tools, she pointed out, can scan potentially harmful images “without subjecting [the staff] to a lot of the exposure to bad content.” But Cai doesn’t think automation will replace human judgment entirely, even as the industry matures and AI advances. “Nothing supersedes the ability of the human mind to be nimble and understand context,” Cai said.

Just last month, Cai and Clara Tsao, a former Mozilla fellow and chief technology officer of a U.S. government interagency anti-violent-extremism task force, launched the first membership-based professional organization for the field of trust and safety, the Trust and Safety Professional Association. (They also launched a corresponding foundation devoted to education, case studies, and research.) Google, Airbnb, Slack, and Facebook are just a few of the group’s star-studded list of inaugural funders.

This article was originally published on The Markup by Annie Gilbertson, and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
