Big Tech Ditched Trust and Safety. Now Startups Are Selling It Back As a Service

The same is true of the AI systems that companies use to help flag potentially dangerous or abusive content. Platforms often use huge troves of data to build internal tools that help them streamline that process, says Louis-Victor de Franssu, cofounder of trust and safety platform Tremau. But many of these companies have to rely on commercially available models to build their systems—which could introduce new problems.

“There are companies that say they sell AI, but in reality what they do is they bundle together different models,” says de Franssu. This means a company might combine several different machine learning models (say, one that estimates a user’s age and another that detects nudity, to flag potential child sexual abuse material) into a single service it offers clients.
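
To make the bundling concrete, here is a minimal sketch of what such a composite service might look like, assuming a vendor that chains two off-the-shelf models behind one interface. The class, method names (predict_age, is_nsfw), and the age threshold are invented for illustration and do not describe any real vendor's product.

```python
# Hypothetical sketch of a "bundled" moderation service: the vendor chains
# several commercially available models behind a single API. Names and
# thresholds are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ModerationResult:
    flagged: bool
    reasons: list[str] = field(default_factory=list)

class BundledModerationService:
    def __init__(self, age_model, nudity_model):
        # Both underlying models are third-party components the vendor did
        # not train itself; every client inherits their behavior.
        self.age_model = age_model
        self.nudity_model = nudity_model

    def review(self, image_bytes: bytes) -> ModerationResult:
        reasons = []
        estimated_age = self.age_model.predict_age(image_bytes)   # hypothetical call
        contains_nudity = self.nudity_model.is_nsfw(image_bytes)  # hypothetical call

        # Combine the two signals: possible minor plus nudity escalates as
        # potential CSAM. A systematic error in either underlying model
        # repeats for every platform that buys this service.
        if contains_nudity and estimated_age < 18:
            reasons.append("possible_csam")
        elif contains_nudity:
            reasons.append("adult_nudity")

        return ModerationResult(flagged=bool(reasons), reasons=reasons)
```

Because every client calls the same review pipeline, a misfire in either shared model surfaces simultaneously on every platform that relies on it, which is the proliferation problem described below.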

And while this can make services cheaper, it also means that any issue in a model an outsourcer uses will be replicated across its clients, says Gabe Nicholas, a research fellow at the Center for Democracy and Technology. “From a free speech perspective, that means if there’s an error on one platform, you can’t bring your speech somewhere else; if there’s an error, that error will proliferate everywhere.” This problem can be compounded if several outsourcers are using the same foundational models.

By outsourcing critical functions to third parties, platforms could also make it harder for people to understand where moderation decisions are being made, or for civil society—the think tanks and nonprofits that closely watch major platforms—to know where to place accountability for failures.

“[Many watching] talk as if these big platforms are the ones making the decisions. That’s where so many people in academia, civil society, and the government point their criticism to,” says Nicholas. “The idea that we may be pointing this to the wrong place is a scary thought.”

Historically, large firms like Telus, Teleperformance, and Accenture would be contracted to manage a key part of outsourced trust and safety work: content moderation. This often looked like call centers, with large numbers of low-paid staffers manually parsing through posts to decide whether they violate a platform’s policies against things like hate speech, spam, and nudity. New trust and safety startups are leaning more toward automation and artificial intelligence, often specializing in certain types of content or topic areas—like terrorism or child sexual abuse—or focusing on a particular medium, like text versus video. Others are building tools that allow a client to run various trust and safety processes through a single interface.
