In The Face Of Attacks, TikTok Tries To Charm Its Critics With Transparency



The beleaguered company is taking a new tack, offering the public more visibility into its inner workings through “transparency and accountability centers.” But the effort may be raising more questions than it answers.


For the last three years, TikTok has been telling Congress that it would launch “Transparency and Accountability Centers” in answer to growing criticism of how the company safeguards its American users and their data.

Finally, as scrutiny of the China-owned social media giant hits a fever pitch in the United States, the elusive centers are actually opening. They’re part of a major shift in TikTok’s strategy at a time when investigations, and potential state and federal bans on the app, are becoming an existential threat in its third largest market (behind only China and Indonesia). Just Thursday morning, Senate Democrat Michael Bennet demanded Apple and Google yank TikTok from their app stores.

With these physical spaces, announced to much fanfare in early 2020, the company will welcome policymakers and outside experts with a goal of helping to demystify how TikTok moderates and recommends content. The company also hopes these centers will allay concerns about its approach to data privacy and security and deepen trust in the platform. On Tuesday, TikTok opened the doors to its new Los Angeles facility to a small group of journalists (outposts in Washington, Dublin and Singapore are also in the works). Despite the center’s stated focus on transparency, journalists who toured it had to agree to do so “on background.” TikTok also said that a day earlier, it had its first in-person visit from a lawmaker. It would not say who.

The neon-lit center felt akin to an interactive room in a museum—outfitted with touchscreens where guests could swipe through TikTok’s community guidelines, computers where they could learn about TikTok’s recommendation engine, and booths where they could simulate the experience of a content moderator. (That section made clear how challenging and taxing human moderation can be.) Off limits was a server room where engineers from Oracle, which is working to review TikTok’s systems and localize its user data and traffic in the U.S. in Oracle Cloud, can study the platform’s source code; Oracle staffers must sign NDAs, lock up their phones and pass through a metal detector to access it. (Oracle engineers are also reviewing code at a center opened last month in Columbia, Maryland.)

While the tour focused heavily on TikTok’s trust and safety work, particularly for teens and families, it left as many questions as answers. The purported inside look into TikTok’s algorithm hardly scratched the surface, offering only a high-level overview of the three-step process its machine learning models use to narrow down and recommend personalized content. Also notably absent from the transparency center was any information about TikTok parent ByteDance and its ties to China.

For years, as the leaders of TikTok’s biggest American rivals made the rounds in Washington, appeared at major conferences and interacted with the public—Meta even opened its own version of a transparency center—TikTok largely avoided engaging. But as the Biden administration struggles to strike a national security deal with TikTok, state attorneys general investigate the app, and state and federal lawmakers try to restrict or outright ban it, that strategy is changing. In the past year and a half, the company has gone to greater lengths to defend itself and reshape the narrative. TikTok’s head of safety, Eric Han, has started speaking on panels. Chief operating officer Vanessa Pappas and head of U.S. public policy Michael Beckerman have both testified before Congress. And next month, on the heels of the Los Angeles transparency center’s opening, CEO Shou Zi Chew will testify on Capitol Hill for the first time.


Got a tip about TikTok or issues facing creators? Reach out to the author Alexandra S. Levine on Signal at (310) 526–1242 or email [email protected]


At the briefing on Tuesday, TikTok would not discuss the status of its reportedly stalled CFIUS negotiations, being steered by TikTok’s U.S. data security leads Will Farrell and Andy Bonillo. But it did highlight some steps it’s taking to protect users. As part of Project Texas—an internal effort aimed at addressing concerns over the potential for China to access U.S. user data or influence the content that Americans see—TikTok is forming a new subsidiary called TikTok U.S. Data Security. That arm, expected to look more like a defense contractor than a tech company, will be staffed by personnel approved by CFIUS and governed by an independent board of national security and cybersecurity experts. CFIUS will also approve inspectors, auditors and other third parties that, along with Oracle, will be responsible for vetting, securing and deploying TikTok’s software code and reviewing its moderation and recommendation technology.

TikTok will also soon begin testing a way for users to reset the algorithm that selects the videos they see in the “For You” feed, which currently surfaces content based on the user’s past activity. It will separately begin testing a feature that explains to creators why some of their videos may not be eligible for the “For You” page—which can mean the difference between a video going viral and hardly being noticed—and offers an opportunity to appeal. Finally, TikTok is updating how it takes enforcement action against the accounts of its more than a billion users, a process that has historically been opaque, with little communication to creators about why a particular video or account was suspended.

“We’ve heard from creators that it can be confusing to navigate,” said TikTok’s global head of product policy, Julie de Bailliencourt. “We also know it can disproportionately impact creators who rarely and unknowingly violate a policy, while potentially being less efficient at deterring those who repeatedly violate them.” The new strike system, currently taking effect globally, will aim to weed out repeat offenders, she said. “We will continue evolving and sharing progress around the processes we use to evaluate accounts and assure accurate, nuanced enforcement decisions.”
