A Privacy Grading System Could Warn TikTok Users on Data Access

Feb. 19, 2025, 9:30 AM UTC

The ongoing legal scuffle around TikTok’s divestment from its China-based ownership raises a range of questions, including why TikTok, why now, and does the remedy fit the risk? All are legitimate questions. One of the concerns about the popular app is that, in the words of the US Supreme Court, it collects “vast amounts of sensitive data” from 170 million US TikTok users.

But what companies—foreign and domestic—don’t? And if our digital privacy is a legitimate government concern, there is a simple, less intrusive, and less constitutionally suspect way to rein in abusive privacy practices of any company that has access to our most sensitive data.

The solution may lie in wielding a tool that would provide meaningful transparency over commercial practices involving our data: a grading system, akin to what cities such as New York use to grade restaurants for their compliance with the local health code—or the warnings tobacco and alcohol products must display on their packaging to alert consumers to the health risks associated with their use.

It isn’t difficult to imagine a system giving clear and consistent warnings to consumers about the practices of companies that have access to our digital lives—naming and shaming those who don’t protect our privacy in a very simple and straightforward way.

This idea has precedent: Because of the EU’s General Data Protection Regulation and the laws of some states such as California, many tech companies now give consumers the opportunity to review their cookie policies. But those notices rarely stand in the way of us breezing right through them to get to the apps and sites we want to access. A consumer generally needs a law degree and the ability to read reams of end-user agreements obscured by legalese to understand exactly what a company’s data-protection policies are before clicking “accept.”

Such weak disclosure systems sometimes do more harm than good. They can serve as a fig leaf for abusive practices, giving consumers a false sense of security while masking what’s really going on behind those disclosures.

TikTok and ByteDance aren’t the only companies that collect sensitive data from Americans. Companies collect data on the information we view, much of it pushed to us by algorithms, and on our own and even our friends’ activities online. This information isn’t used only to sell us products; it can also target us with unwanted content and invite abuse, harassment, and the suppression of dissident voices, even those advocating for greater digital privacy.

In its recent opinion on the TikTok ban, the Supreme Court created an opening for Congress to take a hard look at the data-collection practices of all businesses. That’s exactly what Congress should and can do—in a meaningful and actionable way. And, no, the federal government shouldn’t seek to take an ownership stake in TikTok, as President Donald Trump has floated as a possible solution to the ByteDance problem. This move would allow public officials to peer into our private online activities—a truly dystopian and chilling scenario.

Is there a better way? Throughout the country, local health departments conduct health-code inspections of restaurants, issue violations, and even shut down restaurants whose kitchens don’t follow safe practices. But few localities publish the health department’s findings, and, short of a restaurant being closed for egregious violations, few consumers will ever know that there might be more than a fly in their soup. In contrast, some cities follow an approach similar to New York City’s, which publicizes restaurants’ compliance with the local health code: each restaurant is issued a grade based on its compliance, and that grade is posted prominently in its front window.

Would this grade-based disclosure system work for companies that harvest our data? Such a system would identify a series of characteristics of digital operations that protect consumer privacy and those that don’t. It would then cluster those characteristics under particular grades, from “A” through “F,” with the practices most protective of privacy receiving the highest grade.

When a company is essentially in the business of selling its users’ data—even if it markets itself as a wellness brand, a social media site, or a giant online marketplace—it would receive a far lower grade, even an “F.” As the saying goes, if you get a service for free, you’re likely the product.

Companies would have to choose among these different clusters of characteristics and publicly declare, in binding and enforceable agreements, that they’re providing those protections clustered under a particular grade. And here’s the kicker: Instead of burying their practices in opaque end-user agreements, they would have to display this letter grade any time a consumer seeks to access their site or app, making the company’s privacy practices crystal-clear and easy to understand. This approach would give the consumer a simple read on that company’s practices and allow them to easily accept or reject those practices with eyes wide open.
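To make the clustering concrete, here is a minimal sketch, in Python, of how declared privacy practices might map to letter grades. The characteristic names and clusters below are hypothetical placeholders, not a proposed standard; the actual suite of protections under each grade would be devised by regulators, scholars, and industry, as discussed next.

```python
from enum import Enum

class Grade(Enum):
    A = "A"  # most protective of privacy
    B = "B"
    C = "C"
    D = "D"
    F = "F"  # least protective, e.g., a business model built on selling user data

# Hypothetical clusters of privacy-protective characteristics.
# The real suite would be defined by regulators with input from
# privacy scholars and, to an extent, industry.
GRADE_CLUSTERS = {
    Grade.A: {"no_third_party_sale", "data_minimization",
              "end_to_end_encryption", "user_deletion_rights"},
    Grade.B: {"no_third_party_sale", "data_minimization",
              "user_deletion_rights"},
    Grade.C: {"no_third_party_sale", "user_deletion_rights"},
    Grade.D: {"user_deletion_rights"},
}

def assign_grade(declared_practices: set[str]) -> Grade:
    """Return the highest grade whose full cluster of protections
    is covered by the practices a company publicly declares."""
    for grade in (Grade.A, Grade.B, Grade.C, Grade.D):
        if GRADE_CLUSTERS[grade] <= declared_practices:
            return grade
    return Grade.F  # none of the protective clusters is satisfied

if __name__ == "__main__":
    # A company declaring only deletion rights earns a "D."
    print(assign_grade({"user_deletion_rights"}).value)  # -> D
    # A company committing to the full top-tier cluster earns an "A."
    print(assign_grade(GRADE_CLUSTERS[Grade.A]).value)   # -> A
```

The design choice mirrors the proposal itself: a company picks which cluster of protections to bind itself to, and the grade follows mechanically from that choice.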

Since this is a mere disclosure system, it won’t run afoul of the First Amendment because it doesn’t involve the government making choices among different company practices; it lets consumers do that. Federal and state governments, in collaboration with trust and privacy scholars and, to an extent, industry, could devise the suite of protections under each grade. The companies would then choose among them, deciding which protections they will provide and which they won’t.

A grading system would catalyze consumer knowledge and choice without making that choice burdensome for the consumer. In addition, stiff penalties would follow should a company fail to adhere to the practices it professes to follow.

Such a system could even create a race to the top, with companies, like restaurants chasing that coveted “A” grade, striving to improve their practices. What’s more, it wouldn’t involve the specter of government intrusion into (let alone ownership of) our online identities, and it would reduce the opportunities for abusive practices.

The concerns about TikTok are certainly legitimate. But we should have such concerns about the data-collection practices of many other companies as well, both foreign and domestic. There’s no reason Congress can’t explore remedies that can protect all Americans’ privacy regardless of the source of the risk and do so without raising any constitutional concerns. A robust and muscular grading system around digital privacy would do just that.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Raymond Brescia is the Hon. Harold R. Tyler chair in law and technology at Albany Law School.

To contact the editors responsible for this story: Jada Chin at jchin@bloombergindustry.com; Alison Lake at alake@bloombergindustry.com
