Before we can even consider how to protect data privacy online, we have to decide who is in charge of protecting data privacy.
Data privacy advocates believe that tech companies should create services that are ethically and socially responsible. Conversely, tech companies insist that they don't have the bandwidth to shield the public from every conceivable harm, nor is it their role to do so. The companies claim that doing things like limiting personal data use and closely moderating user content would hamper their ability to make innovative products and services. Mark Zuckerberg's motto might be "move fast and break things," but privacy advocates want the companies to move more carefully and protect the public interest.
You can see the conflict between technological innovation and public oversight play out in government approaches to tech governance. When the government moves to regulate online platforms, data brokers, and other tech enterprises, its proposals get plenty of pushback from the tech industry. That pushback often includes arguments about why adding oversight or limitations will hurt our online systems by slowing them down, limiting their content or accessibility, or making them too costly to maintain.
There is also political disagreement over how to regulate tech companies and tech infrastructure. Concerns about data privacy are bipartisan. Nearly everyone agrees that the ways companies collect and use our data and information are creepy. But the rationales for why they're creepy vary widely depending on political perspectives and motives.
One statutory example of the friction around who is in charge of the public interest online has to do with a law that doesn't directly implicate data privacy: Section 230 of the Communications Decency Act of 1996. With few exceptions (including copyright protections and criminal law provisions), Section 230 shields providers of internet services and content from liability for the content that appears on their products. It promises that internet companies won't be treated like publishers: they won't be subject to defamation laws, responsible for moderating hate speech and other harmful content, or liable for the actions people take as a result of the content on their products. Some public interest tech advocates call Section 230 tech companies' "get-out-of-jail-free card."
Section 230 does not deal directly with data privacy (although it can play a role in doxxing, revenge porn, and other situations where people share the private information of others without their consent). However, Section 230 reveals the complexity of regulating the internet. It is hard to create a free, uncensored internet that serves the public interest without it becoming a dangerous, uninhabitable online landscape for at least some of its users.
Building a free, accessible internet that is useful and welcoming to all is a tough task that we have yet to complete. We can't simply do away with practices like content moderation and data collection.
Many of our favorite internet technologies need personal data in order to function seamlessly and to serve us. Our weather and mapping apps need to know where we are, flu and COVID trackers need to know whether we are sick, and our workout apps need to be able to record our steps and heart rates. Our shopping platforms and music apps can only recommend things we like by knowing what we've purchased before. But we also want more control over our personal data, and most of us would like to prevent it from being used by certain entities, or in certain ways. For example, we may want our Google Maps app to work, but we might not want the police to be able to use that mapping data to surveil us. Or we may be fine with Facebook having our information, but maybe we'd be less fine with Facebook selling that data to our health insurance company.
Similarly, we want Section 230-like protections for online platforms, so they don't censor our speech for fear of legal punishment, but we also want to ensure that people aren't attacked online, and that internet fora don't become digital incubators for hate groups and violence. We'd like to be able to get all of the content that delights us, and to be able to communicate with everyone we care to talk to, but we'd like to prevent Facebook from inciting genocide or Amazon from platforming unsafe products for sale.
Here are some examples of differing perspectives on Section 230, to demonstrate how complicated these debates over how to build the best systems are:
- Carrie Goldberg, an attorney who specializes in digital sex crimes, says Section 230 should be reformed, calling the law "a monster" birthed by Congress.
The debate over who is responsible for the public interest online highlights a significant tension between innovation and regulation. Data privacy advocates argue that tech companies should prioritize ethical and socially responsible practices, while tech firms claim that extensive oversight hampers their ability to innovate. The friction extends to government regulation, with tech companies pushing back against proposals that they believe could hinder their operations.
Section 230 of the Communications Decency Act exemplifies this complexity, offering broad protections to internet platforms but also attracting criticism for shielding companies from liability for harmful content. While Section 230 doesn't directly address data privacy, it underscores the broader challenge of balancing a free, accessible internet with the need for public safety and ethical standards.