Regulating social media may prove to be a tricky proposition. To what organization should the responsibility fall, if any? Access Partnership’s Greg Francis unpacks the complexities involved in regulating content on social media platforms.
President Trump’s social media summit at the White House this past July notably didn’t include Facebook or the President’s own favorite social media channel, Twitter. The event was animated by anxiety around an apparent anti-conservative social media bias.
In August a proposed executive order was leaked that would put the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) in charge of developing and enforcing new regulations to, in effect, suspend social media companies’ right to curate or suppress content on their platforms. There followed rumblings of a possible mutiny from within these agencies, which were coming to the subversive conclusion that being handed this much power over online content — what often amounts to fora for free speech — was unconstitutional (so much for bureaucratic scope creep).
The inside-the-Washington-beltway approach to social media regulation notwithstanding, these false starts underscore the complexity of managing social media platforms, particularly under an administration bent on getting its hands deep into management matters that Republicans have historically left to the wisdom of America’s citizenry. So it is no surprise that Facebook itself has called attention to its own preference for regulation, in the form of Nick Clegg’s extrapolation of Mark Zuckerberg’s March call. If nothing else, the leaked draft executive order makes other, less statist solutions appear attractive, a notion which, in combination with Facebook’s call, paints the outlines of something like an independent third-party adjudicator for social media companies.
It is a fact that businesses can create employment, innovate, cut waste and carbon, but they cannot tell us the truth. More exactly, their goal is to deliver services efficiently and not to adjudicate on what is true, what is not and how to decide. But in light of recent events, governments seem to expect the tech sector to do that (in a way they never did of agribusinesses, newspapers or banks), and the resulting failure is widening the trust gap between the two.
Then Came the Lawyers
When the European Court of Justice (ECJ) recognized the right to be forgotten, it also required, in a fairly clear-cut way, that search engines such as Google “appraise” a request to be forgotten. The policy chatter at the time (May 2014) wondered: how is that supposed to work, exactly? These few years later we know: It works because Google took on the not insignificant task of reviewing and deciding on the merits of each application to be forgotten, effectively validating whether someone should be able to exercise the newly named right.
Since then, though not necessarily because of the ECJ decision, a general trend has been to outsource decisions on other tough questions to tech companies. This includes calls for them to develop their own standards for child online protection, for what is and is not offensive, and for what taxes they should pay, all of which comes with an instruction to comply with any official request to provide personal data (the 2016 Apple-FBI kerfuffle being one high-profile example). This is more than a requirement for high levels of self-awareness; it is asking rational economic actors to take a longer view than is reasonable.
Then Came the Rules
For this reason, internet companies were not entirely unhappy with their purgatory, wishing, as they did, to continue living in a world of light-touch regulation. Received wisdom was that this light touch enabled the innovation, growth and – as was articulated by the political class that championed the idea – U.S. dominance of the internet economy. The lack of taxation certainly did the growth of cross-border internet commerce no harm. And if Google had to step up to a quasi-public function from time to time at the insistence of paranoid Europeans, or if YouTube had to decide double-quick when images of violence were too profitable or too gruesome to remain accessible, then that was a small price to pay.
I Used to Like to Go to Work, But They Shut It Down
While the world is seeing a broad techlash, one can generalize only so much about internet companies. B2B cloud services are different from social media. Social media companies are internet companies, though the reverse is not always true. But their common policy denominator lies in the challenges they face with data management, and it is part of the reason developed economies decided to regulate data themselves. Many countries are, therefore, busy setting up and ceding powers to data protection authorities. Some governments are starting to pass laws to limit use of social media. Some, such as the U.K., are developing laws on online harms, requiring companies to keep users safe “and tackle illegal and harmful activity on their services.” Malaysia has already defined fake news and made it illegal to create or circulate it, with sentences of up to six years’ imprisonment and high fines.
Six Lanes of Traffic, Three Lanes Moving Slow…
If uptake of data protection authorities has been slow across the world, recognition that they are not enough has moved fast. Even in the business-friendly U.K., currently struggling to demonstrate its dynamic possibilities to the international community, the Competition and Markets Authority believes a new regulator might be needed to manage the power of digital platforms, which is becoming an ever-broader church.
This amounts to a load of regulation, which itself demands a generalized solution. No government wants to have separate regulators for data, privacy, competition, telecommunications, access to content, child safety and the dozens of other potentially controversial functions that digital platforms enable. No business wants the compliance costs, still less the headache of engaging with dozens of regulators with different but overlapping mandates; this is a recipe for stifling the enablers of growth that the internet and digital platforms still represent.
From Out of This Darkness and Into the Day
What emerges from this churn are the contours of a new thing: a flexible regulatory body that manages online challenges dynamically and according to local laws, regulation and norms. It does not need to be expert in everything from competition to press regulation, but it will need to coordinate how these issues are framed and assessed in each market. Crucially, it takes the technology sector out of the business of deciding for others what is news, what is harm, what is truth and the many points in between, so they can get on with the holy work of creating jobs, driving growth and solving problems, rather than defending themselves against norms they did not know they had transgressed. And if this sounds as though it might lead to technology being less democratizing – spreading less access to information and extending less far the reach of inclusive, parliamentary-democratic values, one need only look around. That ship may already have sailed into the bay, and not just in one-party places where the internet economy is alive and well, such as China or the UAE.
In return, governments will have to agree on or, in the absence of any established best practice to copy (which is mostly their preference anyway), define some workable top-level frameworks for the regulation of data, content and digital platforms. The challenge will always be in convincing companies to help drive the development of regulation, but the reasons for their doing so are increasingly compelling: If you miss the planning meeting, you have to live with someone else’s design choices, and few tech companies, forward-thinking as they are, are likely to submit to such dire straits.
Author: Greg Francis, Managing Director, Access Partnership
This article was originally published at Corporate Compliance Insights on 18 October 2019.