Lilla Nóra KISS: Professional Ethics and Morality Can Prevent Social Media From Becoming Sovereign
When top Russian diplomat Maria Zakharova explains that George Orwell’s dystopian classic Nineteen Eighty-Four was written to describe the dangers of Western liberalism rather than totalitarianism, we may feel as though we are watching an absurd Monty Python satire. In such parodies, artists question facts and push conversations to extremes in order to criticize an existing system and discourage the audience from becoming participants in the absurd comedy. But while such plays once served up absurdity purely for entertainment, they are gradually coming to normalize a reality born of absurdity, which is decidedly less enjoyable.
To substantiate this claim, our current reality can be broken down into the components of a narrative: the producers of a play are analogous to the owners of media outlets and social media platforms; the directors to the censors; the main actors to the influencers (journalists, politicians, policymakers, and other public figures); and the members of the audience to the users of those outlets, the citizens.
As reality slowly becomes absurd, members of the passively consuming audience become active participants in the play. Ownership, naturally, makes profit-oriented decisions; the aim is to maximize the audience and, thus, the profit. The more extreme and negative the content, the more people it reaches. The competition to become the most popular outlet slowly shifts the focus from professional, objective, and ethical information-sharing towards sensationalist content (also known as ‘clickbait’), because human beings operate under the ‘bounded rationality’ that Herbert A. Simon identified in 1947. This competition therefore rewards partially irrational decision-making. Humans must make decisions based on the information available to them, but because of their cognitive and time limitations, they are vulnerable to the sources of that information. Today, social media serves as a general source of news for Americans. According to a Pew Research Center survey published in January 2021, Facebook stands out as a regular news source (54% of its users regularly get news there), while a large portion of Twitter users (59%) regularly get news on that site. Since users’ resources and capacities are limited, platforms have a green light to filter and pre-digest the news for them. Cherry-picked news from well-selected sources is delivered directly to users’ newsfeeds. This filtered information acts as a subliminal stimulus that unconsciously shapes users’ interpretations of certain topics. When the content is, in addition, framed without objectivity, users become easy targets of polarization. As a result, the demarcation lines between those who agree with a given opinion and those who disagree grow sharper.
In today’s age of surveillance capitalism, which Shoshana Zuboff described as a “bloodless battle for power and profit as violent as any the world has seen,” the limits of the professional ethics of journalists, politicians, and other influencers are a key question. Another significant point is the owners’ liability for intentionally (trans)forming public opinion. As Count István Széchenyi, often referred to as the Greatest Hungarian, famously expressed, “ownership and knowledge come with responsibilities.” In a world where all information is available on the internet and the owners of digital platforms are free to decide what to show to or hide from the masses, ownership of information becomes the most powerful means of shaping the future of society. There is no doubt that owners structure societies; the question is whether they do so with moral considerations in mind or purely for their own financial benefit.
The former approach would be the idealistic scenario: it presupposes a social media environment in which platform owners do not intend to shape public opinion and therefore (1) allow all forms of speech as free speech, even speech prone to extremism; (2) let users pick and choose freely from millions of pieces of information according to their consciously and explicitly preselected priorities (which obviously pushes the boundaries of limited human capacities and timeframes); and (3) neither tolerate nor practice any form of cancel culture. This also inherently implies that (4) even personae non gratae (Latin for “people not welcome”) would be allowed to use these platforms, even if their views are contrary to those of the owners and as such considered undesirable on their platforms. This would also entail an absence of double standards and a state of objective fairness. At the same time, such an ideal form of social media management would not automatically excuse crossing certain thresholds, such as sharing hate speech, child pornography, or other criminal content; the platforms would still be legally obligated to take the measures necessary to enable established public institutions to intervene and restore the balance. By the end of this description of the ideal social media platform, there should be no doubt that this utopian scenario does not currently exist.
In the latter case, however, without being overly pessimistic, the world becomes a worse place every day. In this sad but more realistic scenario, private entities have an interest in playing with information and exploiting readers’ bounded rationality. As a result, owners can intentionally shape public opinion as a side effect of their profit maximization. Of course, a profit-oriented approach is the legitimate interest of corporations, and there is nothing wrong with that, as long as profit maximization happens in compliance with ethical and moral standards. However, this leads to a very interesting legal dilemma. On the one hand, corporate decisions to allow or restrict content are legitimate under Section 230 of the Communications Decency Act (CDA), the US law that has set the standards for ‘decent’ communications since 1996. On the other hand, those decisions may lead to illegitimate consequences, because corporations have no legitimate authority to act as sovereigns and to form, deform, or transform public opinion through their power over information. Entirely unmanipulated public opinion is, of course, unattainable and, in general, unnecessary, but identifying the influencer is crucial. Yet tracing the influencer is almost impossible in the virtual sphere; hence the question of these platforms’ accountability.
Translating the situation into the language of legal theory, the debate concerns the relationship between law and morals. Natural law theory holds that law should reflect moral reasoning and be based on a moral order, whereas legal positivism holds that there is no necessary connection between law and morality. A symbolic example highlights the practical difference: Nazi Germany and the Stalinist Soviet Union, two infamous totalitarian regimes of the 20th century, were rule-of-law regimes under a purely legal positivist interpretation. Under natural law, however, these states were not operating under the rule of law, and their laws were not valid, because their content lacked morality. This is so because natural law requires morality to validate legal content, while legal positivism does not.
Reflecting the opposing views on the current issue, a legal positivist would ask, “what does the law say?” and would give the ‘easy answer’: private entities, including the owners of social media platforms, are legally entitled to make discretionary decisions about the content they allow or ban on their own platforms, regardless of the influence they exert over their users. Natural law, however, would require adding the moral values of ‘good’ or ‘bad’, ‘right’ or ‘wrong’ to make an adequate evaluation and give a ‘legal’ or an ‘illegal’ answer. Does the exercise by these private entities of their freedom to influence users through the content they share lead to legal or illegal outcomes? And if an act is technically legal, does it constitute a use or a misuse of corporate freedom under Section 230 CDA? In other words, does Section 230 license these corporations to shape public opinion? Is there any moral standard that owners should follow when making their private decisions under Section 230, especially knowing that those decisions may influence and manipulate users? Of course, morality is difficult to measure, as its standards are highly relative, and it is even more complicated to evaluate in the digital sphere. Even so, basic minimum moral standards would help owners make fair decisions and would support the creation of the most objective environment possible for the news cycle. Introducing content-neutral and impartial minimum standards grounded in morality might therefore help shift the emphasis back from a partisan path to normalcy.
When the producers (owners) introduce moral principles to achieve fairness, the directors (censors) are free to carry out their tasks within the framework of their professional ethics, and the main players (the influencers: journalists, politicians, policymakers, and other public figures) are compelled to serve the public interest rather than their own. The owners bear a huge responsibility to do good within and for society. Otherwise, the play becomes an absurd reality produced by quasi-omnipotent owners, directed by unethical censors, and performed by self-interested public figures.
Morality and law together can prevent social media owners from becoming uncontested, illegitimate sovereigns. Preserving checks and balances to maintain a healthy equilibrium between the private and the public spheres is important, and we have seen what happens when that balance is distorted. In communist dictatorships, private entities are weak compared to state actors and have extremely narrow room for maneuver in advocating their interests. In the People’s Republic of China, and similarly in today’s Russian Federation, independent privately owned media is virtually non-existent: the balance is distorted, and private actors depend on public institutions. Dependency, in turn, leads towards the toleration of oppression and ideology-based manipulation. As a result, absurd things ensue: many people in Russia may not know that their country has been invading Ukraine for roughly three months now. In China, Western “traditional” social media is geo-blocked, and the Chinese have their own platforms. Strong totalitarian states do not tolerate any private intervention in their decision-making. On the other hand, it is equally a mistake when private actors overreach their competences and influence public opinion to serve their private interests.
There is evidence that most people prefer normalcy over extremes. That is good news. Normalcy requires people in the middle to keep a healthy balance between private and public interests. Professional ethics, morality, and ownership liability can prevent private entities from becoming the new sovereigns who direct absurd movies about our digital reality.
Lilla Nóra KISS is a postdoctoral visiting scholar at Antonin Scalia Law School, George Mason University, Virginia. Lilla participates in the Hungary Foundation’s Liberty Bridge Program and conducts research on social media regulation and regulatory approaches. Formerly, Lilla was a senior counselor on EU legal affairs at the Ministry of Justice, and for five years she was a researcher and lecturer at the Institute of European and International Law of the University of Miskolc (Hungary), where she taught European Union law. Lilla obtained her Ph.D. in 2019; her dissertation addressed the legal issues of the withdrawal of a Member State from the EU.
Her current research interests cover the legal dimensions of Brexit, the interpretation of the European Way of Life, and the perspectives towards social media regulation in the USA and in Europe.