Facebook and Cambridge Analytica: let this be the high-water mark for impunity
The problems we see at social media companies today are the by-product of a laissez-faire approach to regulation, writes MacKenzie F. Common.
The latest revelations represent more than just the most recent and inevitable controversy emanating from Facebook’s beleaguered offices. The scandal over Cambridge Analytica’s participation in electoral manipulation and gross breaches of privacy has resonated more widely with users than the earlier allegations about fake news and Russian connections.
On an individual level, Facebook users have to contend with the fact that through no fault of their own, their personal information was harvested and weaponised by the fractious company that provided analytics for the Brexit campaign and Donald Trump’s presidential campaign. On a societal level, we are only beginning to understand how our democratic institutions are being manipulated by unethical technological practices that transform a loose amalgamation of interests into targeted advertising. This meddling could have disastrous consequences, facilitating the rise of a new Radio Rwanda or form of agitprop for the Web 2.0 world. We are now in the era of bespoke, personalised propaganda.
There is still time, however, to reverse the trend of declining privacy and technological elitism. This latest Facebook scandal can be a catalyst for regulatory change, ensuring that 2018 becomes the high-water mark for impunity in Silicon Valley (and all of its various incarnations around the globe). It is time to apply pressure to ensure that Facebook can no longer turn a blind eye to the actions of third-party software developers and the other clients it allows to access user data. The company may not have intended to become synonymous with electoral manipulation and fake news, but having failed to address key issues (it has known about this particular privacy breach since 2015), it must now accept responsibility for its inaction. This article will therefore offer a number of suggestions for how we can ensure this scandal results in real change in how social media companies and their clients do business.
First, when trying to rectify this massive breach of trust, one must prioritise transparency. The famous US Supreme Court Justice Louis Brandeis once opined that “Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.” Unfortunately, for too long, social media companies have been allowed to keep users and regulators in the dark, shielding their practices behind claims of proprietary technology and excessive secrecy. Without transparency, would-be critics and regulators struggle to collect information, often to the detriment of the everyday user who would benefit from their intervention. This may seem like an obvious place to begin, but it is the only way to start.
Second, we must create systems of oversight so we are not entirely reliant on the actions of whistle-blowers. It cannot be emphasised enough that for every Christopher Wylie, there are hundreds (or even thousands) of people who work in tech who would have been aware of some of the underlying issues in this scandal but did nothing. One notable feature of the current controversy is how much the parties involved tried to “pass the buck” when discussing who was really at fault. The Guardian reported, for example, that “Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.” Instead of allowing data to be traded in wilful blindness, we need to ensure that we have systems of oversight that act as interlocking chains, where each party that passes along data must be diligent in ensuring that the receiving party adheres to certain data protection principles.
So, for example, Facebook must take an active interest in the actions of third-party companies like Global Science Research (GSR) who, in turn, must contract with companies like Cambridge Analytica in a legal and ethical manner. These chains need to begin at the governmental level and must include tech experts who can audit the practices of these companies and report back to policy-makers. In the last twenty years, our lives have been profoundly affected by two major systems that lay people cannot understand (the global financial market and the tech industry), and we need specialist watchdogs who possess the expertise to identify risks to the public.
Finally, we must empower users to become true guardians of their personal data. This stewardship can only occur if the requirements of informed consent are strictly enforced. One of the most disturbing aspects of the Cambridge Analytica scandal has been the complete disregard for informed consent. While users participating in the original GSR quiz did provide their consent for their data to be used in academic settings, they were not informed that their data would eventually be packaged and sold to commercial companies like Cambridge Analytica. Even more disturbingly, the GSR app also harvested the data of all of the quiz-takers’ Facebook friends, without even the thinnest veneer of consent.
We need the equivalent of the warning label on cigarette boxes for personal data exchanges through social media. Users need to be explicitly informed that simply by maintaining a profile, and especially by completing quizzes or installing third-party apps, their data is being collected and can be transferred between many companies, rendering future attempts at erasure difficult.
It should be noted, of course, that self-help remedies must only ever be a complement to regulation and corporate change. There are commentators online querying whether, in light of the Cambridge Analytica scandal, users should just delete their social media accounts since these companies seem to act with such impunity. This argument misses the point: deleting a Facebook account may be a viable option for some users, but it cannot be the only solution. It is possible to have social media services that are transparent, audited, and prioritise informed consent.
The problems we are witnessing at social media companies today are not inextricably entwined with the services they offer; rather, they are the by-product of a laissez-faire approach to regulation that has permitted these risks to fester into the mess we are in today.
This article represents the views of the author and not those of Democratic Audit. It was originally published on the LSE Business Review. It is based on the author’s current PhD research on social media, at LSE’s Department of Law.
About the author
MacKenzie F. Common is a PhD student at LSE’s Department of Law. She holds a B.A. (Honours) in Political Science from the University of Guelph (Canada) where she graduated with Distinction in 2011. She earned her LLB (Graduate Entry) from City University in 2013 and her LLM from the University of Cambridge in 2015, where she was a blog editor on the Cambridge Journal of International and Comparative Law. MacKenzie has worked at the Conduct and Discipline Unit, a specialised unit in the United Nations Department of Field Support which handles criminal complaints against peacekeepers and civilian staff working on peacekeeping missions. While at the CDU, she drafted a handbook on investigation procedure and evidentiary standards to be disseminated to all of the peacekeeping missions around the world. In 2013, MacKenzie worked in the Office of the Prosecutor (OTP) at the International Criminal Tribunal for the former Yugoslavia (ICTY). MacKenzie also worked for the Nanaimo Crown Attorney’s Office in Nanaimo, British Columbia and the Law Society in London, England.
Comments

All very laudable in its own right, but whatever we do will make very little difference if users of Facebook et al. so happily hand over their data and have no redress over its use elsewhere. For all the cries of (certain parts of) the media, no one seems to be complaining about what has actually been done to them personally by Facebook or by third-party users of their personal information.

In any event, these systems can be deftly altered or refined by the ever-changing world of social media, which is so little understood by those who comment on it, to allow the same thing to happen in future on Facebook (the choice of the old right now) or whatever follows it as the medium or media of choice. Those reporting in the old-style media on what has actually happened do not really give a true picture of what it means in real terms, beyond emotive cries about “influencing the process” (shock horror). What does this mean in actual fact? What did people receive that so dramatically influenced them? A free meal for their vote? A threat that “we know where you are”?
More likely it was some mad piece of supposedly personalised propaganda, surely, ignored by almost all? How many have complained to Facebook or others about what they received during this ‘scandal’, or how they have been brainwashed? Or are there millions all gibbering and broken by the torrent of propaganda that dramatically changed their minds over Facebook one night and guided their hands in the ballot box? Had it done so, we would be seeing major examples of people saying how it had influenced them and, more importantly, being shown examples. At the time. The lack of any such detail in all the howling outrage suggests that the use of this type of thing is, right now, as chaotic and probably as useless as most political campaigning. And if we all saw it, we would just laugh: “Is that it? Is that what all the fuss is about?”

Maybe someone – the author or a reader on here – can point me towards the exact content of this overwhelming and heroic deluge of propaganda which knocked everyone off their feet and led them in their droves to change their minds about how they would vote. An example would do. I am genuinely open to being persuaded, but I am uneasy that all this furore currently appears to be just another ‘sore loser’ tactic from those who are angry that they lost in 2016.
Something barely referred to in the same UK media is the decision by the authorities to effectively hand over patients’ NHS medical records to private companies, in a way that ultimately makes it very easy to identify them (just look at how it is presented!), with barriers placed in your way if you want to opt out. This type of thing is far worse, because of its very serious potential impact on other aspects of individuals’ private lives and the fact that they did not sign up at their doctor’s to broadcast their medical history to the world… but because it cannot be laid at the feet of Trump or Brexit, it is presumably OK, or just to be ignored.
“…how they have been brainwashed? Or are there millions all gibbering and broken by the torrent of propaganda…”
The best – the only – way of brainwashing people is to slip the message in without anyone noticing, i.e. under the radar. If you knew it was happening, you could and would reject it.
Indeed… and there is no evidence at all that Russia or Facebook or anyone else ran a massive torrent of propaganda, or any form of campaign, that suddenly made everyone vote one way instead of another. And surely we cannot retreat into suggesting that if a vote goes the way we don’t want, it can only have been achieved by the propaganda of dark actors – and now by hidden propaganda that we can’t see, smell, touch or even read.
The allegations appear to suggest or imply (but never prove) that loads of propaganda was enabled by spending vast sums on accessing people via Facebook or wherever. That is not hidden propaganda – if it did happen, it has to have had an actual physical aspect, something we can see and point to: “so they sent out 300 things over the previous week and changed people’s opinions”. All I really want to know is what happened (not what might have). The Guardian stories on this are laughable, inept, full of errors and clearly do not understand the technology anyway – thus confusing everyone even more. Hence they appear to have run into the ground.
Micro-targeting based on individuals’ personalities amounts to brainwashing in my book.
Collins dictionary: “If you brainwash someone, you force them to believe something by continually telling them that it is true, and preventing them from thinking about it properly.”
And I see brainwashing as the very antithesis of democracy.
So rather than getting systems in place to accommodate micro-targeting, I suggest some way is found to ban it completely: no political advertising on social media. Possible?