On Intelligent Technology Liberty

What are the responsibilities of governments (local, national, transnational) in determining and regulating the place of intelligent technology in law, commerce, and society? What are the responsibilities of the individual programmer? Should any other entities hold responsibility?

Abstract

This article discusses how the 19th century philosopher John Stuart Mill’s essay “On Liberty” can guide the regulation of intelligent technology by governments, organisations and individuals. It considers how the entertainment industry would have us believe the human race is under threat from a cybernetic super-race, the juxtaposition of governments both regulating and exploiting this technology, how businesses have taken advantage of self-published data via social media, the effect regulation has had on the pharmaceutical industry, and how the systematic harvesting and exploitation of data by poorly regulated government agencies may represent a real and present danger, and perhaps the tyranny of the majority that J.S. Mill so feared.

Technological Singularity

There is a common belief that machines will inevitably reach the point of technological singularity, and in a good many of us this creates a deep-seated fear [1]. This fear is either caused by, or simply developed further from, a Hollywood business obsessed with the notion of artificial intelligence taking over the world and enslaving the human race for eternity [2]. But that view assumes we can directly compare human and machine intelligence on a single axis of intelligence. Humans and machines currently have very different strengths and weaknesses, and there is no simple way of comparing their ‘level of intelligence’ [3]. Indeed, the combinatorial complexity a machine can already handle exceeds anything a human is capable of, and yet, for example, humans will forever be better natural language users, not least because natural language was invented by us and developed in a way that is to a large extent dependent on our culture, brains and bodies [4].

It has been suggested that organisational researchers of creativity and innovation should invest significant energy in studying AI and computer-assisted human intelligence, the ways in which they might yield creative breakthroughs, and how those innovations might impact, and be impacted by, workers, consumers, organisations, and society [5]. However, adopting this process has yet to demonstrate an intelligence which can in itself create and, crucially, recognise something novel and of value, and therefore worth pursuing further. Indeed, any idea or invention would in all likelihood simply get lost in a sea of machine-generated brute-force mutations.
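To make the scale of that problem concrete, the following is a minimal, purely illustrative Python sketch. The alphabet, design length and scoring heuristic are hypothetical inventions for illustration, not drawn from the cited work; the point is simply that mutating even a short design over a small set of components yields a combinatorial flood of candidates, while any automated ‘value’ score remains a crude proxy for genuine novelty.

import random

# Hypothetical, illustrative parameters: a small set of design "components"
# and a short candidate design built from them.
ALPHABET = "abcdefgh"
DESIGN_LENGTH = 12

def random_candidate() -> str:
    """Generate one brute-force mutation: a random combination of components."""
    return "".join(random.choice(ALPHABET) for _ in range(DESIGN_LENGTH))

def naive_value_score(candidate: str) -> int:
    """A stand-in 'value' heuristic (here, simply how varied the string is);
    genuine novelty and value are exactly what we cannot encode."""
    return sum(1 for a, b in zip(candidate, candidate[1:]) if a != b)

search_space = len(ALPHABET) ** DESIGN_LENGTH
print(f"Possible designs: {search_space:,}")  # 68,719,476,736 for these toy parameters

# Even scoring a million random mutations barely scratches this space, and the
# "best" candidate is only best according to the crude proxy above.
best = max((random_candidate() for _ in range(1_000_000)), key=naive_value_score)
print("Highest-scoring mutation:", best, naive_value_score(best))

Even in this toy setting the search space runs to tens of billions of candidates; scaling the alphabet or design length makes recognising the one valuable idea in the flood the real bottleneck.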

I argue that this approach is no match for the tenacity, problem solving and determination behind an inventor’s ‘hunch’. So, given it is unlikely that a piece of intelligent technology will broadly outcompete a human and lead to a Hollywood-style existential threat to humanity, what is left to regulate, and what is the case for regulation specifically regarding intelligent technology?

The Tyranny of Government

John Stuart Mill states “that the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others” [6]. Whilst the musings of a 19th century philosopher may not immediately seem relevant to the cut and thrust of intelligent technology, I argue that the same principles and themes apply to the modern-day growth in technological intelligence. Indeed, it is from the standpoint of ‘protecting others from harm’ that I believe any individual or organisation should operate.

Yet the predicament of governments and regulators is a tragic double bind: the obligation to protect citizens from potential algorithmic harms sits alongside the temptation to use the same technology to increase their own efficiency. Whether such a dual role is even possible has been a matter of debate, the challenge stemming from the fact that the predictability and transparency of rule-based programming no longer exist in modern, data-driven algorithms. There are already a number of legal and policy instruments associated with the use of intelligent technology, such as strengthening the immigration process control system in Canada, “optimising” the employment services in Poland, and personalising the digital service experience in Finland.

Given the effects of these automated decision-support systems, the crucial role of governments in the digital society, and the current COVID-19 crisis, there is a real possibility that intelligent technology can strengthen and/or harm trust in governance systems [19].

As government and public administration lag behind the rapid development of AI in their efforts to provide adequate governance, they need appropriate concepts to keep pace with this dynamic progress. One proposal to solve this issue is collaboration between the different interest groups that share a motivation, or shared values, for generating beneficial effects with technology. Such collaboration could take many forms, such as committees, foundations or agencies. Such an agency would also govern the communication between private and public organisations and propose standards and laws developed with the best knowledge of both the legislative realm and the developers or organisations in the field of AI applications [20].

Of the liberty of thought and discussion

The western world is currently going through a period of change, providing a safe space for individuals and groups to express themselves without fear or favour [7]. The use of self-publishing platforms such as blogs, vlogs and social media gives the suppressed the ability to foster greater awareness of the foreclosure of individuals, although not without a degree of peril for their own cause [8]. Briefly, the use of free speech to incite violence must cause governments to enact legal and regulatory controls; however, this is not a technological issue and would apply equally to broadcasting, print and in-person interactions [11].

Along with instant access to the sum of all human knowledge at one’s fingertips, the democratisation of publishing is bringing about one of the most rapid and significant social changes our modern civilisation has ever seen. It is, however, this ability to express an opinion instantly across the globe that allows organisations to learn, understand and exploit.

Cambridge Analytica was able to harvest data from 87 million users’ Facebook profiles, create psychographically tailored advertisements and allegedly influence people’s voting preferences in the 2016 US presidential election. Indeed, when individuals become aware of this behaviour they often consider themselves immune to such activities and do not change their privacy settings [9].

This is by no means a reason for inaction; however, is being psychologically influenced in a targeted, sophisticated, and subversive manner harmful? What if a non-governmental organisation were to use the same tactics to bring about behavioural change regarding the climate? Do the ends justify the means? I argue that the only real issue surrounding the use of data in the manner of Cambridge Analytica is disclosure. Individuals must be aware of, and consent to, their data being harvested, processed and used in a manner which may or may not be in their best interests.

The European Union’s General Data Protection Regulation, and specifically its definitions and requirements concerning processing and processors, controllers, personal data and supervisory authorities [10], gets this balance right: it gives individuals control and disclosure whilst enabling organisations access to the information they need to benefit themselves and wider society.
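By way of illustration only, the following is a minimal Python sketch of the disclosure-and-consent principle argued for above. It assumes a hypothetical in-memory consent register; the class and method names are invented for illustration and are not taken from the GDPR text or any real library.

from dataclasses import dataclass, field

# Hypothetical sketch: no processing of personal data for a purpose
# the data subject has not been informed of and agreed to.

@dataclass
class ConsentRegister:
    # Maps a data subject's identifier to the set of purposes they consented to.
    consents: dict = field(default_factory=dict)

    def record_consent(self, subject_id: str, purpose: str) -> None:
        """Record that the data subject was informed of, and agreed to, a purpose."""
        self.consents.setdefault(subject_id, set()).add(purpose)

    def may_process(self, subject_id: str, purpose: str) -> bool:
        """A controller or processor checks this before any processing for a purpose."""
        return purpose in self.consents.get(subject_id, set())


register = ConsentRegister()
register.record_consent("user-123", "service personalisation")

print(register.may_process("user-123", "service personalisation"))    # True
print(register.may_process("user-123", "political micro-targeting"))  # False: no disclosure, no processing

The design choice the sketch illustrates is simple: consent is recorded per purpose, so data gathered with disclosure for one use cannot silently be repurposed for another.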

On individuality as one of the elements of well-being

J.S. Mill points out “the inherent value of individuality since individuality is ex vi termini (i.e. by definition) the thriving of the human person through the higher pleasures” [6]. I assert that this definition applies equally to an organisation creating and/or using a piece of technology. It is only within the empty space of the unknown that inventors are able to create the next generation or a new breakthrough previously unimagined by society, let alone by any regulator.

The pharmaceutical industry is one built around regulators controlling every part of the design, development, verification, validation, production and maintenance of high-risk products capable of saving and destroying lives. Indeed, many drugs created and in use on the market today cause harm as well as help, so why should the marketplace not dictate whether a drug is acceptable? And if it did, how much further advanced would this marketplace be? [12]

In the 1870s, an English chemical researcher named C.R. Alder Wright created a new substance which Bayer pharmaceuticals went on to market towards children suffering from sore throats, coughs and colds. This “heroin-laced aspirin” (the heroin now known as diamorphine) was sold to children from 1898 and was only withdrawn from the market after pushback from physicians and as negative stories about heroin’s side effects piled up. Bayer continued to market and produce the product until 1913, with the FDA only banning heroin for sale eleven years later [15]. This and other examples show regulators doing both good (protecting patients from ill-informed consent) and bad (stifling innovation and thereby losing opportunities to save more lives). Given that a lost opportunity is, by definition, only ‘an opportunity’, it is right for governments to regulate such industries in a manner that protects the lives here today.

The growth of the technology sector has so far not been constrained by regulators in the same way. This has enabled discoveries and inventions such as the world wide web, space travel and video conferencing, to name but a few. From a libertarian standpoint, the recent legal precedents being set around net neutrality demonstrate that this philosophy is at the heart of what society believes the internet to be, and by extension intelligent technology. Indeed, in many countries net neutrality has been enacted as law in order to protect consumers from exploitation [13].

An additional area of consideration is when governments deliberately try to stimulate organisations to innovate and create intelligent technology to solve a societal problem. This “regulated innovation” has often had mixed results and rarely leads to revolutionary change, instead evolving existing technologies through incremental improvements [14]. This is not a philosophical question as such, but an example of how money can easily be wasted and, worse, diverted from innovations which could make more of an impact in the longer term.

On the limits to the authority of society over the individual

The final and important consideration is to what extent an authority (government, regulator et al.) should intervene for our own benefit or protection. Should organisations be left free to pursue their own interests so long as they do not harm the interests of others? How should governments protect us from ourselves when we don’t even know we need protecting?

In 2013 The Guardian newspaper reported the secret and systematic collection of data from companies and individuals by state actors and intelligence agencies [16]. Known as “The Snowden files”, a series of reports showed how the UK and US governments colluded to spy on one another’s populations to fight terrorism and other nebulous, Big Brother-esque pretences. The 6,400-strong workforce of the UK’s intelligence agency GCHQ tirelessly works to invent ever more imaginative ways to collect vast amounts of data on hundreds of millions of people. Of course, it would be churlish to believe that intelligence agencies could operate without their work being, to a significant degree, secret.

I assert that this sort of intrusive technology needs strong democratic controls and accountability. Secret courts and secret committees have traditionally been the answer, but how can we trust them given how they have behaved historically? In the US, Congress was not always told the truth about what the NSA was up to, and although I have no evidence for this assertion, I suspect the same also happened in the UK. In many ways this situation is very similar to the actions of Cambridge Analytica, but with one distinct difference.

To underline this perceived tyranny, the Metropolitan Police have recently brought into sharp focus what can happen when delegated power is used in a poorly regulated and imbalanced manner. “Our mission is to keep London safe for everyone.” is the stated mission and vision of the force [17], and yet the Daniel Morgan Independent Panel, looking into the 1987 murder of Daniel Morgan and the subsequent investigations, showed the force falling short of this. “The Metropolitan police’s culture of obfuscation and a lack of candour is unhealthy in any public service. Concealing or denying failings, for the sake of the organisation’s public image, is dishonesty on the part of the organisation for reputational benefit. In the panel’s view, this constitutes a form of institutional corruption.” [18]

When intelligence agencies lie to their own governments, and the oldest professional police force in the country is accused of historical and current institutional corruption, I assert that the balance of power is heavily skewed in favour of unaccountable and opaque institutions who believe they have our best interests at heart but act with an impunity that would not be tolerated in any other part of society, and indeed come too close to what J.S. Mill calls “the tyranny of the majority” [6]. It is imperative that governments hold themselves to a higher standard and use the tools made available by intelligent technology in a responsible and restrained manner. “The road to hell is paved with good intentions” [21].

Ends. 2,191 Words.

Citations

[1] Technological singularity [Online] Available at https://en.wikipedia.org/wiki/Technological_singularity (Accessed on 22-06-2021)

[2] The AI Magazine 24.3 (2003): 144. “Computer Fear Factor in Hollywood.” Web.

[3] Bolander, T. “What do we loose when machines take the decisions?” J Manag Gov 23, 849–867 (2019). https://doi.org/10.1007/s10997-019-09493-x (Accessed on 21-06-2021)

[4] Lakoff, G. (2008). “Women, fire, and dangerous things.” Chicago: University of Chicago press.

[5] Amabile, Teresa M. (2020) “Creativity, Artificial Intelligence, and a World of Surprises.” Academy of Management Discoveries 6.3 : 351-55. https://doi.org/10.5465/amd.2019.0075 (Accessed on 21-06-2021)

[6] Mill, John Stuart, and Jean Bethke Elshtain. On Liberty, edited by David Bromwich, and George Kateb, Yale University Press, 2015. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/bath/detail.action?docID=3420105. (Accessed on 22-06-2021)

[7] Luttrell, Johanna C. White People and Black Lives Matter : Ignorance, Empathy, and Justice. 1st Ed. 2019. ed. Cham: Springer International : Imprint: Palgrave Macmillan, 2019.

[8] Linscott, Charles “Chip” P. “All Lives (Don’t) Matter: The Internet Meets Afro-Pessimism and Black Optimism.” Black Camera : The Newsletter of the Black Film Center/Archives 8.2 (2017): 104-19.

[9] Hinds, Joanne, Williams, Emma J., and Joinson, Adam N. “‘It Wouldn’t Happen to Me’: Privacy Concerns and Perspectives Following the Cambridge Analytica Scandal.” International Journal of Human-Computer Studies 143 (2020): 102498. https://doi.org/10.1016/j.ijhcs.2020.102498 ISSN 1071-5819.

[10] IT Governance Privacy Team, I. T. Governance. EU General Data Protection Regulation (GDPR) - an Implementation and Compliance Guide, Fourth Edition. Ely: IT Governance, 2020.

[11] Commission of the European Communities, European Coal Steel Community. High Authority, and Euratom. Commission. Legal Instruments to Combat Racism and Xenophobia : Comparative Assessment of the Legal Instruments Implemented in the Various Member States to Combat All Forms of Discrimination, Racism and Xenophobia and Incitement to Hatred and Racial Violence. Luxembourg: Office for Official Publications of the European Communities, 1993. Print.

[12] Ross, David B. “Overdose How Excessive Government Regulation Stifles Pharmaceutical Innovation.” The Journal of Clinical Investigation 117.12 (2007): 3598. Web.

[13] Marsden, Christopher T., and Manchester University Press, Publisher. Network Neutrality : From Policy to Law to Regulation. Manchester, England: Manchester UP, 2017.

[14] Kammerer, Daniel. “The Effects of Customer Benefit and Regulation on Environmental Product Innovation.: Empirical Evidence from Appliance Manufacturers in Germany.” Ecological Economics 68.8 (2009): 2285-295.

[15] 7 of the Most Outrageous Medical Treatments in History [Online] Available at https://www.history.com/news/7-of-the-most-outrageous-medical-treatments-in-history (Accessed on 22-06-2021)

[16] The Snowden files [Online] Available at https://www.theguardian.com/world/series/the-snowden-files (Accessed on 22-06-2021)

[17] Vision and values (Metropolitan Police) [Online] Available at https://www.met.police.uk/police-forces/metropolitan-police/areas/about-us/about-the-met/vision-and-values/ (Accessed on 22-06-2021)

[18] The Report of the Daniel Morgan Independent Panel [Online] https://www.danielmorganpanel.independent.gov.uk/the-report/ (Accessed on 22-06-2021)

[19] Kuziemski, Maciej, and Misuraca, Gianluca. “AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-making in Democratic Settings.” Telecommunications Policy 44.6 (2020): 101976.

[20] Wirtz, Bernd W, Weyerer, Jan C, and Sturm, Benjamin J. “The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration.” International Journal of Public Administration 43.9 (2020): 818-29.

[21] Bohn, Henry G. A Hand-book of Proverbs. 1855.
