
The Pride and Prejudice of Artificial Intelligence

 

It is a truth universally acknowledged that computers will one day be endowed with intelligence beyond that of humans, giving them the capacity to replace human beings and take over the world. This moment is commonly referred to as the singularity. Computer scientist Ray Kurzweil, who popularized the term, predicts that we will reach it by 2045. Kurzweil’s prediction rests on the exponential growth of computational power, along with our growing ability to map the human brain. Once the human brain is reverse engineered and simulated within a computer, machines will have matched human levels of intelligence.

“2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”
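To see the shape of the reasoning behind these dates, consider a toy sketch of exponential growth in computing power. The 18-month doubling period and the years below are illustrative assumptions for the sake of the example, not Kurzweil’s actual figures:

```python
# Toy illustration of exponential compute growth.
# The doubling period, baseline year, and end year are all
# hypothetical placeholders chosen for illustration.
DOUBLING_PERIOD_YEARS = 1.5

def relative_compute(start_year: int, end_year: int) -> float:
    """Return how many times compute has multiplied between two years."""
    return 2 ** ((end_year - start_year) / DOUBLING_PERIOD_YEARS)

# Under these assumptions, capacity multiplies roughly 10,000-fold
# between 2025 and 2045:
print(f"{relative_compute(2025, 2045):,.0f}x")  # ~10,321x
```

Even small changes to the assumed doubling period move the curve by decades, which is part of why forecasts of the singularity vary so widely.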

Oxford philosopher Nick Bostrom is famous for having predicted that we will have “superhuman artificial intelligence” within the first third of this century, similarly drawing on our ability to match the brain’s computational power.

The loudest voices in AI talk about this singularity moment not as an if but a when, raising many questions about the nature and definition of intelligence, the philosophy of technology, and the role ethics should play in the development of these new technologies.

Today’s conversations about artificial intelligence are ultimately no different from past generations’ conversations about the scope and impact of technology. As AI is heralded as humanity’s savior from itself (at its best) and as our ultimate demise and replacement (at its worst), it is important to note both the pride and the prejudices at the core of this new technology.

Created in Our Own Image

Pride: The Narcissus Story of AI

The ancient Greeks told the story of Narcissus, a hunter who saw his reflection in a body of water and fell in love with it, unaware that it was a representation of himself. This misunderstanding led to Narcissus’s ultimate demise: he was unable to cope with the truth once he discovered it.

When evaluating the tools we use, it is tempting not to recognize them as representations of ourselves. Media and technology theorist Marshall McLuhan describes the human-technology relationship in terms of the Narcissus myth: our tools are extensions of human capabilities, extensions that radically alter the way we think, interact, and exist.

“any invention or technology is an extension … of our physical bodies, and such extensions also demand new ratios or new equilibriums among the other organs and extensions of the body.” (Understanding Media, 45)

As we develop new technologies, we unknowingly create them in our own image or likeness. Like Narcissus, we fall in love with these pictures or extensions of ourselves without even knowing that they reflect our values and our biases. 

“It is this continuous embrace of our own technology in daily use that puts us in the Narcissus role of subliminal awareness and numbness in relation to these images of ourselves.” (Understanding Media, 46)

When technology is heralded as the solution to all of humanity’s problems, it is often described as something entirely different from, or disassociated from, human influence. AI is supposed to be an unbiased way to make decisions and value judgments because it is free from human frailty and emotional bias.

But it is important to challenge this sacred view of AI with the reminder that it is a tool, just like any other tool humans have created. Human beings are behind the construction of AI, and human values and constraints went into its development. Humans decide what is valuable to measure, what the best mode of measurement is, and, most importantly, how to organize the data and draw conclusions from it.

Prejudice: Human Bias in the Computer Program

Human beings are great at creating decision-making shortcuts. Because our minds don’t have the capacity or time to gather vast amounts of information for every everyday decision, we rely on heuristics to navigate life with relative ease. Our minds are trained to see patterns and to reduce the cognitive load of decisions based on those patterns. For example, experience may have told you that traffic in your city usually picks up at a certain time, so you plan your day accordingly; you’ve made a decision based on the way you find things to normally be, without doing hours of research into traffic patterns and city event calendars. Similarly, if you know that Mexican food usually gives you heartburn, you’re probably more likely to choose Thai food over tacos.

Heuristics are part of our everyday reality, and the stakes of using these shortcuts usually prove to be fairly benign (being stuck in traffic, a bad Thai dinner, and so on). But what happens when we rely on these often subliminal shortcuts for more serious decision making? Our minds readily make assumptions about people based on the way they look, talk, or affiliate. This has led to serious societal harms, with pervasive racism being a prime example.

But as a society, we are increasingly outsourcing these decision-making shortcuts to computer algorithms. As we grow in our trust of algorithms and artificially intelligent systems, are we really placing trust in an unbiased computer program, or merely in the individuals who write these algorithms?

When we outsource decision making to computers that hold vast amounts of data, we trust that the data fed to these machines is complete and accurate. We have already begun to rely on AI to make some important societal decisions, and have found that these machines (surprise, surprise) exhibit the same biases as the society that produced their data.
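Here is a minimal sketch of how this happens, using entirely made-up loan data: a program that simply learns from historically biased decisions ends up reproducing them, even though no one ever writes an explicitly biased rule.

```python
from collections import defaultdict

# Hypothetical historical loan decisions (group, approved?). The groups
# and numbers are invented for illustration: group A was approved 90%
# of the time, group B only 40%, reflecting past human bias.
history = [("A", True)] * 90 + [("A", False)] * 10 \
        + [("B", True)] * 40 + [("B", False)] * 60

# "Train" by memorizing each group's historical approval rate.
approvals = defaultdict(int)
totals = defaultdict(int)
for group, approved in history:
    totals[group] += 1
    approvals[group] += approved

def predict(group: str) -> bool:
    """Approve whenever the group's historical approval rate exceeds 50%."""
    return approvals[group] / totals[group] > 0.5

print(predict("A"), predict("B"))  # True False: the old bias, now automated
```

Nothing in the code mentions bias; it simply treats the past as ground truth, which is exactly how the status quo gets automated.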

Algorithms are built around key success metrics that are decided on by humans, and they often merely automate the status quo. When big issues are reduced to purely technological ways of understanding, we lose out on other, more human ways of seeing situations and making decisions. The technologizing (or outsourcing) of big societal decisions flattens core aspects of what it means to be human and to live in community: things that can’t be measured or quantified, and that call for other ways of seeing the world, from anthropology and sociology to theology.
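The same point can be made about the metric itself. In the hypothetical candidate-scoring function below, every feature and weight is a human value judgment about what ‘success’ means; change the weights, and the algorithm ranks the world differently:

```python
# Hypothetical candidate-ranking metric. The features and weights are
# invented for illustration; the point is that a person chose them, and
# that choice encodes a particular view of what a "good" candidate is.
WEIGHTS = {"years_experience": 0.5, "degree_prestige": 0.3, "referral": 0.2}

def score(candidate: dict) -> float:
    """Weighted sum: the ranking is only as neutral as the chosen weights."""
    return sum(WEIGHTS[feature] * candidate.get(feature, 0.0)
               for feature in WEIGHTS)

alice = {"years_experience": 0.9, "degree_prestige": 0.2, "referral": 0.0}
bob   = {"years_experience": 0.3, "degree_prestige": 1.0, "referral": 1.0}
print(score(alice), score(bob))  # 0.51 vs 0.65: the weights decide
```

The output looks objective, a single number, but the definition of merit was chosen by a person before the first row of data was ever processed.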

Technologies categorized as artificially intelligent are biased toward an entirely technological understanding of the world; they aren’t able to account for other ways of knowing that are core aspects of our experience as human beings.

Artificial Epistemology: How Computers Know

Epistemology is the branch of philosophy that deals with theories of knowledge and belief. Comparing human knowledge or intelligence to that of computers is like comparing apples to oranges, so it is important to understand how computers know, and how that differs from how humans know. Understanding the different levels (or even categories) of abstraction will help us get away from the doomsday picture of a coming AI apocalypse while providing a framework for discussing the real ethical and societal questions behind AI’s development.

How Humans Know

Humans are embodied beings. Our knowledge is embodied. There are aspects of knowledge that are metaphysical, that extend beyond the neural networks in our brains. Knowledge is inseparably connected to lived experience.

British philosopher and poet Owen Barfield describes aspects of human knowledge that are inherently associated with experience and emotion: 

“… the appreciation of lyric poetry brings about, in however small a degree, a change of consciousness, a change in the direction of a slight increase of knowledge, of wisdom.”

Barfield is talking about a knowledge that comes from the joy of experiencing the beauty of poetry that exceeds the mere acquisition of data or information; this experience brings about an “increase of knowledge” that can’t be rivaled by a machine.  

Barfield’s use of the term ‘wisdom’ here is apropos. It introduces an additional level of abstraction to this conversation about intelligence. Computer technologies can have a level of ‘intelligence’ that is based on data and information, but they lack access to wisdom, which is inherently driven by experience and consciousness.

The ancient philosopher Aristotle discussed this in terms of virtue, which he defined as doing the right thing, at the right time, and in the right way. The nuances of virtuous living aren’t accessible to computer technologies because they are entirely contextual and wisdom-driven.

How Computers Know

Computer technologies are based on a very specific logic and paradigm of knowledge. At a philosophical level, technology shrinks time and space in a way that makes us feel artificially close to things and people. Consider how Facebook gives us an image of someone that suspends time and place. We can interact with digital representations of each other without actually being together. The fact that I can, at any time, interact with your digital self brings about an incomplete, even artificial, feeling of knowing you.

Twentieth-century philosopher Martin Heidegger discusses this idea of artificial nearness as a core tenet of our technological environment. The more we interact with technological, information-based ways of understanding and knowing, the more we see this as the only way of knowing things (or of being near things, in Heidegger’s terms). Computers know things through the elimination of time and space, through the disembodied acquisition and storage of logical information; but this disembodied epistemology is artificial, or at the very least vastly incomplete.

Humans and computers have different ways of knowing things, and therefore have very different scales of intelligence. Yes, a computer can hold in its memory the entirety of Shakespeare’s canon, but it can’t access the “change in consciousness,” the increase in understanding of the human condition, that I gain by spending years reading my way through it.

Human knowledge is more than the sum of information. Human knowledge and intelligence are embodied, time-structured, and dependent on consciousness and experience. Computers are great at thinking technologically: crunching numbers, finding insights in vast amounts of data. But they ultimately lack access to the other human capacities for intelligence and knowledge that lead to wisdom.

 

The Fakebook Effect