Scholar Sylvester Johnson addresses Lafayette audience about implications of a ‘yes’ answer
By Stella Katsipoutis-Varkanis
As machines take on increasingly humanlike characteristics and capabilities with the rapid advancement of artificial intelligence, a moral and philosophical question lurks in the shadows, one that our technology-shaped society will inevitably be obliged to answer: Will machines ever be considered human?
During a special guest lecture arranged and hosted by the Lafayette College Humanities Center earlier this month, nationally renowned scholar Sylvester Johnson addressed the Lafayette community on the issue, which, he said, is by no means new but rather an age-old conundrum that is steadily resurfacing.
Johnson—the founding director of the Virginia Tech Center for Humanities, who specializes in the study of technology, race, religion, and national security—explained that the muddled line between what it means to be a human being and what it means to be an object is something people have been struggling to elucidate since as far back as the 15th century.
“European commercial relations were beginning at that time. It started as an exchange of commodities that were nonhuman, and they quickly began to incorporate human trafficking and slave trade,” Johnson said. “The claim among Europeans that African merchants believed objects that were not alive could behave like people was called fetishism. Over 400 years, it became a very influential cultural theory that functioned to justify slavery and violence, and promote racism by claiming that Africans and non-Europeans were intellectually inferior because they could not properly understand the nature of objects and distinguish them from the nature of people.”
Today, the question of the proper relationship between things and humans is coming back “with a vengeance,” explained Johnson, who found no shortage of examples to demonstrate just how far technology has come and how close it is to taking on a “human” life of its own.
In 2017, a robot named Sophia—which was developed by Hanson Robotics and, thanks to a sophisticated algorithmic design, could engage in extemporaneous conversation as well as scripted speech—became the first machine to be granted Saudi Arabian “citizenship,” part of an effort to move the country away from petroleum dependence and toward economic growth through technological innovation. For many, the publicity stunt came with rather severe implications, as citizenship is a right sometimes denied even to humans who live in Saudi Arabia or are part of the country’s large immigrant workforce.
Cutting-edge inventions like Google Duplex (a digital concierge service that can carry out tasks such as making restaurant reservations for its human users and engaging in conversation) and IBM’s Project Debater (the first AI system that can participate in unscripted debates with humans on complex topics) make it virtually impossible for the untrained ear to distinguish human conversation from that of machines. In 2016, self-driving car developer Waymo sought and received approval from the National Highway Traffic Safety Administration to recognize autonomous cars as drivers of record—a status previously reserved for biological human beings—so that the cars could be commercialized and sold in the United States.
While some such innovations are more innocuous—like Lil Miquela, a music-making digital avatar and Instagram influencer—Johnson explained that others may pose significant threats to the human race if not properly regulated. These include the robot that Dallas police officers used to remotely kill shooter Micah Johnson in 2016, as well as the Joint Air-to-Ground Missile (JAGM), a U.S. military weapon that, once launched by a human operator, can pursue its target on its own and decide how best to strike in order to optimize destruction.
“Thinking, intelligence, the ability to understand are very difficult and complicated, and also a central part of the human experience,” Johnson said. “Given the fact that humans have become increasingly skilled at getting machines to do things that resemble what we call thinking and reasoning, we have to question whether machines might actually become so sophisticated that they begin to embody, achieve, or demonstrate what we think of as human subjectivity, of personhood, of the ability to engage.”
But before we can attempt to answer this question, Johnson added, we must first consider how we will ultimately recognize, respond to, categorize, and defend the humanity and rights of human-machine hybrids: people who receive smart prosthetic limbs or other rehabilitative technologies that modify or restore bodily motor functions. People like Claudia Mitchell, for example: the first woman to receive a bionic arm that is connected to her nervous system, that she can control with her brain, and that has allowed her to regain a sense of touch despite having lost her natural arm in a motorcycle accident.
“If we’re going to have a philosophical debate about whether machines can be human,” Johnson said, “we should recognize that machines and human cells are communicating just fine. So, the question is whether in any sense someone like Claudia Mitchell is altered in their human status if they are part machine. If we say that the Claudia Mitchells of the world are fully human, that their having machine parts does not alter their human status, then at some level we have to accept the participation in human subjectivity of machine entities. Does this mean machines will literally become human? Perhaps. The question is ultimately a political one.”
The question will eventually be answered, Johnson continued, and it’s going to be answered “through the greatest money influence, or through the greatest seizure of political power, or through some kind of accountability to the people who have the most to lose, who are the most vulnerable. There might not be an answer on which everyone agrees; however, the fact that we don’t all agree does not get us out of the situation of having to answer the question. There are all kinds of things that people don’t agree about, like voting or equal rights for marriage, but that are still important. This is why it is important that we have humanistic leadership. In the end, we’re going to live in a world under certain conditions. But the question is, what will those conditions be?”
Johnson’s lecture was sponsored by the Lafayette College Humanities Center and co-sponsored by the Cyril S. Lang ’49 Center for the Humanities Endowment Fund, the Religious Studies Department, the Philosophy Department, and the Office of the Provost.