We assign robots too much humanity: they are simply tools

In the mid-1990s, research was conducted at Stanford University that would change the way we think about computers. The Media Equation experiments were simple: Participants were asked to interact with a computer that acted socially for a few minutes, after which they were asked to provide feedback on the interaction.

Participants would provide this feedback either on the same computer (No. 1) they had just worked on or on a different computer (No. 2) across the room. The study found that participants who responded on computer No. 2 were far more critical of computer No. 1 than those who responded on the same machine they had worked with.

People who responded on the first computer seemed unwilling to hurt the computer's feelings to its face, but had no trouble talking about it behind its back. This phenomenon became known as the computers as social actors (CASA) paradigm because it showed that humans are hard-wired to respond socially to technology that presents itself as even vaguely social.

The CASA phenomenon continues to be explored, especially as our technologies have become more social. As a researcher, lecturer and all-encompassing lover of robot technology, I observe this phenomenon in my work every time someone thanks a robot, assigns it a gender, or tries to justify its behavior using human or anthropomorphic justifications.

What I have witnessed during my research is that even though few people are under the delusion that robots are human, we tend to defer to them just as we would to another person.

Social trends

Although this may sound like the beginning of a Black Mirror episode, this tendency is precisely what allows us to enjoy social interactions with robots and place them in roles such as caregiver, collaborator, or companion.

The positive aspects of treating a robot as a person are precisely why roboticists design them that way: we like interacting with people. As these technologies become more human-like, they become more capable of influencing us. But if we continue on the current path of robot and AI implementation, these technologies may turn out far more dystopian than utopian.

The Sophia robot, manufactured by Hanson Robotics, has appeared on 60 Minutes, received honorary citizenship from Saudi Arabia, holds a UN title and has gone on a date with actor Will Smith. While Sophia undoubtedly showcases many technological advances, few surpass Hanson's achievements in marketing. If Sophia really were a person, we would recognize her role as that of an influencer.

Worse, when robots or AI act as sociopathic agents – goal-oriented without morality or human judgment – these technologies become tools of mass influence for whatever organization or individual controls them.

If you thought the Cambridge Analytica scandal was bad, imagine what Facebook's influence algorithms could do if they had an accompanying, human-like face. Or a thousand faces. Or a million. The true value of a persuasive technology lies not in its cold, calculated efficiency, but in its scale.

Understand the intention

The recent scandals and exposures in the technology world have left many of us feeling helpless in the face of these corporate giants. Fortunately, many of these issues can be addressed through transparency.

There are basic questions that are important for social technologies to answer because we would expect the same answers when interacting with another person, albeit often implicitly. Who owns or determines the mandate for this technology? What are its goals? What approaches can it use? What data can it access?

Since robots may soon have the potential to wield superhuman abilities and enact the will of an unseen owner, all without showing the verbal or non-verbal cues that would shed light on their intent, we must demand that these kinds of questions be answered explicitly.

As a roboticist, I am asked the question "When will robots take over the world?" so often that I have developed a stock response: "As soon as I tell them to." My joke, however, rests on an important lesson: robots must not become scapegoats for decisions made by humans.

I consider myself a robot sympathizer because I think robots are unfairly blamed for many human decisions and mistakes. It is important that we regularly remind ourselves that a robot is not your friend, your enemy or anything in between. A robot is a tool, controlled by a person (however far removed), and increasingly used to influence us.

Article by Shane Saunderson, Ph.D. Candidate, Robotics, University of Toronto

This article is republished from The Conversation under a Creative Commons license. Read the original article.
