Will robots make good friends? Scientists are already starting to find out



In the 2012 film “Robot and Frank”, the protagonist, a retired cat burglar named Frank, is suffering the early symptoms of dementia. Concerned and guilty, his son buys him a “home robot” that can talk, do household chores like cooking and cleaning, and remind Frank to take his medicine. It’s a robot the likes of which we’re getting closer to building in the real world.

The film follows Frank, who is initially appalled by the idea of living with a robot, as he gradually begins to see the robot as both functionally useful and socially companionable. The film ends with a clear bond between man and machine, such that Frank is protective of the robot when the pair of them run into trouble.

‘Robot’ and Frank form a friendship over the course of the film. Samuel Goldwyn Films/Alamy

This is, of course, a fictional story, but it challenges us to explore different kinds of human-to-robot bonds. My recent research on human-robot relationships examines this topic in detail, looking beyond sex robots and robot love affairs to examine that most profound and meaningful of relationships: friendship.

My colleague and I identified some potential risks – like the abandonment of human friends for robotic ones – but we also found several scenarios where robotic companionship can constructively augment people’s lives, leading to friendships that are directly comparable to human-to-human relationships.

Philosophy of friendship

The robotics philosopher John Danaher sets a very high bar for what friendship means. His starting point is the “true” friendship first described by the Greek philosopher Aristotle, which saw an ideal friendship as premised on mutual good will, admiration and shared values. In these terms, friendship is about a partnership of equals.

Building a robot that can satisfy Aristotle’s criteria is a substantial technical challenge and is some considerable way off – as Danaher himself admits. Robots that may seem to be getting close, such as Hanson Robotics’ Sophia, base their behaviour on a library of pre-prepared responses: a humanoid chatbot, rather than a conversational equal. Anyone who’s had a testing back-and-forth with Alexa or Siri will know AI still has some way to go in this regard.

The humanoid robot Sophia, developed by Hong Kong-based Hanson Robotics.

Aristotle also talked about other forms of “imperfect” friendship – such as “utilitarian” and “pleasure” friendships – which are considered inferior to true friendship because they don’t require symmetrical bonding and are often to one party’s unequal benefit. This form of friendship sets a much lower bar, which some robots – like “sexbots” and robotic pets – clearly already meet.

Artificial amigos

For some, relating to robots is just a natural extension of relating to other things in our world – like people, pets, and possessions. Psychologists have even observed how people respond naturally and socially towards media artefacts like computers and televisions. Humanoid robots, you’d have thought, are more personable than your home PC.

However, the field of “robot ethics” is far from unanimous on whether we can – or should – develop any form of friendship with robots. For an influential group of UK researchers who charted a set of “ethical principles of robotics”, human-robot “companionship” is an oxymoron, and to market robots as having social capabilities is dishonest and should be treated with caution – if not alarm. For these researchers, wasting emotional energy on entities that can only simulate emotions will always be less rewarding than forming human-to-human bonds.

But people are already developing bonds with basic robots – like vacuum-cleaning and lawn-trimming machines that can be bought for less than the price of a dishwasher. A surprisingly large number of people give these robots pet names – something they don’t do with their dishwashers. Some even take their cleaning robots on holiday.

Other evidence of emotional bonds with robots includes the Shinto blessing ceremony for Sony Aibo robot dogs that were dismantled for spare parts, and the squad of US troops who fired a 21-gun salute and awarded medals to a bomb-disposal robot named “Boomer” after it was destroyed in action.


A military bomb disposal robot similar to ‘Boomer’. US Marine Corps photo by Lance Cpl. Bobby J. Segovia/Wikimedia Commons

These stories, and the psychological evidence we have so far, make clear that we can extend emotional connections to things that are very different to us, even when we know they are manufactured and pre-programmed. But do those connections constitute a friendship comparable to that shared between humans?

True friendship?

A colleague and I recently reviewed the extensive literature on human-to-human relationships to try to understand how, and if, the concepts we found could apply to bonds we might form with robots. We found evidence that many coveted human-to-human friendships do not in fact live up to Aristotle’s ideal.

We noted a wide range of human-to-human relationships, from relatives and lovers to parents, carers, service providers and the intense (but unfortunately one-way) relationships we maintain with our celebrity heroes. Few of these relationships could be described as completely equal and, crucially, they are all destined to evolve over time.

All this means that expecting robots to form Aristotelian bonds with us is to set a standard even human relationships fail to live up to. We also observed forms of social connectedness that are rewarding and satisfying and yet are far from the ideal friendship outlined by the Greek philosopher.

We know that social interaction is rewarding in its own right, and something that, as social mammals, humans have a strong need for. It seems probable that relationships with robots could help to address the deep-seated urge we all feel for social connection – like providing physical comfort, emotional support, and enjoyable social exchanges – currently provided by other humans.

Our paper also discussed some potential risks. These arise particularly in settings where interaction with a robot could come to replace interaction with people, or where people are denied a choice as to whether they interact with a person or a robot – in a care setting, for instance.

These are important concerns, but they’re possibilities and not inevitabilities. In the literature we reviewed we actually found evidence of the opposite effect: robots acting to scaffold social interactions with others, acting as ice-breakers in groups, and helping people to improve their social skills or to boost their self-esteem.

It appears likely that, as time progresses, many of us will simply follow Frank’s path towards acceptance: scoffing at first, before settling into the idea that robots can make surprisingly good companions. Our research suggests that’s already happening – though perhaps not in a way in which Aristotle would have approved.

Originally written by
Tony Prescott, Professor of Cognitive Neuroscience and Director of the Sheffield Robotics Institute, University of Sheffield | February 15, 2021
for The Conversation
