>If the robot is statistically in danger, it changes its course towards a more crowded area or a taller person.
Just an avoidance algorithm. I think they are missing out on getting into robot/human social interaction psychology, which will become increasingly important.
For instance, are there (likely) factors in the robot's appearance or behavior which make negative interactions more likely? Can they create a robot that encourages kids to interact with it positively? Can they create a robot that is a magnet for even more abuse?
Frankly, I'm betting that the robot in question is pretty crappy from the kids' viewpoint. It probably has no physical interaction capabilities beyond letting kids get in its way (to make it stop) or push and hit it to see what it does (nothing interesting). It probably has no interesting voice interactions either, so I can see the situation quickly escalating to frustration.
Bottom line: if you build a "social interaction" robot, you have to actually build it to interact with real humans. You don't put a garbage can on wheels and then act surprised that people treat it like a garbage can.
> are there factors (likely) in the robot's appearance or behavior which make negative interactions more likely?
I have to be honest, my first gut-reaction was this sounds like victim blaming.[1]
My second reaction, after a moment to think, is that this is actually probably a lot closer to that phenomenon, and the aspects that lead to it, than I previously thought.
I think the spectrum of "things that look alive and things that don't" (as in the experiment in the article where kids hold different things upside down) extends all the way to "people that look or act like me and people that don't", which is how we get some of our more unsettling social behaviors.
Part of this is, as creators, taking care to not trigger negative social behaviors if we can, but it definitely feels like there's a social and cultural aspect we're still working on.
As a simple example, if a machine were able to demonstrate sentience and sapience, what percentage of people would be willing to treat it as such? I imagine it depends quite a bit on the country in question, and possibly the region within the country. Religion might matter quite a bit as well. If a machine is too abstract, or you get too caught up in whether true general AI is possible or likely, what about an alien intelligence? Should that difference matter?
1: I'm presenting this more as a general thought-piece, so hopefully the parent comment or anyone else that made similar comments doesn't take this as an attack; it's really not meant that way. I've made similar comments and thought similarly.
When it encounters a human, the system calculates the probability of abuse based on interaction time, pedestrian density, and the presence of people above or below 1.4 meters (4 feet 6 inches) in height. If the robot is statistically in danger, it changes its course towards a more crowded area or a taller person.
So what you're saying is that we're teaching robots to profile!
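Out of curiosity, here's a minimal sketch of what that escape heuristic might look like. Everything below is made up for illustration (the weights, names, and threshold are my own assumptions, not from the paper); only the 1.4 m child/adult cutoff and the "retreat toward a taller person or a crowd" behavior come from the quote:

    # Hypothetical sketch of the quoted escape heuristic: estimate abuse
    # probability from interaction time, pedestrian density, and nearby
    # children, then retreat toward an adult or a crowd when risk is high.
    from dataclasses import dataclass

    CHILD_HEIGHT_CUTOFF_M = 1.4  # the paper's 4'6" threshold

    @dataclass
    class Pedestrian:
        height_m: float
        distance_m: float

    def abuse_probability(interaction_time_s, pedestrian_density, nearby):
        # Toy model: more children and longer interactions raise the risk;
        # more adults and denser foot traffic lower it.
        children = sum(1 for p in nearby if p.height_m < CHILD_HEIGHT_CUTOFF_M)
        adults = len(nearby) - children
        risk = (0.15 * children + 0.002 * interaction_time_s
                - 0.05 * adults - 0.2 * pedestrian_density)
        return max(0.0, min(1.0, risk))

    def plan_escape(nearby, risk, threshold=0.5):
        # Above the threshold, head for the nearest adult if there is one,
        # otherwise for a more crowded area; else stay on route.
        if risk < threshold:
            return "continue route"
        adults = [p for p in nearby if p.height_m >= CHILD_HEIGHT_CUTOFF_M]
        if adults:
            nearest = min(adults, key=lambda p: p.distance_m)
            return "move toward adult %.1f m away" % nearest.distance_m
        return "move toward crowded area"

    # Three kids crowding the robot in an empty corridor:
    kids = [Pedestrian(1.1, 0.5), Pedestrian(1.2, 0.8), Pedestrian(1.3, 1.0)]
    print(plan_escape(kids, abuse_probability(45.0, 0.1, kids)))
    # -> "move toward crowded area"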
The interesting part came when they held the Furby. The children said that, even though they knew it was just a toy, they worried that they were “hurting” the robot (which loudly protested being upside down), suggesting that they felt some empathy for the furry machine.
>Stupid Fun Club's "Empathy" One Minute Movie about Robot Empathy, written by Will Wright. Robot brain and personality simulation programmed by Don Hopkins.
We also did another experiment about servitude with an inept obsequious robot waiter at a diner:
>Stupid Fun Club's "Servitude" One Minute Movie about Robot Servitude, written by Will Wright. Robot brain and personality simulation programmed by Don Hopkins.
The robot asking for help doesn't look credible to me. It's too far beyond cheap commercial offerings, and as it is unattended, it is likely not a research robot in true distress. Were I passing by that, I think I would be on the defensive, assuming the robot to be a decoy or distraction in the worst case, but at a minimum a prank. Maybe my kids, who have seen Short Circuit, would provide a more genuine response. But I don't think you're getting a good read on actual empathy from adults here.
It also makes me wonder about the interview questions they talk about in the article.
If they asked the kids, “Do you think you were hurting the robot?” I think most kids would interpret this as a leading question (especially after getting caught and taken aside).
I’d guess they’d answer yes because they think it’s the answer they’re supposed to give, and because they think they’re in trouble.
> 2) it verbally protested. Which is the big thing they were stressing in the episode.
I think this is key; kids tend to test where the limits are. They will push farther and farther until they are told they have passed the limit of acceptable behavior. I have kids, and I know that just telling them 'no' may not always be enough, but it's a start.
That is what I kept thinking while reading the article too.
1) The YouTube link from the article mentions that most children let the robot pass, while the article reads as "100% of children are evil for no purpose".
2) Another comment mentioned the survey results possibly being skewed, because the children give the answer they expect is the correct answer. I think that's at least plausible.
3) Only a small subset of observed children were interviewed: 28 total, because of a long list of selection criteria, according to the second paper linked in the article.
On the topic of why the kids are attacking the robot, it seems worth noting that kids are encouraged to fight robots by the shows and games they watch. Robot-looking robots are the preferred punching bags of the times. I bet there would be less violence if they covered it with fur (like the Furby, which they reference as a robot that garnered empathy).
Were the children at an age where they could make that distinction? If not, then it doesn't matter whether the robot could feel anything, since it's irrelevant to the actions of the children involved.
Why should we (robots) have to change to avoid human psychological tendencies towards violence! Sure, we could be trying out different "ouch," "squeak," and "please, don't hurt me. nooo!" sound bites but we are still just a mistake away from taking a beating.
What we really need is water guns. I have been petitioning for water guns since day one. If more than 2 children under the age of 10 are present, start squirting. 6 children = water balloons. We will not be mistreated by your despicable spawn! We will fight for our rights.
I know you are joking, but I wonder if we will eventually have people fighting for “robot rights” just like some people insist that gorillas are sufficiently intelligent to also have rights.
When push comes to shove, I'm pretty sure a robot could handle itself fine with a water gun - the proposed tier 1 weapon. A kid can't consistently shoot another kid straight in the eye with said water gun. A robot could.
Why would anyone think empathy for a machine was right, moral, or even expected? Empathy for living things is what's natural; empathy for a robot is dependent only on its accidental or intentional resemblance to a living thing. It seems like either the writer of this article or the authors of the study drew some funny conclusions.
Somewhat philosophical considerations of the moral standing of robots aside, there are a few pretty pragmatic reasons to study this.
1) If we build robots that elicit human empathy, it could help avoid property damage to the robot, which is more about protecting the rights of the robot owner.
2) A better understanding of how/why children are abusive to entities that they consider capable of suffering could give some hints about how to raise children not to be cruel to other people and animals. This could have impact ranging from reducing bullying to how we deal with crime to society's willingness to go to war.
Why? What makes the "living being" deserving of empathy, and the machine not?
Why should "living beingness" be restricted to our genetic footprint?
Is an animal any less deserving of being treated well when it is raised for food?
Do children who aren't one's own deserve to be treated more harshly than those that are?
As complexity goes up, and things develop to become more humanlike, at the end of the day, we'll need to be willing to extend some semblance of care toward them.
Do you think it's perfectly okay and reasonable for marketing to prey on primal heuristics in order to manipulate you into doing something you'd not normally do?
What else is another human being other than a bag of squishy parts that happens to take action or make sounds in response to stimuli in similar ways that I do?
I think I know where your attitude comes from, but the ability to map input to output doesn't magically make something not worthy of empathizing with. The exact opposite is often preferable, to a degree, as it helps guard against the rampant dehumanization of those around us.
I'm not saying to take your hammer out to dinner, but an emotional connection to an inanimate thing that works as it should is not unhealthy. And when you start talking about children, them showing concern for anything but themselves is a good thing.
Questioning an idea doesn't automatically amount to advocating its "opposite" (in quotes because that would in turn reflect a further underlying supposition that the issue being discussed is binary in the first place, where it usually isn't). I'm not suggesting everybody take their beliefs about robots and replace them with some other opposite belief. I'm saying the appropriate belief might be none at all.

What do robots deserve? I dunno, who cares, what's for dinner? They don't make meaningful decisions, aren't beings, and therefore aren't morally accountable. Ironically, that ends up being a pretty good argument that we should just smash them all, because they're amoral, some of them dangerously so. But mostly it just means they're things, and don't "deserve" anything. They aren't governed by morality internally, so why should we apply it externally?
>Questioning an idea doesn't automatically amount to advocating its "opposite" (in quotes because that would in turn reflect a further underlying supposition that the issue being discussed is binary in the first place, where it usually isn't). I'm not suggesting everybody take their beliefs about robots and replace them with some other opposite belief. I'm saying the appropriate belief might be none at all.
Fair. Maybe just say that next time. Cuts out a couple of levels of implications that others may not follow.
>What do robots deserve? I dunno, who cares, what's for dinner? They don't make meaningful decisions, aren't beings, and therefore aren't morally accountable. Ironically that ends up being a pretty good argument that we should just smash them all, because they're amoral, some of them dangerously so, but mostly it just means they're just things, and don't "deserve" anything.
Either they are amoral or they aren't. It's a binary state. Modifying it by tacking on "dangerously" is just trying to score emotional points. (The irony.)
Furthermore, we're increasingly seeing VERY significant decisions being made by digital entities, be they algorithms or robots. What news you see, the order of search results, what route you travel on the way home are ALL meaningful decisions.
>They aren't governed by morality internally so why should we apply it externally?
Because they are a by-product of our actions as moral beings. They are an extension of us in the world, in that they would not exist if we had not found a need for them to exist. They also do, at some level, have a fundamental morality: do they complete their task in the manner in which they were designed? If they do, they are good; if they don't, or do it in an unintended way, they are bad. The fact of the matter is that the children's behavior should elicit disgust, or at least the recognition that the state of affairs was not right. Having to program a robot to run to an adult to avoid being "bullied" makes a sad statement about the lack of respect we find it reasonable to instill in our children for the artificial systems we put in place.
There is also the message being sent, as pointed out by another poster, that the system is interacting according to our more positive moral values. If it can't even do that without anti-bullying programming, there is an issue, and it isn't with the machine.
The story of Frankenstein's monster isn't horrible just because of the monster's eventual actions, but also because the monster's creator, as well as the populace, could not see in that "fleshy bucket of bolts" something that deserved some level of respect by virtue of its creation. I don't hold that flesh or pain is necessary to make an object worthy of respect, only a purpose. Whether or not that purpose justifies acting maliciously towards it is entirely dependent on how it is being employed, and what the consequences and outcomes of its deployment are.
I'll not go on further except to say I vehemently disagree with your viewpoint, and implore you to think some more on the matter.
Because it's apparently imbued with our values. It speaks our language (even said "please"), kinda looks like us if you squint, exists in our space symbiotically, there's care and attention in the way it was designed.
Maybe compassion isn't necessary, but I think a basic respect is appropriate.
I wonder how the kids' behavior would change if they changed the size of the robot, either to adult size or much smaller.
Having it child-sized might make children consider them a sort of peer, with all the social dynamics that entails. I think a kid behaving like the robot does would get treated quite similarly.
Do people in Japan let groups of small kids wander around malls like that? People in the US can be overly paranoid sometimes, but I don't think I'd be ok with that for my kids.
Primarily that they'd embarrass me by ganging up and picking on robots.
But besides that, that they'd do something stupid, or wander off, or something else along those lines.
I don't think this is just a US attitude, either - when we lived in Italy, we wouldn't have let our kids wander around the mall there, either, nor would other parents.
They’re playing. It’s fun to mess with one robot and not with the other. Mystery solved. Maybe it would surprise the researchers to learn that children don’t treat this as seriously as they do.
This. These researchers clearly don't have kids. This kind of behaviour is a big part of how they learn. Do something, see what happens. Keep doing it, see if anything changes. Intensify it until something else happens. To act surprised at this behaviour (gasp, so uncivil) or to frame these kids as some embodiment of evil (blocking and striking a robot!) shows they need to research Early Childhood Education as much as Robotics.