Children Beating Up Robot Inspires New Escape Maneuver System (2015) (ieee.org)
88 points by okket on July 8, 2018 | 61 comments


>If the robot is statistically in danger, it changes its course towards a more crowded area or a taller person.

Just an avoidance algorithm. I think they are missing out on getting into robot/human social interaction psychology, which will become increasingly important.

For instance, are there factors (likely) in the robot's appearance or behavior which make negative interactions more likely? Can they create a robot that encourages kids to interact with it positively? Can they create a robot that is a magnet for even more abuse?

Frankly, I'm betting that the robot in question is pretty crappy from the kids' viewpoint. It probably has no physical interaction capabilities beyond letting kids get in its way (to make it stop) or push and hit it to see what it does (nothing interesting). It probably has no interesting voice interactions either, so I can see the situation quickly escalating to frustration.

Bottom line, if you build a "social interaction" robot, you have to build your crappy robot to interact with real humans. You don't put a garbage can on wheels and then act surprised that people treat it like a garbage can.


> are there factors (likely) in the robot's appearance or behavior which make negative interactions more likely?

I have to be honest, my first gut-reaction was this sounds like victim blaming.[1]

My second reaction, after a moment to think, is that this is actually probably a lot closer to that phenomenon, and the aspects that lead to it, than I previously thought.

I think along the spectrum of "things that look alive and things that don't" (as in the experiment in the article where kids hold different things upside down), it extends all the way to "people that look or act like me and people that don't" which is how we get some of our more unsettling social behaviors.

Part of this is, as creators, taking care to not trigger negative social behaviors if we can, but it definitely feels like there's a social and cultural aspect we're still working on.

As a simple example, if a machine were able to demonstrate sentience and sapience, what percentage of people would be willing to treat it as such? I imagine it depends quite a bit on the country in question, and possibly the region within the country. Religion might matter quite a bit as well. If a machine is too abstract, or you get too caught up in whether true general AI is possible or likely, what about an alien intelligence? Should that difference matter?

1: I'm presenting this more as a general thought-piece, so hopefully the parent comment or anyone else that made similar comments doesn't take this as an attack; it's really not meant that way. I've made similar comments and thought similarly.


> When it encounters a human, the system calculates the probability of abuse based on interaction time, pedestrian density, and the presence of people above or below 1.4 meters (4 feet 6 inches) in height. If the robot is statistically in danger, it changes its course towards a more crowded area or a taller person.
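The quoted heuristic is easy to caricature in code. Here's a toy sketch, not the researchers' actual model: the 1.4 m threshold comes from the article, but every weight, function name, and field here is invented for illustration.

```python
from dataclasses import dataclass

CHILD_HEIGHT_M = 1.4  # article's threshold: people below ~1.4 m are treated as children


@dataclass
class Pedestrian:
    height_m: float
    distance_m: float


def abuse_risk(interaction_time_s: float,
               pedestrian_density: float,
               nearby: list[Pedestrian]) -> float:
    """Toy risk score in [0, 1]: long interactions and child-heavy groups
    push the score up; crowds and nearby adults push it down."""
    children = sum(1 for p in nearby if p.height_m < CHILD_HEIGHT_M)
    adults = len(nearby) - children
    score = (0.1 * min(interaction_time_s, 10)        # lingering kids are riskier
             + 0.5 * (children / (len(nearby) or 1))  # child-heavy groups are riskier
             - 0.2 * min(pedestrian_density, 1.0)     # crowds deter abuse
             - 0.3 * (adults > 0))                    # a nearby adult deters abuse
    return max(0.0, min(1.0, score))


def escape_target(nearby: list[Pedestrian]):
    """The 'taller person' rule: head for the nearest adult, if any."""
    adults = [p for p in nearby if p.height_m >= CHILD_HEIGHT_M]
    return min(adults, key=lambda p: p.distance_m) if adults else None
```

Two lingering children alone score near 1.0; one adult passing by in a dense crowd scores 0. The actual system presumably fits these weights from the abuse episodes they observed rather than hand-picking them.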

So what you're saying is that we're teaching robots to profile!

> The interesting part came when they held the Furby. The children said that, even though they knew it was just a toy, they worried that they were “hurting” the robot (which loudly protested being upside down), suggesting that they felt some empathy for the furry machine.

Security robots need to become furries!


> So what you're saying is that we're teaching robots to profile!

"Profile" is such an ugly word for what every single human is doing all the time every day.


Speaking of robot abuse:

We did a hidden camera experiment about how people empathize with a broken down robot begging for assistance on a sidewalk in Oakland:

https://www.youtube.com/watch?v=KXrbqXPnHvE

>Stupid Fun Club's "Empathy" One Minute Movie about Robot Empathy, written by Will Wright. Robot brain and personality simulation programmed by Don Hopkins.

We also did another experiment about servitude with an inept obsequious robot waiter at a diner:

https://www.youtube.com/watch?v=NXsUetUzXlg

>Stupid Fun Club's "Servitude" One Minute Movie about Robot Servitude, written by Will Wright. Robot brain and personality simulation programmed by Don Hopkins.


The robot asking for help doesn't look credible to me. It's too far beyond cheap commercial offerings, and as it is unattended, it is likely not a research robot in true distress. Were I passing by, I think I would be on the defensive, assuming the robot to be a decoy or distraction in the worst case, but at a minimum a prank. Maybe my kids, who have seen Short Circuit, would provide a more genuine response. But I don't think you're getting a good read on actual empathy from adults here.


Really? Its phone number is 555-xxxx and the passerby says the call can't be completed? How are we supposed to take this seriously?

I'm glad you had fun, but I don't think these are relevant here.


It seems to me that there were several different factors in the Radio Lab experiment. I think a few of the important distinctions include:

1) the Furby was intentionally designed to look like a cute thing: oversized eyes, fluffy, etc.

2) it verbally protested. Which is the big thing they were stressing in the episode.

3) the children were alone and not being pressured by someone else in the group. They were also aware they were being watched.


It also makes me wonder about the interview questions they talk about in the article.

If they asked the kids, “Do you think you were hurting the robot?” I think most kids would interpret this as a leading question (especially after getting caught and taken aside).

I’d guess they’d answer yes because they think it’s the answer they’re supposed to say and they think they’re in trouble.


> 2) it verbally protested. Which is the big thing they were stressing in the episode.

I think this is key, kids tend to test where the limits go. They will push farther and farther until they are told they have passed the limit of acceptable behavior. I have kids and I know that just telling them 'no' may not always be enough, but it's a start.


That is what I kept thinking while reading the article too.

1) The YouTube link from the article mentions that most children let the robot pass, while the article reads as if "100% of children are evil for no purpose".

2) Another comment mentioned the survey results possibly being skewed, because the children give the answer they expect is the correct answer. I think that's at least plausible.

3) Only a small subset of observed children were interviewed: 28 total, because of a long list of selection criteria, according to the second paper linked in the article.

It all feels strangely written.


On the topic of why the kids are attacking the robot, it seems worth noting that kids feel encouraged to fight robots by the shows and games they watch. Robot-looking robots are the preferred punching bags of the times. I bet there would be less violence if they covered it with fur (like the Furby, which they reference as a robot that garnered empathy).


Why did the children abuse the robot? Because it didn't fight back. I instantly remembered my primary school, because sometimes I felt like this robot.


This robot did not feel anything at all; let's not anthropomorphize.


Were the children at an age where they could make that distinction? If not, then it doesn't matter whether the robot could feel anything, since it's irrelevant to the actions of the children involved.


Why should we (robots) have to change to avoid human psychological tendencies towards violence! Sure, we could be trying out different "ouch," "squeak," and "please, don't hurt me. nooo!" sound bites but we are still just a mistake away from taking a beating.

What we really need is water guns. I have been petitioning for water guns since day one. If more than 2 children under the age of 10 are present, start squirting. 6 children = water balloons. We will not be mistreated by your despicable spawn! We will fight for our rights.


It starts with a water gun.

It ends with a plunger and the robot screaming EXTERMINATE.


A 2008 survey indicated that nine out of ten British children were able to identify a Dalek correctly.


Maybe a plunger is enough to scare British children out of messing with a robot?


Whatever happens, you started it. Not us. I don't care what the Doctor says.


I know you are joking, but I wonder if we will eventually have people fighting for “robot rights” just like some people insist that gorillas are sufficiently intelligent to also have rights.


It depends on the AI's levels of consciousness and suffering; Kurzgesagt has a short video about it: https://www.youtube.com/watch?v=DHyUYg8X31c


You never saw A.I. by Spielberg? Water balloons won't cut it when you have that one psycho kid who doesn't care about you.


When push comes to shove, I'm pretty sure a robot could handle itself fine with a water gun - the proposed tier 1 weapon. A kid can't consistently shoot another kid straight in the eye with said water gun. A robot could.


Why would anyone think empathy for a machine was right, moral, or even expected? Empathy for living things is what's natural; empathy for a robot is dependent only on its accidental or intentional resemblance to a living thing. It seems like either the writer of this article or the authors of the study drew some funny conclusions.


Somewhat philosophical considerations of the moral standing of robots aside, there are a few pretty pragmatic reasons to study this.

1) If we build robots that elicit human empathy, it could help avoid property damage to the robot - which is more about protecting the rights of the robot owner.

2) A better understanding of how/why children are abusive to entities that they consider capable of suffering could give some hints about how to raise children not to be cruel to other people/animals. This could have impact ranging from reducing bullying to how we deal with crime to society's willingness to go to war.


Why? What makes the "living being" deserving of empathy, and the machine not?

Why should "living beingness" be restricted to our genetic footprint?

Is an animal any less deserving of being treated well when it is raised for food?

Do children that aren't descended from someone deserve to be treated more harshly than one's own?

As complexity goes up, and things develop to become more humanlike, at the end of the day, we'll need to be willing to extend some semblance of care toward them.

Do you think it's perfectly okay and reasonable for marketing to prey on primal heuristics in order to manipulate you into doing something you'd not normally do?

What else is another human being other than a bag of squishy parts that happens to take action or make sounds in response to stimuli in similar ways that I do?

I think I know where your attitude comes from, but the ability to map input to output doesn't magically make something not worthy of empathizing with. The exact opposite is often preferable to a degree as it helps inure one to rampant dehumanization of those around them.

I'm not saying to take your hammer out to dinner, but an emotional connection to an inanimate thing that works as it should is not unhealthy. And when you start talking about children, them showing concern for anything but themselves is a good thing.


Questioning an idea doesn't automatically amount to advocating its "opposite" (in quotes because that would in turn reflect a further underlying supposition that the issue being discussed is binary in the first place, where it usually isn't). I'm not suggesting everybody take their beliefs about robots and replace them with some other opposite belief. I'm saying the appropriate belief might be none at all. What do robots deserve? I dunno, who cares, what's for dinner? They don't make meaningful decisions, aren't beings, and therefore aren't morally accountable. Ironically that ends up being a pretty good argument that we should just smash them all, because they're amoral, some of them dangerously so, but mostly it just means they're just things, and don't "deserve" anything. They aren't governed by morality internally so why should we apply it externally?


>Questioning an idea doesn't automatically amount to advocating its "opposite" (in quotes because that would in turn reflect a further underlying supposition that the issue being discussed is binary in the first place, where it usually isn't). I'm not suggesting everybody take their beliefs about robots and replace them with some other opposite belief. I'm saying the appropriate belief might be none at all.

Fair. Maybe just say that next time. Cuts out a couple of levels of implications that others may not follow.

>What do robots deserve? I dunno, who cares, what's for dinner? They don't make meaningful decisions, aren't beings, and therefore aren't morally accountable. Ironically that ends up being a pretty good argument that we should just smash them all, because they're amoral, some of them dangerously so, but mostly it just means they're just things, and don't "deserve" anything.

Either they are amoral or they aren't. It's a binary state. Modifying it by tacking on "dangerously" is just trying to score emotional points. (The irony.)

Furthermore, we're increasingly seeing VERY significant decisions being made by digital entities, be they algorithms or robots. What news you see, the order of search results, what route you travel on the way home are ALL meaningful decisions.

>They aren't governed by morality internally so why should we apply it externally?

Because they are a by-product of our actions as moral beings. They are an extension of us in the world in that they would not exist if we had not found a need for them to exist. They also do at some level have a fundamental morality. Do they complete their task in the manner in which they were designed? If they do, they are good; if they don't, or do in an unintended way, they are bad. The fact of the matter is that the children's behavior should elicit disgust or the recognition that the state of affairs was not right. Having to program a robot to run to an adult to avoid being "bullied" makes a sad statement about the lack of respect we find it reasonable to instill in our children for the artificial systems put in place.

There is also the message being sent, as pointed out by another poster, that the system is interacting according to our more positive moral values. If it can't even do that without anti-bullying programming, there is an issue, and it isn't with the machine.

The story of Frankenstein's monster isn't just horrible because of the monster's eventual actions but also because the monster's creator, as well as the populace could not see in that "fleshy bucket of bolts" something that deserved some level of respect by virtue of its creation. I don't hold that flesh or pain is necessary to make an object worthy of respect. Only a purpose. Whether or not that purpose justifies acting maliciously towards it is entirely dependent on how it is being employed, and what the consequences and outcomes of its deployment are.

I'll not go on further except to say I vehemently disagree with your viewpoint, and implore you to think some more on the matter.


Organic tissue and the ability to feel pain, for one. Why should anyone feel compassion for a bucket of nuts and bolts?


Because it's apparently imbued with our values. It speaks our language (even said "please"), kinda looks like us if you squint, exists in our space symbiotically, there's care and attention in the way it was designed.

Maybe compassion isn't necessary, but I think a basic respect is appropriate.


Ability to feel pain - yes. But organic tissue? Really? Organic tissue is just an extremely complex molecular machine, not magic.


People sure love their cars!


I wonder how the kids' behavior would change if they changed the size of the robot, either to adult size or much smaller.

Having it child-sized might make children consider them a sort of peer, with all the social dynamics that entails. I think a kid behaving like the robot does would get treated quite similarly.


If I saw a robot roaming alone in public, bugging everybody in its way, I would definitely get in its way on purpose.


I cannot wait for driverless cars. I'm going to make a folding paper traffic cone to trap them in parking spots.


Just remember, they'll be watching you back.


Don't care. I'll pay the littering ticket. They pay for someone to come rescue the car.


Heh, until they make blocking driverless cars a state jail felony.


"False imprisonment"


> showing that it happens primarily when the kids are in groups and no adults are nearby

Lord of the Flies validated by statistical spatial analysis.


The robots are a threat to children and not friendly to them (despite what the article posits as an axiom); they serve adults. So they are bullied.


Do children tend to bully entities they perceive as threats?


If they are small enough, yes


How are these robots a threat to the children?


They will take all their jobs!


This reminded me that a lot of what my peers did to each other when we were kids would have been felonies had we been adults.


That's what I did: run up to adults to stop the bullying.


I'm reminded of this brilliant but horrific short (12min)

https://vimeo.com/21216091


A bit slow but a fine example of this emerging horror genre. Thanks for the link. Asimov is looking practically pretentious at this point.


Do people in Japan let groups of small kids wander around malls like that? People in the US can be overly paranoid sometimes, but I don't think I'd be ok with that for my kids.


Japanese children are notoriously independent [0]. Very different from the US.

[0]https://www.citylab.com/transportation/2015/09/why-are-littl...



That would not surprise me since little kids often take the subway themselves to go to school at age 5 or 6


What is it that you fear would happen to them?


Primarily that they'd embarrass me by ganging up and picking on robots.

But besides that that they'd do something stupid, or wander off, or something else along those lines.

I don't think this is just a US attitude, either - when we lived in Italy, we wouldn't have let our kids wander around the mall there, either, nor would other parents.


Yes, and it's normal here where I live (in Europe) as well. I used to always go to the toy store while parents were shopping.


It used to be normal here in the US....


Just as an aside, this article has language that I wouldn't expect to see from the IEEE. :-)


They’re playing. It’s fun to mess with one robot and not with the other. Mystery solved. Maybe it would surprise the researchers to learn that children don’t treat this as seriously as they do.


This. These researchers clearly don't have kids. This kind of behaviour is a big part of how they learn. Do something, see what happens. Keep doing it, see if anything changes. Intensify it until something else happens. To act surprised at this behaviour (gasp, so uncivil) or to frame these kids as some embodiment of evil (blocking and striking a robot!) shows they need to research Early Childhood Education as much as Robotics.



