
Tianjin robot incident raises alarm over public safety and robotics

pj_mukh

Why does this article read as if the robot actually got angry at a spectator? It did not; it does not have that capacity.

This was definitely a glaring safety issue, and the company should review all its failure modes that show up in public, but an "emotional" response this was not.

glenstein

I understand the concern but I feel like it was on the right side of the line. The article says:

>displayed aggressive behavior

>swinging its arms in a manner described as aggressive and violent, similar to human behavior

I can understand aggressive and violent as descriptions of behavior that don't necessarily (on charitable interpretation) imply an internal emotional state.

TeMPOraL

We could claim it was a self-preservation or collision-avoidance routine, perhaps mistakenly triggered by bad sensor input and/or a coding error. However, those are also the reasons a human could get "angry at the spectator" and display the same behavior.

Obviously there's a difference, but the similarities are uncanny.

digbybk

The AI emotion discussion is interesting, but at the end of the day, does it matter? The question is, is it safe? And if it's not safe, how unsafe is it?

janalsncm

Right. We don’t need to debate whether a loose tiger is angry, confused, or hungry in order to know that it’s not safe around people. We can leave understanding the contents of its mind to the philosophers.

What is very clear from the video is that the robots are an order of magnitude heavier and stronger than a person. That’s all you really need to know.

lurk2

It's important to note that a robot gone rogue is still a killer robot even if the robot doesn't hate you.

givemeethekeys

"As you can see from the smile on the robot's, let's call it face, your honor, this was clearly a friendly gesture."

DocTomoe

That's the other side of anthropomorphic robots: they get anthropomorphic attributes associated with them. A 4-axis robot arm hitting a worker is machinery with bad safety settings and/or a worker who ignored them. A humanoid robot hitting a worker looks a lot like another worker hitting a worker.

Also take into account that these humanoid robots are specifically designed to integrate into spaces which usually were not used by robots before, which immediately means more potential contact between them and non-trained personnel, even civilians.

I feel we are quickly approaching ST:TNG "The Measure of a Man" territory here: at what point does a machine stop being a machine and become a being? A strange, technological being, sure, but a being nonetheless. After all, there's a good argument to be made that we are essentially biological robots.

inglor_cz

We anthropomorphize animals all the time; it won't be any different with robots.

If one wants to be picky, we can't be certain about other people's emotions either. A psychopath may not feel any anger when hitting you, or he might be feeling something very different from what a normal person would label "anger".

janalsncm

The video is pretty unsettling, kind of showing how strong robots can be compared with humans. (We already knew this, but it’s good to remember.)

Reminds me a bit of the chess robot that broke a child’s finger: https://www.theguardian.com/sport/2022/jul/24/chess-robot-gr...

What annoyed me at the time was them describing the child as having broken some “rule” about waiting for the robot or something.

We should reject this framing. Robots need to be safe and reliable if we’re going to invite them into our homes.

Gracana

In an industrial setting, these robots would be placed behind interlocked barriers and you wouldn't be able to approach them unless they were de-energized. Collaborative robotics (where humans work in close proximity to unguarded robots) is becoming more common, but those robots are weak / not built with the power to carry their 50kg selves around, and they have other industry-accepted safeguards in place, like sensors to halt their motion if they crash into something.
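Those industry-accepted safeguards amount to something like the following loop, run every control cycle. This is a hypothetical, heavily simplified sketch (the function name and the numeric limits are made up for illustration; real systems follow standards such as ISO/TS 15066 for power- and force-limiting):

```python
# Simplified sketch of a collaborative-robot safeguard check.
# Both limits below are illustrative placeholders, not real spec values.

FORCE_LIMIT_N = 140.0   # max allowed contact force before a protective stop
SPEED_LIMIT_MS = 0.25   # reduced tool speed while humans may be nearby

def safeguard_step(contact_force_n: float, tcp_speed_ms: float) -> str:
    """Return the commanded state for this control cycle."""
    if contact_force_n > FORCE_LIMIT_N:
        return "protective_stop"   # crash detected: halt motion immediately
    if tcp_speed_ms > SPEED_LIMIT_MS:
        return "slow_down"         # cap speed in the collaborative workspace
    return "run"
```

The point is that the halt decision is a dumb, always-on check that runs regardless of what the motion planner wants, which is exactly what the festival robot appears to have lacked.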

scratchyone

That article is absurd, describing the robot as completely safe while blaming the kid, saying "children need to be warned".

pinkmuffinere

It is ridiculous to attribute intent to the motion, but I understand why people do — it really does give the impression of an aggressive, upset human. That’s unfortunate.

bumby

There is an often unaddressed risk in robotics because there is a lack of theory-of-mind. We’ve evolved to intuit what other humans are thinking (based on words, body language, and other context), which helps us predict behavior and mitigate risk. Unfortunately we can’t do the same with robuts, so there is a potential for more latent risk (same as dealing with “crazy” humans, where our mental models fail to predict behavior).

IMO this means we won’t be comfortable with robuts in safety-critical applications until they are well, well beyond human capabilities. This is where I think the crowd that aims for “human-level performance” is wrong; society won’t trust robuts until they are much, much better than humans.

pinkmuffinere

Ya, that makes sense to me, this is roughly how I feel about self driving cars as well — I want very good proof that it outperforms even the best drivers by a wide margin before I’ll actually use it. I feel that my friends and I are better drivers than average, even though I know that’s mathematically unlikely. So I need the self driving to be _really_ good before it attracts me. I know this is irrational; what I feel does not obey rational rules.

themanmaran

Yea, likely this was some kind of trip + glitch that happened to look like an attack. But it really did have a "boxing" style movement.

I saw a video of the Unitree [1] robot doing a kung fu routine the other day. I imagine developers are constantly programming in pre-scripted moves, similar to all the Boston Dynamics demo videos. They're great for showing off movement. It's conceivable that someone could run the wrong demo routine. Imagine the Atlas robot doing its classic backflip in the middle of a crowd.

[1] https://www.youtube.com/watch?v=iULi4-qz22I

bentcorner

Agree - it looks like low-quality robotics code, probably one-off written for this festival.

This article seems to try to ride the fears of AI and "bots are taking our jobs" but really this looks like plain old badly written software.

Large machines operating near people should always have failsafes. Having handlers who are expected to drag the bot around IMO isn't enough.

janalsncm

Doing that unpredictably is almost worse. Functionally, the point of anthropomorphizing is to tell a story that makes things predictable. In other words, “unsafe if angry, safe otherwise”.

But if you can’t tell if it’s “angry” then we have to assume it’s always unsafe. Of course this was always true.

givemeethekeys

Many people with older brothers will recall the time when the brother had no intention of actually hurting them. He was merely swinging his hands, moving closer... and closer.

randomfrogs

Don't anthropomorphize robots. They hate that.

usaphp

it looks like the robot tripped at the barricades and balanced itself.

imhereforwifi

That's what it looks like to me. It looked like the robot was trying to continue the handshake while the person pulled back; it stuttered, tripped on the bottom of the barricade, and lunged as it tried to stabilize itself.

datadrivenangel

Video of the robot punching someone in the crowd... Weird behavior.

bicepjai

>> The manufacturer, Unitree Robotics, attributed the incident to a "program setting or sensor error." Despite this explanation, the event has heightened ethical and safety concerns regarding the use of robots in public venues. The local community is calling for urgent measures to ensure that robots' actions align with social norms, emphasizing the need for regulatory and legal frameworks to govern robot-human interactions.

I did not think I would see this in my lifetime after watching the Animatrix.

rwmj

Could they have made them any more creepy? It's way too much like the Terminator from Terminator: Dark Fate (https://collider.com/terminator-dark-fate-images-featurette/)

wewewedxfgdf

We need some sort of special police unit to liquidate renegade robots.

Those police officers need a catchy name.

daotoad

These heroic officers will have to deal with malefactors that want to rock and roll all night and party every day. This is why they will need an unlimited supply of waterfalls and sandwiches.

cholantesh

Liquidate sounds a tad aggressive, maybe 'retire'?

mattlondon

Years ago when I was a student at uni I volunteered to take part in a research study with robots.

I went to a rented house near campus where they had a normal living room set up and sat me down on a dining chair in the room and handed me a box with a button on it.

"The robot will approach you. Just press the button when you feel like it is getting too close" they said.

They left the room so I was alone, and a few minutes later the wheeled robot entered the room and started slowly but deliberately to move towards me.

Let's just say the robot got too close.

I was sat there alone as the robot moved towards me, frantically mashing the button, but it did not stop until it actually collided with my feet.

To this day I am not sure if it was meant to stop or not, or even if it was a robotics research project at all or actually a psychology research project.

In hindsight it was as terrifying as it sounds. Still, I got £5 for it.

bluSCALE4

Terrifying. I get that sensation playing VR games when robots swing at me. Don't enjoy it at all.

DocTomoe

5 quid? That was a psychology research project, good sir. I hope you got lab hours for that.

v9v

Seems to me that it loses its balance and extends its arms to rebalance itself, similar to what happens in their demo videos: https://www.youtube.com/watch?v=GtPs_ygfaEA?t=24 I've worked with their robot dogs before, and they kick their legs really fast when they sense they are falling over.

moribvndvs

Tangentially, this is the sort of thing that generally bothers me most about AI. Well, second most thing. The first is it being abused by humans to do terrible things. The second is it being built and maintained by humans, where a thing can easily malfunction in ways the people building and maintaining them can’t comprehend, predict, or prevent, especially when it’s built by organizations with a “move fast and break things” mentality and a willingness to cut corners for profit. The torrents of half-broken tech we are already drowning in don’t exactly inspire confidence.

thallavajhula

This was one of my major concerns when Elon announced Tesla Optimus. There's a real need for government regulation on the bot specs. I blogged about this a while ago.

Something like:

1. They shouldn’t be able to overpower a young adult. They should be weak.

2. They should be short.

3. They should have very limited battery power and should require human intervention to charge them.

4. They should have limited memory.

glitchcrab

I don't disagree with you, but at the same time if you cripple the robot too much then it has no value - no sane company would develop a product which nobody would want. That's commercial suicide.

thallavajhula

True. I guess, there has to be a balance to get to a sweet spot.

Havoc

There is a video of it floating around. It's a sudden forward movement that certainly looks alarming, but I wouldn't call it "unexpectedly displayed aggressive behavior".

More like it's hardcoded to do something (maintain balance or whatever) without limits on how fast it can move to achieve the goal.

i.e. bad safety controls rather than malice
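The "without limits" part is the crux. A balance controller that scales its response with the size of the disturbance will demand violent motion after a trip unless the commanded velocity is clamped. A toy sketch of the difference (gain and cap values are invented for illustration):

```python
# Hypothetical balance-recovery command, with and without a velocity clamp.
# An unclamped proportional response to a large tilt error can demand
# arbitrarily fast motion; the clamped version cannot exceed MAX_VEL.

GAIN = 4.0      # proportional gain on tilt error (illustrative)
MAX_VEL = 0.5   # m/s cap for operation near people (illustrative)

def recovery_velocity(tilt_error: float, clamped: bool = True) -> float:
    v = GAIN * tilt_error                  # naive proportional response
    if clamped:
        v = max(-MAX_VEL, min(MAX_VEL, v)) # saturate the command
    return v
```

With a large tilt error the unclamped controller asks for several times the safe speed, while the clamped one saturates; the robot recovers more slowly but never lunges.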

LVB

I really liked the show "Humans" (https://en.wikipedia.org/wiki/Humans_(TV_series)). I feel like we're catching up to that timeline.