Robots With Knives
June 2, 2010 8:43 AM

 
I would like to get that head whacking robot arm installed on my boss' chair. Every time he sends me an email with some new HR policy change I would press a button and it would just nail him right in the skull.
posted by spicynuts at 8:55 AM on June 2, 2010


This is how it starts, people! This is how it starts!
posted by The Whelk at 8:57 AM on June 2, 2010 [1 favorite]


The main goal of the study was to understand the biomechanics of soft-tissue injury caused by a knife-wielding robot.

Oh sure, if you call it a "study" no one freaks out, but call it a "hobby" or "sexual predisposition" and suddenly everyone's a judge.
posted by quin at 9:02 AM on June 2, 2010 [19 favorites]


Developments like this and the SawStop (if you can call that "robotics") are refreshing. But the more I see their complexities and limitations, the more I worry that the Three Laws aren't truly possible at all. In Asimov's world, robots are able to make judgment calls about what constitutes a violation, and this is hard-coded at a low enough level to override all other instructions with a minimal risk of conflict or malfunction. But there are too many variables at play in real life, and AI research has stalled to the point that robots may never (in our lifetimes) be able to understand their environments well enough to make three-laws assessments or activate their safety systems reliably in anything but highly specialized situations. In other words, table saws and industrial cutters may get smarter, but don't hold your breath for an all-purpose kitchen droid you can trust with your grandkids.
posted by The Winsome Parker Lewis at 9:06 AM on June 2, 2010
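
As a rough sketch of the kind of low-level override being described here (SawStop-style: the safety check sits below the task layer and preempts it unconditionally), consider the minimal Python loop below. Every name in it (arm, force_sensor, and so on) is hypothetical; real interlocks of this sort live in hardware, not application code.

    FORCE_LIMIT_N = 30.0  # contact force, in newtons, above which we abort

    def control_step(arm, force_sensor, task_command):
        """One tick of the control loop. The safety check runs first
        and unconditionally overrides whatever the task layer wants."""
        if force_sensor.read_newtons() > FORCE_LIMIT_N:
            arm.halt()     # stop all motion immediately
            arm.retract()  # back away from the contact point
            return "SAFETY_OVERRIDE"
        task_command(arm)  # runs only if the safety check passed
        return "OK"

The point of the comment survives the sketch: the easy part is the override itself; the hard part is a sensor and a threshold that correctly distinguish "harm" from everything else.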


The researchers acknowledge that there are huge reservations about equipping robots with sharp tools in human environments. It won't happen any time soon.

Aw c'mon. What could possibly go wrong?
posted by zarq at 9:06 AM on June 2, 2010


I'm afraid to click on any of these links.
posted by craniac at 9:08 AM on June 2, 2010 [1 favorite]


The Computer God Sealed Robot Operating Cabinet has many robot arms, with electrical and laser beam knife robot arms.
posted by anazgnos at 9:10 AM on June 2, 2010 [1 favorite]


But the more I see their complexities and limitations, the more I worry that the Three Laws aren't truly possible at all.

The militarization of robotics turns Asimov's Laws into the pipe dream of a science-fiction author.

You don't want your chopping-droid to spare a kitchen full of terrorists, do you?
posted by ennui.bz at 9:13 AM on June 2, 2010


Well, when Asimov wasn't contemplating the interpretation of the Three Laws in moral gray areas, there was always the fear that some robots wouldn't be three-law compliant at all.

But I'm thinking that in real life, it's the opposite: most robots won't even ATTEMPT to comply, and the very few that do will be so complex and necessarily specialized that their "three-laws compliance" will be impractical and unreliable in environments that aren't finely controlled ahead of time.
posted by The Winsome Parker Lewis at 9:23 AM on June 2, 2010


This would be so much less terrifying if only the STAB-O-TRON 5000 had the face of a teddy bear.
posted by Sys Rq at 9:26 AM on June 2, 2010


I never believed in the feasibility of Asimov's three laws of robotics and I doubt that Asimov did either. Rather than describing them as a pipe dream (as ennui.bz does) I would describe them as an intellectual exercise. It is a common strategy in writing fantasy to start with a premise that you know to be impossible, and to then work out the logical consequences of that premise. It doesn't tell you much about the real world, but it is still an interesting exercise in logical extrapolation.
posted by grizzled at 9:32 AM on June 2, 2010


What's the matter, red? You scared?
posted by mazola at 9:33 AM on June 2, 2010 [1 favorite]


But I'm thinking that in real life, it's the opposite: most robots won't even ATTEMPT to comply, and the very few that do will be so complex and necessarily specialized that their "three-laws compliance" will be impractical and unreliable in environments that aren't finely controlled ahead of time.

well, that more or less summarizes the state of AI in general...

I guess Clarke in 2001 is saying any moral calculus, if it is comprehensive (practical), will necessarily be incomplete, i.e. include cases where the calculus tells you to kill them all. But, for the state of the art, it's probably challenging for AI to determine whether a target is human at all, or even an object, i.e. not a shadow, heat distortion, etc. It's OK to shoot clouds, right?

Better not to give the robot a knife or missile launcher in the first place.
posted by ennui.bz at 9:34 AM on June 2, 2010


err...complete != consistent

I guess Clarke in 2001 is saying any moral calculus, if it is comprehensive (practical), will necessarily be not consistent, i.e. include cases where the calculus tells you to kill them all.
posted by ennui.bz at 9:38 AM on June 2, 2010


Why would you give your robot a scalpel or a steak knife if you weren't expecting it to be cutting meaty things? How do you determine "good meat" from "bad meat?"
posted by public at 9:43 AM on June 2, 2010 [2 favorites]


The problem with the Three Laws is that their effectiveness depends entirely on how smart the robot in question is. "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Based on what? How far out does this poor robot have to model its actions? It's one thing if it confines itself to just the stripped-down world of the factory floor and keeping human external integument continuous and unbroken, and maybe imparting less than some tiny number of newton-seconds of momentum to their tender flesh. Sort of a SHRDLU with a "do not touch the pinkish-brown wobbly thing while you are slicing the blue pyramid" added in.

For anything more complex than that, I imagine their little positronic brains melting into sludge at the analysis paralysis of contemplating the effects of their actions even an hour hence, much less a year. "Oh man, that human doesn't even know she's got that tumor that shows up on my infrared, and my chemical samplers are picking up all kinds of specific markers, and I have this scalpel welded to my hand ... if I could just cut it out, but I have no way of communicating because I'm just a silent makerbot ARGH!"

I have no mouth and I see something in your bloodstream.
posted by adipocere at 9:43 AM on June 2, 2010
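
adipocere's "tiny number of newton-seconds" constraint is easy enough to state as code, and stating it shows why it's hard in practice: the numbers below are invented, and the hard part, knowing the effective mass and whether a human is actually in the workspace, is exactly what a robot can't reliably sense. A toy Python version:

    MAX_IMPULSE_NS = 2.0  # allowable impulse near a human, in newton-seconds

    def motion_allowed(effective_mass_kg, tool_speed_ms, human_in_workspace):
        """Worst-case impulse on a rigid stop is roughly m_eff * v."""
        worst_case_impulse = effective_mass_kg * tool_speed_ms
        if human_in_workspace and worst_case_impulse > MAX_IMPULSE_NS:
            return False  # too much momentum to risk around tender flesh
        return True

    # A 4 kg effective mass at 1.5 m/s carries ~6 N*s: refused.
    assert not motion_allowed(4.0, 1.5, human_in_workspace=True)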


You had me at "robots with knives."
posted by Cool Papa Bell at 9:48 AM on June 2, 2010


Wait, there's a robot manufacturer called VECNA? Dude, fire your marketing guy. People that love robots also play D&D and know about this other guy named Vecna.
posted by Cool Papa Bell at 9:50 AM on June 2, 2010 [2 favorites]


Luckily, we may be approaching Asimov's 1st law... or not.

I'm betting "not".
posted by Thorzdad at 9:57 AM on June 2, 2010


"A robot may not injure a human being or, through inaction, allow a human being to come to harm."

I totally do see why this is a good thing, but it makes robots sound like paladins in a kind of dull way.

Whoa, Firefox spell check totally knows the word "paladins"!
posted by Mrs. Pterodactyl at 10:02 AM on June 2, 2010


The thing with the Three Laws is that in the universe they inhabit, the robots are nearly always human analogs. There aren't giant industrial robots bolted to a floor, incapable of moving to aid a person in trouble, but giant industrial machines operated by very human-shaped robots capable of acting in human-like ways.

This makes a big difference, because he wasn't imagining a world where regular machines would have to make Three-Laws distinctions, and anything that would be called a "robot" would be powered by a positronic brain, giving it, at a minimum, a level of intelligence more than capable of acting within the framework of the Laws.

That said, the whole point of the Three Laws was really to see how far he could play with the idea of logical arguments. Hell, the nine short stories that make up I, Robot are all explorations of what happens when you make just a minor change to one of the Three Laws.
posted by quin at 10:08 AM on June 2, 2010


Your moral calculus is derivative.
posted by Babblesort at 10:08 AM on June 2, 2010 [1 favorite]


SkyNet abides.
posted by Halloween Jack at 10:10 AM on June 2, 2010 [1 favorite]


Why am I suddenly reminded of the need to call Dr Susan Calvin? Like IMMEDIATELY
posted by infini at 10:17 AM on June 2, 2010


See my previous rant about Asimov's Three Laws.

It's important to look at this less as "robots will kill us all with knives!" and more as "robots are dumb!" The development of powerful arms and manipulators has moved faster than the integration and "intelligence" of mechanical compliance, safety sensors, etc. -- and most robots still operate in very controlled environments, like factories or research labs. There hasn't been a reason to make them safe to operate around humans until just recently, as there are opportunities for them to move out into uncontrolled environments and close to humans.
posted by olinerd at 10:28 AM on June 2, 2010
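
A crude illustration of the compliance-plus-sensing combination olinerd describes, with made-up interfaces: clamp what the controller is allowed to command, and treat a large gap between commanded and measured joint torque as unexpected contact. This sketches the general idea, not any particular robot's controller.

    TORQUE_CLAMP_NM = 15.0     # never command more torque than this per joint
    COLLISION_MARGIN_NM = 5.0  # commanded-vs-measured gap that means contact

    def safe_torque_step(joint, commanded_torque_nm):
        clamped = max(-TORQUE_CLAMP_NM, min(TORQUE_CLAMP_NM, commanded_torque_nm))
        joint.apply_torque(clamped)
        external = joint.measured_torque_nm() - clamped
        if abs(external) > COLLISION_MARGIN_NM:
            joint.go_limp()  # drop to compliance: give way instead of fighting
            raise RuntimeError("unexpected contact detected")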


Your moral calculus is derivative.

But it is also integral.
posted by Zalzidrax at 10:31 AM on June 2, 2010


Heh, the head-bashing study has been performed by the German Aerospace Center in collaboration with the Institute of Man-Machine-Interaction of the RWTH Aachen. Some interaction...

The RWTH also happens to be my old alma mater, and it is mostly populated by engineering and medicine students. Those videos are the obvious result of bringing the two sorts together (plus a generous helping of booze, also quite present in Aachen).
posted by Skeptic at 10:35 AM on June 2, 2010


Aachen Man-Machine Interaction sounds like a lost David Bowie album.
posted by The Whelk at 10:41 AM on June 2, 2010 [1 favorite]


-Armed South African robot rebels against its monkey masters, kills 9 and injures 14 with cannon. When the knife version of this headline arrives, don't say you weren't warned.

-Robots trying to kill humans with weapons we gave them is bad enough. But robots trying to f*ck humans with sex toys we strapped on them...

-What I'm saying is, Pirates don't trust robots, no we don't! Do you own a copy of How To Survive a Robot Uprising?
posted by Pirate-Bartender-Zombie-Monkey at 10:53 AM on June 2, 2010


. "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Based on what? How far out does this poor robot have to model its actions?

Did you make it through to the end of the Foundation series?
posted by carsonb at 12:12 PM on June 2, 2010


Knives?!? Who needs knives? This guy could kick our collective asses with a pointy stick.
posted by UncleJoe at 12:22 PM on June 2, 2010 [1 favorite]


Why would you give your robot a scalpel or a steak knife if you weren't expecting it to be cutting meaty things? How do you determine "good meat" from "bad meat?"

Robots are pretty good at cutting meaty things, btw:
Meat Processing Robot
M-710iB cutting meat
posted by pantsrobot at 2:32 PM on June 2, 2010


Not excited that pantsrobot is keeping track of these brethren.
posted by carsonb at 3:04 PM on June 2, 2010


A lot of people would consider the development of robots that fuck you as a triumph of Second Law compliance, and a major sign that the Rapture of the Nerds is upon us.
posted by localroger at 3:54 PM on June 2, 2010


The problem with the "Three laws" formulation is that robots in reality can be pretty unsophisticated. The term applies to things that are little more then electronic windup toys, maybe with course correction or something, and can go all the way up to systems with sophisticated algorithms for moving around, working with cloth, or whatever. But we are a long way off from any kind of robot that can interact with humans and make decisions on it's own for a long time.

In other words, you might be able to program a robot to try to kill everything in a certain area, but it would have to be programed to go in and out of "Kill mode" at some human command. We are not anywhere near having the AI needed to have a robot be in "Kill Mode" all the time and deciding when to actually do it based on some social circumstances.

Basically determining who is a 'good guy' and who's a 'bad guy' are a social judgment, and robots are not anywhere near capable of making that determination.
posted by delmoi at 5:28 PM on June 2, 2010
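
delmoi's mode-gating point can be put as a state machine: the robot never arms itself, only an explicit human command flips the switch, and the only transition the machine makes on its own is back to safe. A hypothetical Python sketch:

    import time

    SAFE, ARMED = "SAFE", "ARMED"
    ARMED_TIMEOUT_S = 30.0  # armed mode expires unless a human renews it

    class ModeGate:
        def __init__(self):
            self.mode = SAFE
            self.armed_at = None

        def human_command_arm(self):
            self.mode = ARMED  # only a human command reaches this line
            self.armed_at = time.monotonic()

        def human_command_disarm(self):
            self.mode = SAFE

        def current_mode(self):
            # the one decision made autonomously: reverting to SAFE
            if self.mode == ARMED and time.monotonic() - self.armed_at > ARMED_TIMEOUT_S:
                self.mode = SAFE
            return self.mode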


What's hilarious about Asimov's 3 laws is that they were essentially political. People were afraid of robots, so they passed these laws so that people wouldn't be scared of them. But Asimov totally ignored humanity's tendency to jump headfirst into whatever technological or harebrained idea they could come up with and worry about the consequences later.
posted by delmoi at 5:40 PM on June 2, 2010


It seems like the last robot, with the APOBS, is following the 1st law. That weapon is not for attacking people; it is for destroying other weapons that are designed to hurt people. iRobot 710 is doing its utmost to ensure that those soldiers do not come to harm through barbed wire or mines.
posted by agentofselection at 6:56 PM on June 2, 2010



