
Robots Will Kill You, But Only If A Human Tells Them To


In violation of the First Law of Robotics, the Pentagon has confirmed: if a robot is going to kill you, it will be a human who directed it to do so.

The government made it crystal clear that every time one of its drones drops one of its lethal payloads, the decision will have been made by a human being who can be held accountable if necessary.

Human rights groups have been nervous about advances in robotics, including autonomous robots that may one day be able to make critical decisions for themselves. However, according to a new policy directive issued by Deputy Defense Secretary Ashton Carter, a SkyNet-style scenario is off the table.

On November 21st, Carter signed a series of instructions designed to "minimize the probability and consequences of failures" in autonomous or semi-autonomous armed robots that could lead to unintended engagements, starting at the design stage. Essentially, the Pentagon wants to make sure that under no circumstances should any of the military's many Predator, Reaper, or similar drones make the decision to harm a human being on their own.

That decision can only be made by another human being.

The hardware and software of the deadly robots will be equipped with "safeties, anti-tamper mechanisms, and information assurances," and will ultimately be designed so that, above all, they operate consistently with the intent of the commander or operator running them. If a bot is unable to confirm with a human operator before engaging in any maneuvers, the Pentagon will not be purchasing or using it.

While it is somewhat reasonable to worry that advancements in robot autonomy might lead to this outcome, current systems are nowhere near the level of sophistication where these fears are actually viable. None of today's drones can fire their missiles without a human directing them to. However, that isn't stopping human rights organizations like Human Rights Watch from issuing reports warning that new developments in drone autonomy represent the demise of established legal and non-legal checks on the killing of civilians.

Carter's directive, while allowing the military to continue its forward march into autonomy, commits it to doing everything in its power to ensure that robots cannot kill you without a human being behind the decision. However, the directive does not apply to "unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator; mines; or unexploded explosive ordnance."

That means the same fail-safes being built into the drones are not being applied to pieces of code that can be used to gather data, or to the camera that can record 80 years' worth of video in a single day.


Are you more worried about the armed robots that can only kill you if a person tells them to, or about the fact that a piece of code that can wreck your life isn't being as closely regulated?