David Weinberger Thinks We’ll Want Self-Driving Cars to Have the Ability to Murder Their Occupants

David Weinberger wrote an article on self-driving cars with the clickbait title "Should Your Self-Driving Car Kill You to Save A School Bus Full of Kids?" According to Weinberger, society will have to explicitly program some sort of moral code into such automated safety systems.

Weinberger imagines the following situation once self-driving cars become common,

It’s the near future and you’re reading this on your way to work in your self-driving car. The human driver of the car in front of yours slams on the brakes. Your car’s reaction time is roughly the speed of light, so it has time to realize that the stopping distance is too short and to see that the lane next to you is empty.

A quick swerve barely interrupts your morning browse of the headlines. The system works.

He then compares this to what might happen further on down the road when all cars are self-driving,

Now it’s ten years later. Human-driven cars have been banned from the major commuter routes because they’re unsafe at any speed. Wouldn’t you know it, but exactly the same situation comes up. This time, though, your car accelerates and slams itself into a nearby abutment, knowing full well that the safety equipment isn’t going to save you.

Your car murdered you. As it should have.

Once cars are networked, it would be immoral and irresponsible to continue to take self-preservation as the highest value.

In this second scenario, your self-driving car has determined that swerving would endanger a busload of children, and so elects to sacrifice you. There are a number of problems with this.

1. This is an extremely confusing set of scenarios.

If human-driven cars are banned from commuter routes, then the car in front of you shouldn’t be slamming on its brakes as the human in the original scenario did. With the entire system apparently networked, the cars should be able to communicate information to prevent this scenario from occurring–after all, additional safety would probably be the major reason to switch to entirely self-driven cars.
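
For what it's worth, the kind of coordination a fully networked road implies is not exotic. Here is a hypothetical sketch of a hard-braking broadcast; the message fields and names are illustrative assumptions, not any real vehicle-to-vehicle standard:

```python
# Hypothetical sketch of a vehicle-to-vehicle "hard brake" broadcast.
# The decelerating car announces its intent as it brakes, so followers
# begin slowing instead of discovering the stop visually.
# Field names are made up for illustration, not any real V2V protocol.

import json
import time

def hard_brake_message(vehicle_id: str, position_m: float, decel_mps2: float) -> str:
    """Message the braking car broadcasts to vehicles behind it."""
    return json.dumps({
        "type": "HARD_BRAKE",
        "vehicle_id": vehicle_id,
        "position_m": position_m,    # position along the lane
        "decel_mps2": decel_mps2,    # planned deceleration
        "timestamp": time.time(),
    })

def should_brake(msg: str, my_position_m: float, my_speed_mps: float) -> bool:
    """Follower starts braking if the stopping car is ahead and close."""
    data = json.loads(msg)
    gap = data["position_m"] - my_position_m
    stopping_distance = my_speed_mps ** 2 / (2 * data["decel_mps2"])
    return 0 < gap < stopping_distance * 1.5   # 1.5 is an illustrative safety margin

msg = hard_brake_message("car-42", position_m=120.0, decel_mps2=6.0)
print(should_brake(msg, my_position_m=80.0, my_speed_mps=30.0))   # True
```

On a road where every vehicle sends and honors messages like this, the sudden, unforeseeable stop that drives Weinberger's thought experiment should be vanishingly rare.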

Weinberger responds to these sorts of criticisms in the comments to his article by essentially saying "no system is perfect." So, allowances will have to be made for these sorts of situations,

But we have yet to see a system that avoids all situations that require unpleasant accidents. So the cars will have to be programmed to respond just in the unlikely case that something unexpected happens. No?

2. No. We should not program cars in this way for those sorts of situations.

Weinberger is correct that programming a system that prevents all possible accidents is probably not possible. But this argument proves too much.

If it is true that there is no such system, it is also the case that there is no complex software that can be certified completely free of bugs or errors. We know of numerous cases in which tiny errors in programming have led to dozens of deaths.

In 1991, for example, 28 U.S. soldiers were killed and another 100 injured when the Patriot anti-missile system protecting them failed to intercept a SCUD missile fired by Iraqi forces. The cause of this failure turned out to be a small bug in the software/hardware of the Patriot system,

It turns out that the cause was an inaccurate calculation of the time since boot due to computer arithmetic errors. Specifically, the time in tenths of second as measured by the system’s internal clock was multiplied by 1/10 to produce the time in seconds. This calculation was performed using a 24 bit fixed point register. In particular, the value 1/10, which has a non-terminating binary expansion, was chopped at 24 bits after the radix point. The small chopping error, when multiplied by the large number giving the time in tenths of a second, led to a significant error.
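
To get a feel for how small the error was and how consequential it became, here is a minimal sketch of that arithmetic in Python. The 24-bit register width and the roughly 100-hour uptime of the battery are the figures reported in published analyses of the incident; everything else is plain arithmetic.

```python
# A minimal sketch of the Patriot timing bug described above.
# Assumption: the clock ticks in tenths of a second and the constant 1/10
# is stored chopped (truncated, not rounded) to 24 fractional bits.

FRACTIONAL_BITS = 24

# 1/10 as stored in the 24-bit fixed-point register (truncated)
stored_tenth = int(0.1 * 2**FRACTIONAL_BITS) / 2**FRACTIONAL_BITS

# Error introduced every time the conversion is applied
per_tick_error = 0.1 - stored_tenth          # ~9.5e-08 seconds per tick

# The battery had reportedly been running for about 100 hours
ticks = 100 * 3600 * 10                      # clock ticks in 100 hours
accumulated_error = ticks * per_tick_error   # ~0.34 seconds

print(f"per-tick error:    {per_tick_error:.3e} s")
print(f"accumulated error: {accumulated_error:.3f} s after 100 hours")
```

Roughly a third of a second of clock skew was enough to shift the system's range gate so far that the incoming missile was no longer where the radar expected to find it.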

The sort of system that would track and prioritize in real time the lives of self-driving vehicle passengers would be several orders of magnitude more complex than the sort of software used to shoot down SCUD missiles. It would be impossible to build such a system without a strong possibility of bugs and unpredictable emergent behavior. In unforeseen circumstances, it would likely cause a greater loss of life than human drivers would, despite their far poorer access to information.

Additionally, merely proposing such systems in earnest might delay or altogether deter the adoption of self-driving cars, which would likely cost more lives in the long term. For example, a system in which a self-driving vehicle prioritizes the safety of its passengers may indeed produce situations where a busload of children is put in danger. However, that hypothetical risk would likely be much smaller than the known risk today, where such accidents occur with alarming regularity.

3. The TCAS Example.

Weinberger doesn’t mention it in his article, but there is already a widely deployed automated system for protecting travelers from collisions. That system is used by airplanes and is called the Traffic Collision Avoidance System (TCAS). The benefits and limits of TCAS provide some real-world insight into how software designed to prevent automobile collisions might actually work.

TCAS is designed to mitigate the risks of mid-air collisions between planes. The system monitors the airspace around a plane for potential collisions. In the event that two TCAS-equipped planes appear to be in danger of colliding,

The next step beyond identifying potential collisions is automatically negotiating a mutual avoidance maneuver (currently, maneuvers are restricted to changes in altitude and modification of climb/sink rates) between the two (or more) conflicting aircraft. These avoidance maneuvers are communicated to the flight crew by a cockpit display and by synthesized voice instructions.

So if two TCAS-equipped planes are on a collision course, the TCAS units in each plane communicate and agree on a course of action to avoid the collision. The pilot of one plane is then instructed to increase altitude while the pilot of the other is instructed to decrease altitude, thus averting the collision. Currently, human pilots have to physically perform the maneuvers, but the TCAS computers alone decide what actions each plane should take.
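
The core of that negotiation is conceptually simple. Here is a toy sketch, assuming each unit knows both aircraft altitudes and breaks ties by transponder address; this is an illustrative simplification, not the actual TCAS II coordination logic:

```python
# A toy sketch of complementary resolution advisories, NOT the real TCAS II
# algorithm.  Assumptions: each aircraft knows its own and the intruder's
# altitude, and ties are broken by transponder address so that both units
# independently reach the same, complementary decision.

from dataclasses import dataclass

@dataclass
class Aircraft:
    address: int        # transponder address, used only for tie-breaking here
    altitude_ft: float

def resolution_advisories(a: Aircraft, b: Aircraft) -> dict:
    """Pick complementary vertical maneuvers so the two aircraft diverge."""
    if a.altitude_ft != b.altitude_ft:
        higher, lower = (a, b) if a.altitude_ft > b.altitude_ft else (b, a)
    else:
        # Same altitude: a deterministic tie-break keeps the advisories complementary.
        higher, lower = (a, b) if a.address < b.address else (b, a)
    return {higher.address: "CLIMB", lower.address: "DESCEND"}

print(resolution_advisories(Aircraft(0xA1, 26000), Aircraft(0xB2, 25500)))
# {161: 'CLIMB', 178: 'DESCEND'}
```

The point is not that a toy like this is adequate; it is that even the real, far more careful version of this logic can be fed bad data, as the erroneous altitude readout in the incident report below shows.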

As Wikipedia notes, although safety studies show that TCAS dramatically improves safety, its design might itself someday cause an accident. For example, what if a pilot is told to climb when his airplane is already at its maximum safe altitude? Or, conversely, what if TCAS tells the pilot to descend to the point that his aircraft would be dangerously close to the ground?

And TCAS–which is much simpler than the sort of system Weinberger imagines–does make mistakes. For example, consider this report from a pilot who received an apparently erroneous TCAS command that could have led to an accident,

“Our flight [air carrier X]…was at flight level 260…We observed a TCAS II advisory [TA] of traffic at 12 o’clock, 1,000 feet above at about 15 miles on an opposing heading. Shortly after, we observed traffic on [the] TCAS II display descend from 1,000 feet above to 500 feet above. TCAS II commanded a descent of at least 2,000 feet per minute to avoid traffic.

“We queried…[ATC about the] traffic. They told us we had an air carrier jet (Y) 1,000 feet below us on a converging heading…. At about the same time we visually acquired air carrier (Y) about 500 to 1,000 feet below our altitude. [The] Controller confirmed he was assigned flight level 250. We observed no traffic above, nor did the Controller have any traffic above us. Our TCAS II continued to command a descent and continued to show…[a] traffic conflict 500 feet above us. [The] Controller advised that air carrier (Y)’s Mode C did momentarily show 26,500 feet and then returned to flight level 250 on their scope. We had altered course slightly to the right to offset [the] conflict, but did not follow the TCAS II RA. If we had followed the TCAS II RA we, in my opinion, would have impacted the opposing aircraft.” (ACN 210599)

Again, these systems are complex enough, and prone to problems, even when they are limited to negotiating basic avoidance maneuvers. Adding in some formula about which plane's passengers are more valuable is a far more complex proposition, and one that will probably never be attempted.

4. Reinforcing Social Inequities With Objective Criteria.

Finally, there is already a system that attempts to decide who should live and who should die by the sort of objective criteria that Weinberger imagines: the framework created in the US by the United Network for Organ Sharing (UNOS) for prioritizing the limited supply of organs available for transplant.

When considering whether or not an individual should be listed in the UNOS database as a candidate for a kidney transplant, for example, one of the criteria that some medical teams take into account is how likely the patient is to consistently take post-operative immunosuppressant drugs. If a patient doesn’t consistently take such drugs, the effective lifespan of the transplanted organ can be dramatically reduced.

At first glance this would seem to be a fairly straightforward criterion for selecting candidates for organ transplants. Imagine we have two patients who need a kidney transplant, and a kidney has just become available. Doctors determine that Patient A is likely to take the drugs consistently for only three years, while Patient B is almost certain to take them consistently for at least 20 years. The maximum social benefit would be to award the organ to the patient likely to take the drugs longer.

However, this may just reinforce existing racial and poverty inequities,

The costs of post-transplant medications pose a real and significant barrier to successful organ transplantation based on the socioeconomic circumstances of the recipient. This barrier is not neutral; the wealthy do have an edge and the poor are not guaranteed an equal opportunity to live. In some cases, these costs prevent patients who are otherwise medically good candidates for transplantation from making it onto the national deceased organ donor waiting list, either by their own choice or based on the recommendations of their health care team. Those who do get on the waiting list and receive a deceased donor organ transplant but cannot in the end afford the necessary medication will inevitably experience organ failure. Among the survivors, some will go back on dialysis and possibly back on the national deceased donor organ waiting list. Many will die while waiting on the list; others will simply wait to die. Poverty is not only a significant barrier to organ transplantation, it is in effect a de facto contraindication for it.

In a similar vein, imagine our car is out of control and will unavoidably crash into one of two buses, each carrying 100 people.

Our networked system quickly calculates that, based on everything it knows, the people on Bus A collectively have 1,600 additional years of life left among them. The folks on Bus B, however, have a meager 1,100 additional years left among them (almost a third less than Bus A). Since 1,600 > 1,100, our automated system decides to sacrifice Bus B.

Congratulations, our networked system is now racist. The scenario above occurs if all of the passengers on both buses are 60-year-old men, and the passengers on Bus A are all white while the passengers on Bus B are all black. It is certainly possible to avoid these sorts of outcomes, but not without making the system ever more complex and likely reducing its ability to make these sorts of calculations in the first place.
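
To see how little machinery it takes to bake that bias in, here is a hypothetical sketch of the "minimize expected life-years lost" rule implied above. The remaining-life figures are the illustrative numbers from the bus scenario, not real actuarial data.

```python
# Hypothetical sketch of a "minimize expected life-years lost" rule.
# The remaining-life-expectancy figures are the illustrative numbers from
# the bus scenario above, not real actuarial data.

def expected_years_lost(passengers: list) -> float:
    """Total remaining life expectancy across all passengers on a bus."""
    return sum(passengers)

# Two buses of one hundred 60-year-old men, differing only in the
# demographic group the system's life-expectancy tables assign them to.
bus_a = [16.0] * 100   # 1,600 collective years remaining
bus_b = [11.0] * 100   # 1,100 collective years remaining

# The "utilitarian" rule sacrifices whichever bus costs fewer life-years.
sacrifice = "Bus A" if expected_years_lost(bus_a) < expected_years_lost(bus_b) else "Bus B"
print(sacrifice)   # Bus B -- the bias baked into the input tables decides the outcome
```

Nothing in the decision rule itself looks discriminatory; the discrimination rides in on the data the rule is asked to optimize over.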

Summary.

If we ever reach a point where all cars are self-driving, we should stick with a basic “take reasonable steps to preserve the safety of my current passengers” design. Asking our cars to be Platonic philosopher kings is a step too far.
