with an intelligent, human-driven machine, the bicycle+person, why would we ever think the situation would be any easier when the coordination must take place with an intelligent machine? The moral of this story is that we shouldn't even try. Smart machines of the future should not try to read the minds of the people with whom they interact, either to infer their motives or to predict their next actions. The problem with doing this is twofold: first, they probably will be wrong; second, doing this makes the machine's actions unpredictable. The person is trying to predict what the machine is going to do while, at the same time, the machine is trying to guess the actions of the person: a sure guarantee of confusion. Remember the bicycles of Delft. They illustrate an important rule for design: be predictable.
Now comes the next dilemma: which should be the predictable element, the person or the intelligent device? If the two elements were of equal capability and equal intelligence, it wouldn't matter. This is the case with the bicyclists and pedestrians. The intelligence of both comes from human beings, so it really doesn't matter whether it is the bicyclists who are careful to act predictably or the pedestrians. As long as everyone agrees who takes which role, things will probably work out okay. In most situations, however, the two components are not equal. The intelligence and general world knowledge of people far exceed the intelligence and world knowledge of machines. Pedestrians and bicyclists share a certain amount of common knowledge, or common ground: their only difficulty is that there is not sufficient time for adequate communication and coordination. With a person and a machine, the requisite common ground does not exist, so it is far better for the machine to behave predictably and let the person respond appropriately. Here is where the playbook idea could be effective, by helping people understand just what rules the machine is following.
Machines that try to infer the motives of people, that try to second-guess their actions, are apt to be unsettling at best, and in the worst case, dangerous.
Natural Safety
The second example illustrates how accident rates can be reduced by changing people's perception of safety. Call this "natural" safety, for it relies upon the behavior of people, not safety warnings, signals, or equipment.
Which airport has fewer accidents: an "easy" one that is flat, with good visibility and weather conditions (e.g., Tucson, in the Arizona desert), or a "dangerous" one with hills, winds, and a difficult approach (e.g., San Diego, California, or Hong Kong)? Answer: the dangerous ones. Why? Because the pilots are alert, focused, and careful. One of the pilots of an airplane that had a near crash while attempting to land at Tucson told NASA's voluntary accident reporting system that "the clear, smooth conditions had made them complacent." (Fortunately, the terrain avoidance system alerted the pilots in time to prevent an accident. Remember the first example that opened chapter 2, where the plane said, "Pull up, Pull up," to the pilots? That's what saved them.) The same principle about perceived versus real safety holds with automobile traffic safety. The subtitle of a magazine article about the Dutch traffic engineer Hans Monderman makes the point: "Making driving seem more dangerous could make it safer."
People's behavior is dramatically affected by their perception of the risk they are undergoing. Many people are afraid of flying but not of driving in an automobile or, for that matter, of being struck by lightning. Well, driving in a car, whether as driver or passenger, is far riskier than flying as a passenger on a commercial airline. As for lightning, well, in 2006 there were three deaths in U.S. commercial aviation but around fifty deaths by lightning. Flying is safer than being out in a thunderstorm.