When You Trust Computers More Than Yourself

The Wall Street Journal published a great in-depth article this weekend about Oracle Team USA’s incredible come-from-behind victory in the America’s Cup last September. The New Zealand team had a commanding 8-1 race lead over Oracle Team USA and needed only one more win to claim the Cup, but Oracle Team USA rallied and won 8 races in a row to seal the victory.

How did the team do it? This quote from the article sums it up nicely:

With his team’s prospects getting dimmer by the hour, Mr. Spithill [the skipper of Oracle Team USA] decided it was time to stop obeying the computers and start thinking like sailors [emphasis mine].

The New Zealand team was consistently beating Oracle Team USA on the upwind leg of the races. Team Oracle’s computer program — called Velocity Performance Predictor — had established a target for the upwind leg: “sail into the wind at a relatively tight angle of about 42 degrees, which would produce the optimal mix of speed and travel distance.” The New Zealand team, however, was sailing at much wider angles to the wind (about 50 degrees, on average), which meant they were covering more water, but they were also reaching faster speeds — “more than enough to offset the greater distance travelled.” This was something Oracle’s computer program had not predicted.

Had team Oracle placed too much faith in the technology? Had its enormous budget lulled the team into overconfidence? Had Mr. Spithill gotten away from the lessons he had learned [growing up sailing] in Elvina Bay?

I encourage you to read the article for the complete story, but here are my takeaways and lessons learned.

If a computer is telling you one thing, but your eyes, experience, and gut are telling you something else, trust yourself more than the computer. To err is human — and that applies to the software engineers who develop computer programs. Simply put, computer models are not infallible. Feed them inaccurate data and they’ll give you back inaccurate results. They are math equations that try to model real-life conditions, but despite all the advancements in computing power and technology, you can’t perfectly model real life — it’s too complex, dynamic, and unpredictable.

This is a lesson airline pilots, for example, are learning the hard way. The Federal Aviation Administration (FAA) issued a report last November on how flight deck automation is affecting flight path management. As reported in Aviation Week:

One finding says that while automated systems have improved safety, pilots rely too much on them, continue to be confused by autoflight modes and “may be reluctant to intervene” [emphasis mine] when they are faced with a confusing, automation-related situation.

Last summer’s crash of Asiana Flight 214 in San Francisco is the most recent example. According to a CNN.com article:

Capt. Lee Kang Kuk, who was highly experienced in a Boeing 747 but was transitioning to flying a 777, told the National Transportation Safety Board that he found it “very stressful, very difficult” to land without the glideslope indicator that helps pilots determine whether the plane is too high or too low during approach.

Of course, airplane crashes are an extreme example of what could happen when you trust computers more than yourself, but the lessons apply in everyday life and business nonetheless.

Always question your assumptions and constraints. The Oracle team ultimately figured out what was wrong with its computer model: a constraint. “To get going fast enough upwind to get on the foils, the yacht initially had to sail at an angle that would force it to cover more water — something the computer wasn’t programmed to allow [emphasis mine].” When the constraint was removed and the wider angles were entered into the software, “the computer…recalculated the speed and showed the boat could sail faster that way, confirming what the sailors had found.”
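The trade-off the sailors discovered can be sketched with a toy velocity-made-good calculation. Everything here is hypothetical — the speed numbers and the 45-degree foiling threshold are invented for illustration, and the real Velocity Performance Predictor is far more sophisticated — but it shows why a model capped at 42 degrees could never see the payoff waiting at 50:

```python
import math

def boat_speed(angle_deg):
    """Hypothetical boat-speed curve in knots -- illustrative numbers,
    not real VPP data. The idea: the catamaran only rises onto its
    foils once it bears away past ~45 degrees off the wind, and its
    speed roughly doubles when it does."""
    if angle_deg < 45:
        return 15.0                              # displacement mode: slow
    return 15.0 + min(angle_deg - 45, 5) * 3.0   # foiling mode: much faster

def vmg_upwind(angle_deg):
    """Velocity made good to windward: boat speed projected onto the
    upwind direction. Wider angles cover more water per mile gained,
    which is the penalty the extra speed has to overcome."""
    return boat_speed(angle_deg) * math.cos(math.radians(angle_deg))

tight = vmg_upwind(42)  # the model's prescribed "optimal" angle
wide = vmg_upwind(50)   # the angle Team New Zealand actually sailed

print(f"42 degrees: {tight:.1f} kt made good upwind")
print(f"50 degrees: {wide:.1f} kt made good upwind")
```

With this curve, the wider angle wins handily despite the longer path — but a search constrained to 42 degrees or less would never evaluate it, which is exactly the blind spot the Oracle team found.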

Why was that constraint there? The article doesn’t say, but like many constraints, it probably represented a long-held assumption or practice. Normally, you wouldn’t take a 50-degree angle to the wind because it would add too much distance. But this wasn’t your typical sailboat. “Mr. Ellison decided to commission a new kind of boat,” the article states, “a decision that would turn the sport into something akin to Formula One on water.”

The bottom line is that we need to ask “What if?” more often. What if this constraint didn’t exist? What if we did the opposite of everybody else? What if we turned standard practice on its head? Removing constraints often sparks innovation. But so does adding constraints where they don’t exist. For example, what if you offered free shipping to customers, all of the time, regardless of the amount they spent? That is exactly what L.L. Bean introduced back in March 2011. At the time, company spokeswoman Carolyn Beem said there would be no price increases due to the initiative. But since shipping costs would go up for L.L. Bean, the company had to innovate elsewhere to offset those costs and maintain, or even improve, its profit margin.

In order to spark innovation, you have to get out of your comfort zone. And sometimes that means turning off the computer, turning off the autopilot, turning off the automation, and trusting your human senses and experience to make the right decisions to get you home and across the finish line first.

A version of this post was also published on LinkedIn.