Stanford Professor Clifford Nass has been studying human-computer interaction for years. He and his fellow researchers have noticed that even technically sophisticated people tend to react to computers as if they were human. For instance, when the car manufacturer BMW installed advanced navigational computers in some of its cars a decade ago, it had to recall them, not because the devices didn't work well (they worked just fine), but because the company received too many irate phone calls from German men who insisted they would never take directions from a woman. It didn't matter that it was only a woman's voice and that the computer itself was gender-neutral; the voice had to be changed.
The researchers also noted that when people used a teaching program on a computer for half an hour and were then asked to evaluate the program, the results depended on which computer they used. If it was the same machine they had used for the instruction, the marks were more favorable than if they did the evaluation on another machine. It was almost as if the evaluators didn't want to hurt the computer's feelings.
So developers have been building on this human tendency, trying to make computers seem friendlier and more trustworthy. Nass believes that in some cases they've done a remarkable job. In a recent Wall Street Journal article he writes,
Indeed, we may be reaching the point at which our technologies are actually more socially effective than our colleagues…. It would be ironic if in the future, people will be turning to computers to learn how to win friends and influence people rather than the other way around.
I’m keeping an open mind on this. I know plenty of humans who aren’t that socially adept, but I haven’t yet interacted with a computer that sophisticated. When I encounter a phone menu, I appreciate a friendly computer voice, but I usually try to get to a real person as soon as possible. That doesn’t mean I doubt Nass’s word. Eventually I plan to read more about his work in his upcoming book, The Man Who Lied to His Laptop.
What do you think?