Memory Storage

by rthieme on October 17, 1997

Islands in the Clickstream

I was disappointed when hour-long cartoons of Peanuts were made for television. I had been reading the comic strip for years, and when I read the words in balloons above the characters’ heads, I heard their voices inside my head as a kind of echo — the way you probably hear “my” voice inside your head as you read these words.

That voice — private, well-modulated, always just right — was replaced by a real child’s voice that didn’t sound right at all. It sounded like a child, a real child, not the Charlie Brown in my head. By providing too much information, the movie makers yanked Peanuts from the world of imagination and turned it into one more concrete thing in the world of sensation, a fetish stripped of its magical properties.

Computer engineers pay close attention to the world of sensation as they struggle to develop computers that act like human beings. The more they try, however, the more it seems they miss the mark. Artificial intelligence and robotics experts design crabs that scuttle around their labs like low-grade idiots. Few laypeople are excited when a robot distinguishes a cube from a ball and lifts it off the ground.

The best robots are designed for tasks, not to look like living creatures. Let them do their jobs, and we’ll provide the personality.

Decades ago, Joseph Weizenbaum of MIT became upset when an employee interacted with ELIZA, the simple interactive “therapist” he designed, as if ELIZA were a real person. His employee even asked him to leave the room so she could have a private conversation.

Weizenbaum was alarmed at the ease with which people projected personality and presence onto the computer. He thought it was bad, instead of just what’s so. Now two men from Stanford — Byron Reeves and Clifford Nass — have carried out some wonderful studies that reveal how and why we respond to computers as if they are real people (“The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places,” Cambridge University Press, 1996).

Their studies state the obvious, but — as usual — it was so obvious, we missed it. Our brains evolved to help us survive, and we react, unconsciously and automatically, as if something that looks or acts like a person IS a person. Our “top-level” program may say something else — “it’s only a movie,” for example, when we’re frightened during a horror flick — but that wouldn’t be necessary if we didn’t think it was real.

Artificial intelligence and virtual reality are not necessary to make us think a computer is smart. Less is more. Too much detail, too much information, overwhelms our imaginations.

Computers are inherently social actors, Nass said at a Usability Professionals Association conference. He used flattery as an example. “We’re suckers for flattery, even when we know it isn’t true.” So computer programs that flatter the user are consistently judged to be smarter and better at playing games, and users enjoy using them more. And … people ALWAYS deny that’s what they’re doing.

We act the way we act, not the way we think we act.

We need friends, we need allies, Nass explained. When researchers tied blue armbands around both users and computers and said they were a team, the users believed their computers were friendlier, smarter, better, just as we do about our human teammates. Again, no one knew they were doing it.

The voices of our computers — the ones we hear in our heads — are always just right. If designers simply provide the opportunity for projection and facilitate the transaction in a seamless way, we’ll do most of the work and add emotional richness and content. Get in the way too much, and it’s like that little paperclip guy in Microsoft Office, always in your face. I don’t know anyone who wants that animation dancing on their screen all the time like a fly you can’t swat.

The Infocom interactive text games from the 1980s were powerfully evocative. Games like Trinity, A Mind Forever Voyaging, and Hitchhiker’s Guide to the Galaxy used clever text and poetic imagery to invite us to co-create landscapes as magical as those I remember from children’s books. With larger platforms and memory devices, games evolved into interactive movies that shut down that process. When graphics dominate the interface, there’s less room for the activity of the imagination.

Children imagine so much, Eleanor Roosevelt observed, because they have so little experience. As our experience grows, the magical landscapes of our childhood vanish, replaced with interstate highways, convenience stores and power lines. A little more imagination and a little less information wouldn’t hurt. It gives our souls some room to maneuver. If computers provide just enough cues to elicit our projections, we’ll do the rest. We’ll endow distributed networks, human and non-human alike, with personality, presence, and intentionality as the ancient Greeks saw gods in every rock and grove and thunderstorm.

Cyberspace is “space” indeed, brimful of gods and goddesses, angels and demons waiting to become flesh. That’s neither good nor bad, it’s just what’s so. Digital deities are emerging now in the brackish tidewaters of cyberspace, where all life begins. If we accept responsibility for understanding how we co-create them, how we interact with the Net and the entire universe unconsciously and automatically, then we can cooperate with how our brains work anyway. They make up the game whether we want them to or not. “Out there” and “in here” are metaphors, defining preconditions of perception as “space.” The grid is imaginary, and the grid is real. That’s the playing field of our lives so we might as well learn the rules, then work and play with gusto and be all used up when the game is done.
