Knock Knock

by Richard Thieme

published as “Mapping the Human Heart” in Internet Underground, January 1997

“Who are you?” said the caterpillar.

Lewis Carroll, Alice’s Adventures in Wonderland

“How can it not know what it is?”

Deckard, Blade Runner

Blade Runner.

A film about replicants, genetically engineered androids, machines that look and act human — so human, in fact, they develop an instinct for survival. Driven by a newfound fear of mortality, the narrator explains, some replicants are rebelling and need to be “retired.” Deckard is the specialist or blade runner given the task of killing them.

Rachael is a beautiful woman who works for the Tyrell Corporation, the company that makes replicants. Eldon Tyrell, the company’s founder, invites Deckard to run an “empathy test” on Rachael to determine whether or not she’s human.

It takes many more questions than usual for Deckard to conclude at last that Rachael’s a replicant. This confuses him. Deckard knows replicants, and he’s never run into one that believes it’s a human being.

“She doesn’t know,” he muses. Then, “How can it not know what it is?”

Tyrell explains. This new model is given implants of false memories supported by faked photographs. That artificial history cushions the shock of real life experience when the replicant opens its eyes. Rachael “remembers” those memories and thinks she’s been alive for years. That keeps her sane.

The illusion of memory … the seamless interface of an artificial and a real self … a conspiracy of consensual silence … enable Rachael to live a lie.

How can it not know what it is?

Deckard poses the central philosophical question of the film, and later — sitting in his apartment, surrounded by photographs — he too will have to face the evidence that he may be a replicant that didn’t know what it was.

How could Deckard not know what he was? That’s the stuff of science fiction. But how about you? In an age of biometric identifiers and genetic engineering, do you know who — or what — you are?

The illusion of memory … the seamless interface of an artificial and a real self … a conspiracy of consensual silence.

This is not science fiction.

This is your life.

For several generations, we have accepted the existence of artificial limbs, implants, prosthetic devices. But we always thought of our “selves” as different from that stuff. We may not know what we are but we know we are “not that.” That is machinery. Nuts and bolts. Add-ons. We are the ghosts in the machine.

But who are we?

First, who do they think you are?

We are increasingly using biometric identifiers — fingerprints, voiceprints, iris and retina scans, hand measurements and signature dynamics — to say who human beings are and aren’t. Biometric identifiers are further blurring the gray area between human beings and machines.

Like every first step, the use of biometric identifiers emerged out of known technologies. Fingerprints have been used to identify people for years; now the process has moved from inked images on paper to electronic fingerscans stored as digital images. As the cost of technology falls, the uses for fingerprint scanning and other biometric identifiers are widening.

The justification for expanded use of digital identifiers linked and accessible in global databases is always security and efficiency. Fingerprint scanning is sold as a reliable, increasingly inexpensive way to control access to everything from computer workstations to secure buildings.

The Veriprint system, for example, sold by Biometric Identification, is claimed to have false acceptance and rejection rates of 0.001 and to verify identity in less than a second. Studies show that people are willing to give fingerprints readily when they believe the context justifies the request. Fingerscans are showing up now on smartcards, at ATMs, and on computer networks.
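Read as a fraction, a false acceptance rate of 0.001 would mean roughly one impostor attempt in a thousand wrongly accepted. The sketch below is only an illustration of how such a verifier reaches its yes-or-no decision; it is not Veriprint’s algorithm, and the feature vectors, similarity score, and threshold are hypothetical stand-ins.

```python
# Toy sketch of threshold-based biometric verification. Everything here is
# made up; the point is the tradeoff: raising the threshold lowers false
# acceptances (impostors let in) but raises false rejections (genuine users
# locked out).

def similarity(enrolled: list[float], live: list[float]) -> float:
    """Hypothetical match score in (0, 1]; 1.0 means identical feature vectors."""
    distance = sum(abs(a - b) for a, b in zip(enrolled, live))
    return 1.0 / (1.0 + distance)

def verify(enrolled: list[float], live: list[float], threshold: float = 0.9) -> bool:
    """Accept the live scan only if its match score clears the threshold."""
    return similarity(enrolled, live) >= threshold

# Usage with made-up feature vectors:
enrolled_print = [0.12, 0.87, 0.40, 0.66]
same_finger    = [0.13, 0.86, 0.41, 0.65]   # small sensor noise
other_finger   = [0.70, 0.20, 0.90, 0.10]   # a different person
print(verify(enrolled_print, same_finger))   # True
print(verify(enrolled_print, other_finger))  # False
```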

We’re a little less eager to let machines move close to our eyes for an iris or retina scan.

EyeDentify 2001 says they’re working on that “intrusion factor.” EyeDentify can acquire the vascular pattern at the back of the retina from a few inches away. They hope to move further still, toward something more like the electronic signal from an automobile that verifies an access code as we drive up to a gate.

EyeDentify has been marketing devices for scanning the unchanging vascular pattern at the back of the retina since 1982. Familiar from sci-fi movies, such devices guard access to military and intelligence facilities around the world.

EyeDentify says retina scanning is better than fingerscanning or voiceprinting because retinal characteristics can’t be lifted or audio taped for matching or tracking a person. They seldom use encryption because the vascular pattern can be observed only when a person is present for scanning.

The use of biometric identifiers to control access or protect property and data is widely accepted, according to surveys by Dr. Alan Westin of Columbia University. Top-down use to monitor or control populations is another story.

According to Ann Cavoukian, assistant commissioner for Information and Privacy in Ontario, Canada, and co-author of “Who Knows?”, the line between necessary or defensible security and the widespread availability to governments or businesses of linked and accessible data, all tied together by a universal identifier like a fingerscan, is one that must not be crossed.

“In cases in which the individual voluntarily chooses to identify oneself to cash a check, log onto an Internet account, or enter a secure area, the use of biometric identifiers is clearly appropriate,” she said. “But when you start talking about a mandatory government identity card with an identifier on it, you move from a situation in which your control and security is enhanced to one in which it is surrendered.”

Privacy advocates are increasingly distressed by “function creep,” which happens when data or identifiers are gathered and linked for one purpose, then used for another.

Cavoukian thinks the same technology that threatens to imprison us in an electronic panopticon can be used to secure our freedom as well. As with correspondence, document transfer, and commerce on the Internet, the key to that security is encryption.

Eric Hughes, founder and chief mentor of Cypherpunks, a widespread grassroots effort to keep encryption powerful and freely available, thinks this is the direction we need to go. Because we don’t know all the purposes to which “the street” will put technology, we need to use the technology proactively to maintain control of our lives. The only thing preventing involuntary transparency in every transaction of our lives is strong encryption.

Negative reaction to programs like Netscape’s “cookies,” which secretly capture and retain information about our online sessions, ought to tell us what to expect when “sniffer” programs hidden on networks can capture the fingerscan of a user as she lays her finger on a key to gain access to the network.

What will stop an identity thief from using that stolen scan to impersonate us?

“Good question,” said George Tomko, President and CEO of Mytec in Ontario, a company competing with Unisys Canada for the welfare fraud contract. “The answer is biometric encryption.”

Mytec’s system of biometric encryption does not store the fingerscan. Instead, the scan is turned into a “Bioscrypt,” a coded number consisting of a set of alphanumerics scrambled with the pattern of the scan. The Bioscrypt does not look like a fingerscan nor can it be converted into one. The Bioscrypt is descrambled when a person swipes their live finger across the input lens of an optical computer. The matching of the fingerscan and the descrambled Bioscrypt confirms an individual’s identity.

Mytec’s Bioscrypt is a biometric version of Whitfield Diffie’s public key encryption (Pretty Good Privacy is a well-known example). People can identify themselves without the identifier being linked to other data or to their core identity. The Bioscrypt is the equivalent of the private key while the fingerscan is equivalent to the public key. Bioscrypts hide the uniqueness of an individual and the identifier that would turn a security device into an apparatus for surveillance and control.
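A minimal sketch of that idea follows. It is not Mytec’s actual algorithm: the XOR scrambling, the hash check, and the assumption that the enrolled and live templates are bit-for-bit identical are all simplifications (a real system must tolerate small differences between scans of the same finger, typically with error-correcting codes).

```python
# Toy "bioscrypt": a random secret key is scrambled with a biometric template,
# and only a matching template can descramble it. Neither the raw fingerscan
# nor the key is stored in the clear, so the stored record by itself does not
# reveal the individual's uniqueness.

import os
import hashlib

def enroll(template: bytes) -> tuple[bytes, bytes]:
    """Scramble a fresh random key with the template; store only the result
    plus a hash of the key for later verification."""
    key = os.urandom(len(template))
    bioscrypt = bytes(k ^ t for k, t in zip(key, template))
    return bioscrypt, hashlib.sha256(key).digest()

def verify(bioscrypt: bytes, key_check: bytes, live_scan: bytes) -> bool:
    """Descramble with the live scan; identity is confirmed only if the
    recovered key hashes to the enrolled value."""
    candidate = bytes(b ^ s for b, s in zip(bioscrypt, live_scan))
    return hashlib.sha256(candidate).digest() == key_check

# Usage with a made-up, perfectly repeatable "template":
template = b"quantized-fingerprint-features!!"
stored_bioscrypt, stored_check = enroll(template)
print(verify(stored_bioscrypt, stored_check, template))                  # True
print(verify(stored_bioscrypt, stored_check, b"a different template!!")) # False
```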

“Hiding uniqueness is the key,” says Tomko.

So “they” are doing their best to know who we are by reducing us to a collection of attributes, behaviors, and markers and connecting all our digitized data through a single unifying identifier.

The problem is, we are doing it to ourselves as well. We are starting to think of ourselves as if we are digital data, as if “bits ‘r’ us.” What will that do to how we think of ourselves?

Who do we think we are anyway?

We used to identify ourselves by something we owned — a driver’s license, a social security card. Then we identified ourselves by something we remembered or knew — a PIN, a password. Now we identify ourselves by measuring a particular physical or behavioral characteristic and letting it stand for us in the world of social, political, and economic activity.

We collude with the forces that turn part of us into all of us.

We did something similar with photographs. Photography made picture IDs possible, but we turned the abstract photograph — a representation of ourselves in light and dark — into something so “real” that we took its word rather than ours.

“Pictures don’t lie,” we used to say. Given a choice between a picture and a first-person narrative of an event that contradicted the picture, we believed the picture. The picture was “objective,” a person’s account was “subjective.”

We derive internal images of ourselves from our interaction with the products of our own technology, then forget that we created them in the first place.

We spoke of our lives as “open books,” for example, or said we knew people “from a to z.” We talked of “turning over a new leaf” or “starting a new chapter” in life. The textual origin of our metaphors was forgotten.

Today our internal models of reality — including ourselves — are being transformed into digital images. We are learning to think of our “selves” as images hyperlinked to other images instead of being “cogs in a machine.” When we work collaboratively on a project shared on a network, we imagine our “selves” as transitory images passing through one another’s minds or through the hive mind of collective humanity like pixels constellating momentarily on a monitor. The experience of interacting with the digital world is changing us in essential ways.

Once we imagine ourselves as digital images, the implications of biometric identifiers become significant in a different way. It is no longer a question of the self finding privacy, but a question of the very nature of the self that wants privacy, and even whether that self — ever more fluid, ever more protean, ever more hyperlinked to other selves — can experience privacy the way we once believed we did.

Once images became digital, and we could translate anything — sound, images, documents — into zeroes and ones and reconstitute them on the other side of the wire — then pictures of every kind became lies, or at the least possible lies.

Remember the fairy tale of the forester who found a gnome in the forest and made him identify the tree under which gold was buried? The forester tied his scarf around the tree and went to get a shovel, but first made the gnome promise not to remove the scarf. The gnome promised, and when the forester returned, sure enough, the gnome had kept his word. He had simply tied identical scarves around every other tree in the forest.

That’s the digital world.

When a country issues a new identity card, it takes digital thieves on the back streets of Asia twenty-four hours to make perfect counterfeits.

I have watched a “live feed” of a television broadcast altered just enough by electronic manipulation to make a political candidate look sick or dishonest.

How can we know if the bits that constitute our “passport data” to gain entry into the social and economic world of digital goods and services have been altered or lost? What happens if we are both the electronic manipulator and the image being manipulated?

On a societal or cultural level, the phenomenon of tourism illuminates how identity is recursive, how our “real selves” go into hiding when our lives must be lived on stage. The same thing happens to individuals once they become digital images of themselves in a virtual world.

“Touristic space” is space bracketed by a society or culture and presented as a microcosm of itself for visitors to explore. Natives may act out roles in the presentation but their roles are never the roles they play in “real life.” A Hawaiian, for example, may dance at a luau for tourists, but does not dance at home except to practice for that touristic experience. Tourists are meant to believe that the presentation is a real event, but everyone who lives in the culture knows it isn’t.

In their search for a “real” experience of “real culture,” then, tourists began visiting people in “real life” — staying with families in homes instead of hotels, touring factories and “living farms.” Tourists wanted to see real people doing real work. But immediately the real people withdrew inside the presentation; the factory workers played roles in a skit called “factory life.” Even if the tourist lingered and observed the workshop where the props for their touristic experience were manufactured, they would never quite glimpse the natives in their natural habitat, because the very presence of the tourist and the act of observation change the behavior of the native and distort what is seen.

Touristic experience shows us that identity is nested in levels of authenticity like Chinese boxes. No non-native can ever enter the authentic core experience of the native.

The danger of the digital world prefigured in biometric identifiers is that it will become a touristic space created by those who live in it and presented to themselves as if it is real, while it is in fact what Baudrillard calls a simulacrum — a copy of a copy without an original.

When biometric identifiers — behaviors such as how our voices sound, how we write our names, how we spend our money, and attributes such as our fingerprints, our eyes and faces, the shapes of our hands — are cross-referenced with other data about us in a vast panoptic sort, those identifiers become more than digital constructs; they literally become us. Those constructs become who we are so far as other people are concerned; they become our social, economic, political selves. We are reduced to being tourists in our own homes.

The necessity for “hiding our uniqueness,” as Tomko said, is the key to maintaining control over our identities.

Have we wandered too far from the innocent world of biometric identifiers? I don’t think so. Those identifiers place layers of abstraction between others and ourselves and between ourselves and our own real experience of ourselves. They are a source of abstract data about us that the world considers more real than flesh and blood or anything we might say about ourselves.

Which do you think will be considered “real” — the scan of your hand or iris that refuses to unlock a door, or your indignant assertion that the scan is wrong?

When the ghost in the machine is a digital construct, the ghost and the machine become indistinguishable. We experience “cyborg creep” — the gradual, relentless and inevitable transformation of human beings into machines who don’t know who or what they are.

We look at old photos and describe the event they represent … until a sibling says, “That’s funny — I don’t remember it that way” …

We look at our plastic hand through contact lenses, our blood pumping through an artificial heart …  or we look at digital images of ourselves in our own minds, images and minds alike back-engineered from the digital world they have created …

The illusion of memory … the seamless interface of an artificial and a real self … a conspiracy of consensual silence…

“Who,” asked the caterpillar, “are you?”

Alice replied, rather shyly, “I hardly know, just at present — at least I know who I was when I got up this morning, but I think I must have been changed several times since then.”

Isn’t Alice silly?

How can it not know what it is?
