Chief Cypherpunk: An Interview with Ian Goldberg

by rthieme on October 5, 2001



Crypto star Ian Goldberg moved from academia to industry, but his passion remains “to live in a world where I can communicate securely and privately.”

INTERVIEWED BY RICHARD THIEME
You’ve been chief scientist at Zero-Knowledge Systems since 1998. What exactly does the chief scientist do?

I do research into security and privacy technologies and encryption. I develop and analyze protocols for new products, and we work out which technological directions we want to pursue, which are feasible now, and which need more work before they can exist. I work with various groups in the company to analyze the technology they are producing, on both the consumer side and the enterprise side.

Whom do you work with? Do you have a team of peers like you did at Berkeley, when four of you started the ISAAC [Internet Security, Applications, Authentication and Cryptography] group, or are you directing R&D through a staff?

It’s more that the other groups in the company come to my group, Zero-Knowledge Lab, when they need technological work, like protocol analysis, security or crypto.

You come from an academic setting that allowed a greater degree of rigorous accountability to technical standards. Do you have to make compromises?

You always have to make compromises, even in academia. But as challenges come up from those trying to invade our privacy, we will meet those challenges and continue to protect consumer security and privacy.

Are C-level executives seeing that there are dollar costs to not managing privacy?

On a fundamental level, I would say so. Clearly that’s not the only reason to provide privacy for customers, but, for most businesses, it’s a strong one. They see that they will have better customer relationships if they help their customers manage their private data.

There are also liability issues.

That’s right. If a corporation has a large number of credit cards on file and they’re stolen, that’s a big issue. If the corporation arranges things with greater privacy, so that it doesn’t keep that list at all or keeps it only in encrypted form, it protects itself from exposing its customers’ information.

Where do you see that pressure coming from? Insurance companies, the SEC, financial services, the marketplace?

All of those. Broad societal pressure is pushing companies toward implementing privacy policies. There are government pressures through laws like Bill C-6 in Canada, HIPAA in the United States and the E.U. Data Protection Directive. Companies have to abide by them, if for no other reason than that it’s the law.

Zero-Knowledge Systems is based in Montreal, and you are Canadian. Does the culture make a noticeable difference? It seems that there was an earlier and perhaps a higher consciousness about privacy issues in Canada.

Canada definitely has a more explicit stance on privacy than the United States. We have privacy commissioners in the provinces, which have no parallel in the United States. We have a government that believes it’s their job to protect the privacy of individual citizens. This is great. We work with people in those offices and get a lot of support from them. Privacy is the right to be left alone, and we see more emphasis on that in Canada.

Let’s get back to what excites you. What elicits your passion these days comparable to the passion you developed for security, privacy and crypto at the University of Waterloo as an undergraduate and Berkeley as a graduate student?

It’s the same thing, really. I still do research analyzing widely deployed public systems to verify their security. I personally want to live in a world where I can communicate securely and privately.

And you perceived that was at risk.

That’s right. If protocols come out that don’t have good public analysis or aren’t strong against attack, it’s a serious problem. As you mentioned, companies that deploy systems do it based on the kind of buy-in they can get and on return on investment, not on the best security. So we often see security systems deployed in consumer devices where the privacy protections are not the best. We want, at the very least, people to take note, so that when someone hands you a cellphone and says, “this is secure and can’t be eavesdropped on,” you know how to interpret that statement.

What’s the nature of government influence on that process? Is the process sufficiently open?

I have never been in on one of the cellphone standards meetings. We know people from governments are present, but we don’t know what went on because it’s all behind closed doors. This is one reason we encourage an open development process for standards. If a government does come in and say it needs this or that, it should say so in the open.

This is about open design: you can have an open protocol without having the source code of the implementation open. They don’t have to give out their implementations to give out the protocol, so we can analyze it and see if it is fundamentally insecure. Maybe it’s supposed to be secure, but the implementation isn’t correct.

The difference between a protocol and an implementation is like the difference between a blueprint and a building. The protocol is like the blueprint. If there’s something wrong with the design of the system on paper, then you can’t build it well. Even if you build it exactly to spec, if your building is missing a supporting wall, it will collapse. On the other hand, if the blueprint is exactly right, you might still make a mistake when you’re building the thing. Open source has to do with looking at the building after it’s built. I advocate, in addition, an open development process, which means going to the meetings where you decided to put in or remove this or that wall, which might save money but result in the collapse of the building. Those meetings where things are decided should be open.
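
To make the blueprint analogy concrete, here is a minimal C sketch (the function names are hypothetical, not drawn from any system Goldberg analyzed). Both functions follow the same “blueprint”: accept a message only if its MAC matches the one computed locally. The first builds it with memcmp(), which returns at the first mismatched byte and so leaks timing information an attacker can measure; the second touches every byte regardless.

    #include <stddef.h>
    #include <string.h>

    /* Flawed implementation of the spec: memcmp() exits early at the
     * first mismatching byte, so an attacker who can time many guesses
     * can recover a valid MAC one byte at a time. */
    int mac_equal_broken(const unsigned char *a, const unsigned char *b, size_t n)
    {
        return memcmp(a, b, n) == 0;
    }

    /* Sounder implementation of the same spec: the running time does
     * not depend on where the first difference occurs. */
    int mac_equal_constant_time(const unsigned char *a, const unsigned char *b, size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }

An open review of the protocol alone would bless both; only scrutiny of the implementation catches the difference, which is exactly the gap between blueprint and building.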

You worked on Palm security a few years ago, and we both heard Mudge and Kingpin present a paper on numerous insecurities in the Palm at USENIX. Do you agree with their findings?

Yes. They basically point out that the Palm OS isn’t a secure operating system, and that’s exactly true.

And what do you suggest?

Not that you shouldn’t use a Palm Pilot, but you should be careful what data goes in and out. If you accept random programs from others, they could contain viruses, just like on a PC. Unfortunately, there’s no antivirus software for the Palm Pilot that I’m aware of yet. So we’re back where we were with PCs over a decade ago, when viruses were starting to come out: we had never heard of antivirus software, and viruses spread pretty freely.

At least we don’t have the worm capability on the Pilot, because it’s hard for a program to spread from one Pilot to another autonomously. A program cannot jump from one Pilot to another without someone pointing the devices at each other and doing something. There’s at least that much containment.

But if you do let strange programs onto your Pilot, you have handed over complete control. Most people know that. Earlier versions of Windows had the same property: if you ran an arbitrary program or an ActiveX control off the Web, you gave some stranger complete control of your computer. Yet we continued to do things like online banking. So security isn’t a black-and-white issue. It’s economic. It’s about the benefit you get versus the risk you’re taking. It’s a risk management issue.

So some businesses can take big hits and not be motivated to upgrade security if insurance covers the loss. It’s a cost of doing business.

Exactly.

Is that frustrating, coming from the technology side? In your heart, do you still want to implement a maximum level of security?

It’s not that I want the maximum level of security. If the business takes a large hit, and it’s covered by insurance, that’s fine. But if I take even a fraction of that hit, I’m not covered as an individual by insurance at all. They may have their assets covered, but I don’t. So if a vendor gives me a product with faulty security on the assumption that the burden will be shifted elsewhere, I am exposed even though they’re not. That’s the kind of situation we want to avoid. If insurance coverage were extended to the consumers actually using the devices, the people who take the financial hit from poor security when personal information is leaked or private data is uncovered, that would be part of the risk management solution.

In 1995 you wrote an article on basic flaws in Internet security and commerce in which you said that the ease of attack and the subtle variations possible were striking. You illuminated numerous problems and said that these issues must be resolved before Internet security and commerce are realistic. How are we doing? Has much changed six years later?

Have you been hit by the SirCam worm?

It got in pretty easily.

Right. Back in 1988, we had the Morris worm, which got in through a buffer overflow. That was a pretty new thing at the time, a really cool way to penetrate a system.

And buffer overflows are still a problem.

Right! Thirteen years later, they still do buffer overflows. It’s crazy. We’ve learned nothing.
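
Goldberg’s complaint is easy to see in code. Here is a minimal C sketch of the bug class (the buffer size and input are illustrative, not taken from any particular exploit): the first function copies attacker-controlled input into a fixed-size stack buffer with no bounds check, the same pattern the Morris worm exploited in fingerd; the second is the bounds-checked fix.

    #include <stdio.h>
    #include <string.h>

    /* Classic stack buffer overflow: buf holds 16 bytes, but nothing
     * stops the input from being longer, so excess bytes overwrite
     * adjacent stack memory, including the saved return address. */
    void greet_broken(const char *name)
    {
        char buf[16];
        strcpy(buf, name);                     /* no bounds check */
        printf("hello, %s\n", buf);
    }

    /* The fix: a bounds-checked copy that can never write past buf. */
    void greet_fixed(const char *name)
    {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name); /* truncates, never overflows */
        printf("hello, %s\n", buf);
    }

    int main(void)
    {
        /* 32 bytes into a 16-byte buffer: undefined behavior in the
         * first function, a safely truncated greeting in the second. */
        const char *input = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
        greet_fixed(input);  /* greet_broken(input) would smash the stack */
        return 0;
    }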

Because applications still go to market that allow it. What will it take, 50,000 deaths on the highway before we get seatbelts?

I don’t think we’ll see any deaths from the SirCam worm. Now, we are doing better. I don’t want it to sound like I think we’ve gone nowhere. We are definitely doing much better than before. Now we understand secure coding practices. We understand how to not only build things securely but also, after they’re built, wrap them in armor so they stay secure. We do belt-and-suspenders kinds of things: put the system on a network behind a firewall so attackers can’t hit it, and inside an IDS, so that even if they penetrate the firewall, we can watch them doing it. And we have a managed security service watching our network in case we fall asleep. We have many levels of protection that didn’t exist 13 years ago.

How much of that is burden shifting, so that if you want to use a wireless network, you had better shift the security burden to a VPN?

That particular burden shift wasn’t the best. The wireless people promoted 802.11 and WEP [Wired Equivalent Privacy] as just as secure as a physical cable.

And your paper showed how empty those promises were.

Since our analysis, there has been the paper by Fluhrer, Mantin and Shamir, which goes further. WEP has now been broken in every conceivable way.
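
The core flaw reported in Goldberg’s own paper with Nikita Borisov and David Wagner is simple enough to demonstrate. WEP keys the RC4 stream cipher with a 24-bit per-packet IV prepended to the shared key; with only 2^24 IVs, keystreams inevitably repeat, and two packets encrypted under the same IV leak the XOR of their plaintexts. A self-contained C sketch with a toy key and toy packets (the RC4 routine follows the published algorithm; the 802.11 framing is omitted):

    #include <stdio.h>

    /* Generate outlen bytes of RC4 keystream for the given key
     * (standard key schedule followed by the output generator). */
    static void rc4_keystream(const unsigned char *key, int keylen,
                              unsigned char *out, int outlen)
    {
        unsigned char S[256];
        int i, j, n;
        for (i = 0; i < 256; i++) S[i] = (unsigned char)i;
        for (i = 0, j = 0; i < 256; i++) {            /* key schedule */
            j = (j + S[i] + key[i % keylen]) & 0xff;
            unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
        }
        for (n = 0, i = 0, j = 0; n < outlen; n++) {  /* output */
            i = (i + 1) & 0xff;
            j = (j + S[i]) & 0xff;
            unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
            out[n] = S[(S[i] + S[j]) & 0xff];
        }
    }

    int main(void)
    {
        /* 3-byte IV (sent in the clear) prepended to a 5-byte shared
         * key, as WEP does; both "packets" reuse the same IV. */
        unsigned char iv_and_key[8] = {1, 2, 3, 'k', 'e', 'y', '!', '!'};
        unsigned char p1[15] = "attack at dawn";
        unsigned char p2[15] = "attack at dusk";
        unsigned char ks[15], c1[15], c2[15];
        int i;

        rc4_keystream(iv_and_key, 8, ks, 15);     /* identical keystream */
        for (i = 0; i < 15; i++) { c1[i] = p1[i] ^ ks[i]; c2[i] = p2[i] ^ ks[i]; }

        /* An eavesdropper XORs the two ciphertexts: the keystream
         * cancels, leaving p1 ^ p2, with no key required. */
        for (i = 0; i < 15; i++)
            printf("%02x", c1[i] ^ c2[i]);        /* equals p1[i] ^ p2[i] */
        printf("\n");
        return 0;
    }

That keystream-reuse break is independent of the later Fluhrer-Mantin-Shamir attack, which recovers the key from RC4’s key schedule itself.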

Were they simply not telling the truth, then?

I wouldn’t say that. I really doubt that they knew about these attacks.

Is that because WEP was developed in private?

They tried to do the right thing. This just shows how hard the development process is. The IEEE process is basically open: anyone can participate in it. But they didn’t specifically invite the experts in the field to do so, so no one in the field was asking questions like, “What is this group over here doing?” If you’re developing anything having to do with privacy, security or cryptography, you have to find the experts and invite them in. To their credit, they have since done that.

You’re also making a subtle point. The process by which you understand how to create the process is just as important as technical development. Jeff Moss said years ago: “There’s too much for me to know, so the most important thing I need to know is what I don’t need to know, and the next most important thing is who knows it so I can get it when I need it.”

That’s exactly right. I had exactly the same revelation when I arrived at Berkeley. I was in a grad student dorm with people around me who had extremely detailed knowledge about fields I didn’t know existed. In Waterloo, I was surrounded by people in the same field, and we had basically the same kind of knowledge. At Berkeley, I realized how much I didn’t need to know, and I was so glad that others studied those things.

Implicit in your process was knowing how to access people and derive their value.

That’s what research and building and learning things are all about. It’s hard to learn something if you only try to teach it to yourself; in general, you won’t learn without a teacher, and knowing where to find that teacher is critical.

When is it most critical to open the development process to ensure security in the final product?

During development, when the system doesn’t actually exist yet. That’s the time to find and report bugs, before they affect the security and privacy of users. Once the system is deployed, it’s not as clear that you should post a bug to the Internet as soon as you find it. There are the usual issues: How much to tell which party? How much in advance?

I’m still a fan of full disclosure and of Bugtraq and lists like it. You do have an obligation to tell the people supplying the product and the people affected by it, and the “full disclosure” issues emerge in the latter case. Just because there’s a bug, it might not be interesting, and there’s no need to publish it. But if it affects users, those users need to know about it. Simply telling the company that supplied it does no good if they just sit on it and users continue to be exposed to the vulnerability. Even if users cannot get a fix themselves, because it’s a closed-source company and they have to wait for a patch, they should know they’re vulnerable, so they can do risk management and risk assessment.

There was a great hue and cry after Code Red about patching and a lot of finger-pointing because the patch had been available for a month. It’s so easy to cry, “Patch! Patch!” but when you’re dealing with complex systems with thousands of machines, complexity can be the enemy of security. You don’t just patch the system when you don’t know how that patch will affect the other parts of the system.

That’s right. It’s a hard problem. It’s one of the hardest things that IT managers and sysadmins have to deal with, and it has been for a long time. This is one reason that full-disclosure lists were created.

If I have a hacked-together distribution of some operating system, and someone says, “Oh, there’s a bug in this,” but I have modified “this,” then how do I know if I have the bug? How do I test it? How do I fix it? I can’t just use your patch because I have changed the system, and it won’t work. I have to know what the fundamental problem is, analyze whether or not my changes will affect that problem and be able to take a suggested fix and learn how to apply that fix to my system. It doesn’t help if I have thousands of machines, all of which are patched differently, which is poor admin practice. You’ll never be able to individually patch them all, so there should be some uniformity.

On the other hand–I said this was a hard issue–uniformity can lead to a monoculture, which is the worst thing you can do in a security environment. If everyone is running IIS, then everyone gets infected. I run an Apache server and see attacks from these Code Red machines all the time, and my Apache server laughs at them. So there’s a dichotomy: if you have a large network and make everything the same, it’s easy to manage, but then it’s easy to infect. Make things diverse, and you have more to do when you have a problem, but you’re more assured that at least part of your network will stay up while the other part goes away.

I’m imagining a savvy sysadmin doing everything you say who leaves the company after a few years. In terms of the capture and transmission of knowledge, documentation…

Documentation is the most important thing.

Have you seen any nightmare scenarios?

All the time. You see them at universities. A sysadmin may be a summer student who sets something up in July and leaves in September, and no one has any idea what he did. Eventually, you discover a cable running from here to there, and you take it out, and the whole thing crashes.

Are standards for documentation and accountability for documentation improving?

I don’t think there’s a uniform standard at all. You cannot say we’re doing a better job now than we were 10 years ago. It’s very hit-and-miss, determined by every corporation or even every sysadmin. Everyone knows you should do it, but is it more important to document what you did yesterday or fix the new problems that came up today?

You’ve done a lot of work with anonymous or pseudonymous publishing on the Web. What’s the state of the art?

It’s in active development. A workshop on the topic is having its second iteration in April, and the people working on technologies like the Freenet Project or Publius talk about what exactly is meant by anonymous and pseudonymous publishing.

What are we trying to achieve? What attacks are we defending against? Are they technical attacks or legal (usually it’s both), and what do we need to do to ensure robustness? Do we want to make a system censorship resistant, like Ross Anderson’s Eternity Service, where you can put things in and never take them out, and they spread like crazy and can’t be removed from the public network? Such systems can be used by whistle-blowers, people afraid of local law enforcement, anyone in a dangerous situation. We’re seeing governments apply their laws extraterritorially, such as the French government forcing Yahoo! to remove things from its Web site, which leads to the Internet only being safe for the safest person.

You have also been interested in creating secure forms of e-currency. We’re not doing so well with that, are we?

Unfortunately it’s not taking off. I want to see it take off. It’s annoying if nothing else that I can’t buy something online without using a credit card and revealing a great deal of information about myself. I can’t pay with cash online. Even more annoying is that we have the technology, but business issues surrounding deployment prevent its use. There’s just not enough ROI in the eyes of businesses to make it worthwhile.

A currency is only useful if it’s widely adopted. The DigiCash trial met with limited success back in 1996 or 1997 because few merchants accepted it. To their credit, they did provide for person-to-person payments, so you didn’t have to pay a special merchant as you do with a credit card. But they had flaws in execution. As a user, for example, you had to open an account at one particular bank in Missouri, and to do that you had to get up from your chair. That breaks the first law of Internet commerce: you cannot make your customer get up from his chair.

Ian, what keeps you awake at night? What challenge can’t you solve?

How do you get privacy-enhancing technologies out of the door? That’s really quite big. The cypherpunk motto was, “cypherpunks write code.” Well, we have all the code we need. How do we get it into the hands of consumers who need the protection, especially when they may not know that they need the protection? Most consumers are happy using credit cards all the time. Maybe we should let them be happy, or maybe we should educate them as to why they would benefit from having more privacy. They don’t see themselves losing anything. From their point of view, maybe there isn’t a problem.

I’ve given a credit card number over the Web and over the telephone and to a waiter who disappears into the kitchen. I occasionally have a mistaken charge. But as long as my liability is limited to $50, what’s the problem?

That’s right, but that doesn’t address the privacy issue, which is that everything you do with your credit card is compiled into a huge dossier. That’s where most consumers aren’t aware of the reality.

And when you show people that dossier, they freak out. Seeing it all in one place is shocking.

Absolutely. This is why we’re moving toward open dossiers. One of the principles of privacy policy in Canada, the U.S. and the E.U. is that you have to be able to see what information people have about you, so you can correct it if it’s wrong.

A few years ago, you wrote an article on privacy quoting the cypherpunks’ credo, which you stated as “privacy through technology, not through legislation…. If we can guarantee privacy protection through the laws of mathematics rather than the laws of men and whims of bureaucrats, then we will have made an important contribution to society. It is this vision which guides and motivates our approach to Internet privacy.”

Others who thought salvation would come through mathematics and cryptography have changed their emphasis because they realized that strengthening one aspect of security in effect weakens another. You have said that as well, correct?

That’s exactly right. Privacy and security aren’t simple issues and require a multifaceted approach.

Do you still agree with that credo?

I still think we need strong technology, because legislation alone isn’t enough. Strong mathematics and cryptography and good technical security are important pillars of any security or privacy solution, for many reasons. Laws change, locations change, the Internet is a global space, and the legislative issues differ from country to country. We want a more level playing field: with cryptography, I know my information is as protected when it travels over the Internet to some offshore island as it is here in Montreal. It certainly won’t enjoy the same legal protection there, so we have to give it the same technical protection.

Copyright © 2001 Information Security, a division of TruSecure Corporation
