A Mentor’s Mantra: An Interview with Stephen Northcutt

by rthieme on May 2, 2001

Q&A WITH STEPHEN NORTHCUTT

Former Navy man Stephen Northcutt has new marching orders: Train the defenders to think like their attackers.

INTERVIEWED BY RICHARD THIEME

Q: How did you get to SANS?

A: I’ve worked for the Department of Defense for most of my adult life. I was in the uniformed Navy and worked for the Defense Mapping Agency in 1982, writing password codes for a Sperry mainframe. I had the opportunity to switch to the Navy Lab in 1985 and did a variety of security- and network-related work–design as well as implementation.

In 1994, my Sun workstation got hacked. The source address was out of Australia–someone came in with a sendmail attack that let him execute an arbitrary program, and he decided to compile a backdoor. I actually saw it happening. I thought, “Wow, my disk is flashing!”

After that, I developed a real passion for security. It got to a point where there were no further promotions possible at the Navy Lab; they were very kind, but a little out of the mainstream. So I did a year in Washington, D.C., working for the Ballistic Missile Defense Organization. Being in a managerial position and not allowed to do any real work of my own was an eye-opener. Alan Paller, chief researcher at the SANS Institute, had offered to let me spend a couple of years at SANS and, after the Y2K watch, I decided it would be a wonderful opportunity.

Would you say you hit a “silicon ceiling” in the Navy? That wouldn’t bode well for retaining qualified people in the military.

Well, I’m one of those weird people for whom salary isn’t the primary motivation. My primary need is to drive the boat in the direction I want, which is a difficult thing for government to put up with.

In general, the military is a training ground for a lot of journeyman-level security professionals. They’ll get the education and experience they need, put in three to five years, then move into industry for a higher salary. Even so, I’ve been absolutely shocked in the last couple of years at the desperate need for fully competent people, let alone top managers.

You’ve said that the skill level of a high percentage of security professionals is extraordinarily low, that fewer than one in 20 have core competence in all aspects of computer security. That’s scary.

And it’s obviously not getting any better since the level of complexity or the amount of knowledge you have to master isn’t shrinking. I think we’re headed for an interesting future.

Of course, the manufacturers themselves may help, so that when you pull the OS out of the box and plug it in, it’s already hardened. If you want to screw it up, you’ll have to screw it up intentionally. Up until now, the OS has been pretty soft to start, and you’ve had to harden it yourself before connecting it to a network.

Ah, the good old days, when networks were a part of the geek subculture instead of a platform for global commerce.

It used to be kind of fun, actually. You started with an empty box and you had to load the OS onto the box. Of course, that was very inefficient–it would take three days to bring up the box. We still have echoes of that mindset today, except the stuff is pre-installed.

Do you see any hope for building systems with intrinsic security?

I do see some light at the end of the tunnel, and it comes from the notion of consensus and standards. Right now, we’re all flapping around wildly trying to figure out how to harden Windows 2000.

I could wave my hand and tell you, “We know how,” but that’s not the complete truth. There are lots of people working on this and finding out that it’s a little harder than we originally thought.

But once we’re able to say that this is the way security ought to look, then whenever software OEMs release another operating system, we’ll have a blueprint for them. You can see this starting to creep into the Solaris and Unix community. When the community comes to agreement on what ought to happen–not a 100 percent agreement, but, say, 80 percent–then a lot of it begins to creep into the OS level.

Do people really understand what’s at stake these days with computer security and information security?

No. It shakes me up every time I go into an airport lounge and listen to people in cubicles talking very loudly on cell phones about things I could use the minute I hear them. I think the best way to attack an organization would be simply to walk into its building. Most of the time, nobody seems to challenge you; I get happily surprised maybe once or twice a year.

In addition, the Internet itself is being used more and more for business, which means money is on the line, and that gets certain people’s attention. The Internet has its own frailties, with DNS and routing in particular, so you can easily launch an attack to disable the Internet infrastructure.

And servers at the Department of Defense, NASA and Department of Energy–do they have manifold weaknesses?

Yes, but the Pentagon has made phenomenal strides in improving information security. It’s done a fine job hardening that one spot. But other departments in the government are turning to outsourcing, and outsourcing is dangerous.

It’s dangerous because you may not be able to ensure the level of security you need and, in addition, the military and the government are also using commercial applications to a greater degree, which might mean programs that have exploits or backdoors.

That’s true, and it brings us right back to the lack of skilled people. When you go to outsourcing–application providers and the like–you’re saying, “I can’t hire enough people myself to run it, so I’m going to hire someone else to do it. They can get the people and they don’t have to do all my jobs.” But that means placing a large portion of your critical infrastructure in the hands of contractors. Is that comforting?

When you spoke about that intruder hacking your Sun workstation, it sounded personal. You have great enthusiasm for computer security and finding the “bad guys.” Is it personal?

Yes, it is. I felt violated when I realized that someone penetrated my computer. I’m not bearing a grudge after all these years, but I see the actual harm that’s being done. I know all about the “happy, harmless attackers” who just want to see what they can see, and that sounds wonderful. But I keep helping people who have to wipe and reload their operating systems because of crackers, and that’s a huge waste of time and energy. It would be wonderful if we could turn some of that time and energy toward writing better software.

What’s the greatest threat? Attackers from outside the perimeter or insiders?

Insiders are without a doubt the largest threat. They know where the crown jewels are. They know the processes on the inside. They already have logins. If they have something to gain, there’s not much to prevent them from doing the wrong thing. The people who spend all day surfing the ‘Net on their employer’s time–the ones who are resentful because they were passed over for promotion–if they can find a way to use their access for their own benefit, they probably will.

Do you think the level of security and surveillance will improve?

You know the problem with that: The greater the security, the harder it is to get anything done. I’ve worked in some pretty secure facilities. I don’t get the feeling, for example, that a facility with a two-person rule really trusts me; they’re saying, “We have safeguards in place because the information has such great value that we cannot trust you or anyone.” I understand that, but it can mean 15 man-hours to do eight hours’ worth of work.

In general, is society gravitating in that direction?

I don’t think it can. Everything I see is leading toward greater productivity, which is one reason we don’t have more security. In the short run, enhanced security can diminish productivity.

The Consensus Intrusion Database you’re compiling at SANS is a shared database of malicious source IP addresses and intrusion methods. What’s the long-term value of a database of IP addresses, given how clever attackers are at spoofing, coming in from different backbones, and so on?

It has exactly one value–any entry that shows up on your network is one more thing to check. Every kind of signature we use is prone to false positives, so this is a chance to catch something you might otherwise miss. There are a number of flaws in the system, but the more people who enter data into it, the more accurate the top hundred or so entries will be.

Spoofing is great for a number of things, but if someone spoofs the source address while trying to establish a connection to download data, the replies go back to the spoofed address–the connection never completes, so the data never reaches him. In addition, even a spoofed address may have some value to the good guys, because it can call your attention to activity you would otherwise overlook.
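To make the watchlist idea concrete, here is a minimal sketch of how a shared list like the CID’s top entries might be checked against a local log. Everything in it is illustrative–the file names, the log format and the assumption that the source IP is the first field on each line:

    # Sketch: flag log entries whose source IP appears on a shared watchlist.
    # File names and log format are hypothetical; any IDS or firewall log
    # with a source-IP field would work the same way.

    def load_watchlist(path):
        """Read one IP address per line, skipping blanks and comments."""
        with open(path) as f:
            return {line.strip() for line in f
                    if line.strip() and not line.startswith("#")}

    def flag_hits(log_path, watchlist):
        """Yield log lines whose first field (the source IP) is on the watchlist."""
        with open(log_path) as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] in watchlist:
                    yield line.rstrip()

    watchlist = load_watchlist("cid_top100.txt")   # hypothetical watchlist export
    for hit in flag_hits("firewall.log", watchlist):
        print("watchlist match:", hit)

A match is never proof of an attack; as Northcutt says, it is simply one more thing to check.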

You headed the team that developed the Shadow IDS, which is open source, correct?

I prefer “public domain” to “open source.” Open-source software can still have some restrictions on it, but public-domain software means it’s out there for everyone’s inspection and use.

Is Shadow the best IDS?

Not anymore. It was never intended to be a standalone IDS; it was designed to complement boxes like Network Flight Recorder (NFR). A long time ago, I had a network intrusion detection (NID) box in the Navy–NFR is very similar to a NID–and I needed something that could do what my NID couldn’t: dampen false positives and false negatives, and help me find things I couldn’t otherwise detect. That was the goal of the project.

Since then, Snort (a libpcap-based packet sniffer and logger) has evolved. Snort is open source and can be used as a lightweight network IDS. It does string matching all in one box, and you can run Shadow scripts on Snort data–the heart of the data they both store is exactly the same. You can use Snort as your computation platform for real-time string matching, and if you want to run scan-detect code every hour, or even every 24 hours, you can pull the scripts right out of Shadow and run them in a Snort environment to get a pretty good scrub of the data.
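The kind of periodic scan-detect pass he describes is easy to sketch. The version below is a hypothetical illustration, not Shadow’s actual code: it reads a text export of packet headers (one “source IP, destination IP, destination port” triple per line) and flags sources that touched an unusual number of distinct ports or hosts during the interval:

    # Sketch of a batch scan-detect pass in the Shadow style: run it every
    # hour (or every 24 hours) over a text export of packet headers.
    # The input format and thresholds are hypothetical.
    from collections import defaultdict

    PORT_THRESHOLD = 20   # distinct ports from one source suggests a port scan
    HOST_THRESHOLD = 20   # distinct hosts from one source suggests a sweep

    def detect_scans(header_log):
        ports = defaultdict(set)   # src_ip -> destination ports seen
        hosts = defaultdict(set)   # src_ip -> destination hosts seen
        with open(header_log) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 3:
                    continue
                src, dst, dport = fields[0], fields[1], fields[2]
                ports[src].add(dport)
                hosts[src].add(dst)
        for src in ports:
            if len(ports[src]) >= PORT_THRESHOLD:
                print(f"{src}: possible port scan ({len(ports[src])} distinct ports)")
            if len(hosts[src]) >= HOST_THRESHOLD:
                print(f"{src}: possible host sweep ({len(hosts[src])} distinct hosts)")

    detect_scans("hourly_headers.txt")   # hypothetical hourly export

Batch analysis of this kind trades immediacy for context: a slow scan that never trips a real-time sensor still stands out when an hour or a day of traffic is summarized per source.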

How common is the use of open-source–or as you call it, public-domain–software in the government?

Not very common, and I’m sorry that’s so. It used to be that anything the taxpayer paid for should be released back to the taxpayer. But when we were helping the FBI on the code for tracing distributed denial-of-service attacks last March, we suggested they release the software, and they said, “No, we have a big investment in this.” I thought, “Well, no, you don’t have a big investment in this–I do. I pay taxes.” Far too often, after they’ve built something, government groups try to turn it into a business. That confounds me. It doesn’t square with my understanding of the law and what should be happening. It’s not the purpose of government to compete with private industry.

Would you agree that engineers, for whom configuring complex systems is a game, are being tamed by a marketplace that insists they recognize that the end user isn’t always an engineer?

I completely agree. A number of IDSes are simply too hard to use. Plus, you can’t work on a given IDS every day of your life. You’ll get it running fairly well with a certain set of filters, and then life is going to call–your employer will have 56 other things for you to do. When you come sweeping back in a couple of months to check on your filters, you’ll have to start by opening the book again because the system is so complex. It’s just a matter of time before it stops being used.

You’ve pointed out that the hacker community shares information very well, while the computer security community is just learning how to share information. Has the development of Shadow had an influence on sharing information? Do people notice that the strength of that tool is linked to the methodology by which it was developed?

Over time, as people became more familiar with Shadow, they supported it more and more. There’s a group that freely shares information on the program. In Hawaii, there’s a joint project on IDS where the various government services are working together; it’s not all peace and harmony, but nobody is sticking a knife into someone else’s back, either. They realize that they’re all on an island that has a whole lot of fiber-optic cable coming in and leaving; if there’s an asymmetric attack, they’ll be at ground zero. So they’re coming together and doing some solid work. I don’t know if Shadow will ever be completely open source, or public domain, but it’s certainly available within the government.

So hacker-style sharing is good.

Yes, but there’s another kind of sharing that’s even more important, and I’m trying to model it within the security community–a mentoring program. If you enter a hacker chat room and don’t behave like a total idiot, if you’re respectful and quiet and find your place, before too long they’ll help you. They’ll toss you things to read or give you quick answers.

We’re starting to experiment with this kind of mentoring at SANS. I don’t know if it’s going to work, but we’re taking people who have already passed the course and asking them if they’ll serve as mentors for those who are struggling. We’re also trying the Local Mentoring Project, which may be the craziest idea in the world. If you care about your community and want people in your community to learn what you’ve learned, we’ll help. We’re offering discounted classes for people who come in with someone who has already done well and who will answer their questions. Mentoring depends on skilled people being willing to help others. If this succeeds, it will up my faith in humanity.

Jeff Moss, the founder of the Black Hat Briefings, said that in a complex world the most important thing he needs to know is what he doesn’t need to know. That is, he needs to know who knows it so he can get it when he needs it. That’s the essence of cooperative learning, but in an online hacking community there’s a different dynamic. There are no numbers to make; they can develop a community that regulates itself. How do you recreate that in a commercial environment that’s trying to make its numbers?

We can only give you tools and tell you places to look; you have to learn on your own. One answer is to assign a wickedly hard intrusion detection practicum as part of the certification. Students have to work 60 to 70 hours on it and learn a lot that they didn’t learn in the class. Before we charged for the certification, maybe 10 percent stayed with it and completed that task–probably a similar percentage to what you find in the hacking community.

Those who completed it, though, would say, “I learned as much doing this as I did in the whole class,” and of course, they did. That’s how you learn, by applying your knowledge. Once you’re invested in something, it’s harder to walk away from it.

If you can’t learn on your own, you’ll never do computer security once you’re out of the class.

That’s right. When I’m teaching a track, by day three I say, “Get in there and bet on this thing, because I’m only going to be here two more days. Once you leave here you’ll have to make those bets, so you’d better start doing it now.”

It strikes me that everyone I know who is really sharp in computer security also has, or at least understands, the mind of a hacker. Is it possible to secure a system if you don’t know how to attack it?

I couldn’t agree more. The best class I teach is “e-warfare.” I take defenders who have only been defenders and, within a few days, I make them think about how to attack things. At first they refuse. They have a hard time thinking offensively. But they start to have some fun when they realize the value they’re getting as defenders by taking aggressive action. That’s when those light bulbs come on in their brains; it’s an intense moment.

Early in 1999, you said, “The good news is, of everything that I’ve seen in 1998 and 1999 so far, there is nothing that really presents a danger to a well-configured, proxy-based firewall site. Almost every technique that I’ve seen in use will not pass through that firewall; you do have to watch your backdoors, but that’s really good news.” Is this still true?

Yes, but with a modifier. You also have to have a content sensor for e-mail attachments. The improvements in malicious code are significant. Insiders are a big threat, but any software running on any system in your organization is an “insider” as well. It has the same advantages as any human insider.
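A content sensor of the kind he describes can be as simple as rejecting executable attachment types before mail reaches the desktop. Here is a minimal sketch–the blocklist and message file name are illustrative, and extension checking alone is obviously not a complete defense against malicious code:

    # Sketch of a content sensor for e-mail attachments: flag messages that
    # carry executable attachment types. Blocklist and file name are
    # illustrative only.
    import email
    import os

    BLOCKED_EXTENSIONS = {".exe", ".vbs", ".js", ".scr", ".bat", ".pif"}

    def suspect_attachment(raw_bytes):
        """Return the offending filename if the message carries a blocked
        attachment type, else None."""
        msg = email.message_from_bytes(raw_bytes)
        for part in msg.walk():
            filename = part.get_filename()
            if filename:
                ext = os.path.splitext(filename)[1].lower()
                if ext in BLOCKED_EXTENSIONS:
                    return filename
        return None

    with open("incoming.eml", "rb") as f:   # hypothetical message file
        bad = suspect_attachment(f.read())
    if bad:
        print("quarantine: blocked attachment type:", bad)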

Do you think it’ll take a large-scale attack to wake people up?

No, I don’t think so. The price tag for a virus like Melissa was put at $4 billion to $5 billion. That’s probably overstated, but it should have been enough to wake people up. As near as I can tell, though, it was business as usual–nobody got rid of Outlook. People seemed to say, “Well, we survived,” and kept going. People are pretty tolerant; even if someone drops the top-level DNS servers and shuts down the Internet for two days, they’ll just deal with it.

I think it’s the smaller side, not the big side, that has impact–the continuing aggravation and inconvenience from viruses and denial-of-service attacks, a nagging fear that someone is looking at your system, the occasional system compromise that makes you stay up all night to clean it up. I see these small Chinese water-torture droplets as the real drivers for changing behavior.

What’s on the horizon?

Right now, I’m looking into forensic incident response. Sooner or later, if you think about it, you’ll take a hit–someone will delete your file system or whatever. The question is, how will you survive with the least possible damage and get back in business as fast as possible?

This isn’t sexy and it’s difficult to sell today, but forensics is really hot, especially as a means for teaching people incident handling. If you teach them forensics, they go digging into the system–if it’s network forensics, into the network–and they’ll say, “Oh, I had this anomalous pattern and I don’t know what it is yet, but I don’t like it.” A little later they’ll say, “Now I know why I don’t like it; it does these bad things.” At that point, they’ll have initiated an appropriate response.

If the community were really to turn its collective mind toward applying everything we know about programming, good practice and so on, we could learn to develop incident-response techniques that would reduce the amount of damage tremendously.

Any final thoughts?

You know, we’re just not making engineers like we used to. Engineering schools often have more foreign students than U.S. citizens. That’s great for other countries, but it ought to wake us up. On the training and professional side, I think everyone will have to learn some engineering along the way. We’re going to have to embed sound engineering principles in every SANS track, or in five years we’ll be sitting here linked to a global technology that almost nobody understands.

Copyright © 2001 Information Security, a division of TruSecure Corporation
