Lessons from the First Computer Virus
Published on: 25th Oct 2013
By: David Ng
When the first computer virus brought 10 percent of the computers connected to the Internet to a screeching halt 25 years ago, it grabbed national headlines even though it affected only about 6,000 computers. Since then, the number of Internet-connected devices has climbed into the billions, but according to the man who analyzed that first computer worm, our security awareness is still stuck about where it was when fewer than 100,000 computers were connected to the Internet.
"Based on what people have done since then in terms of security, I don't think it [the Morris Worm] taught us a lot." said Eugene Spafford, a Purdue University professor of computer science. "I don't think we learned anything from it, and we're still not learning anything from it."
The Morris Worm that Spafford analyzed back in 1988 is widely considered to be the first major computer virus. It was released on Nov. 2 by Cornell graduate student Robert Tappan Morris. Within days, USA Today had proclaimed it the "worst computer virus outbreak in history." Morris, who declined an interview request, claimed the virus was intended to gauge the size of the Internet, but instead replicated itself, resulting in a denial of service attack. In the aftermath, Morris became the first person convicted of violating the 1986 Computer Fraud and Abuse Act. Since then he has earned a Ph.D. and is now a professor in the MIT Electrical Engineering and Computer Science department. Many, including Spafford, have supported a pardon for Morris.
When the Morris Worm was released 25 years ago, affecting two of the eight computers in his university lab, Spafford was an assistant professor. Since then he has become a recognized expert on information security, digital forensics and privacy technology, and advises institutions and companies such as the FBI, DOJ, Intel and Microsoft. In an interview just before the 25th anniversary of the Morris Worm, Spafford discussed how the virus changed perceptions of security and how you can only hope to reduce security risk, not eliminate it.
Your name is forever linked with the Morris Worm. How did that come about?
I had machines that I was responsible for in my lab that were affected by the worm program. Our lab had probably eight machines and two of them were affected. I also had a number of people on the [Usenet] network who were turning to me asking if I had any information. So I quickly used the tools at my disposal, including a mailing list with managers, some students and really talented IT staff at the university, to start providing details and pumping them out to the network.
What did you find when analyzing the Morris Worm?
I found software that was clearly written by a couple of different authors. There was some sophisticated code, but also some very glaring mistakes, including crude algorithms that could have been made better, and a framework suggesting it could have been deployed against a much larger set of machines and operations than it was.
Did the Morris Worm change security forever?
Prior to the Worm, we were kind of part of a closed community where a lot of people assumed good intent and responsible behavior. The majority were academics who never really thought about the idea that others might have bad intent. There were 50,000 to 60,000 machines connected to the Internet, and now there are some companies that have more than that. Many of those machines back then were shared machines that were used mainly for research and small business. The Internet was not commercial; there was no World Wide Web.
The day after the worm, an awful lot of people were shocked that such an abuse could occur. A lot of people outside the academic and research communities suddenly became aware of networking, became aware that malicious software could be written and that really changed a lot of people's perceptions. It woke them up to the possibilities.
Would you say the Morris Worm was a good thing, maybe a wake-up call?
I don't think I'd classify it as a good thing. I would classify it as inevitable. Somebody would have done something like that eventually; this just happened to be the first.
The year after the Morris Worm outbreak you said, "The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts." Still believe that?
If anything, given supply chain attacks and current passive eavesdropping mechanisms, it's just as valid. There are more ways to attack than what was feasible when I first started saying variations of that quote 40 years ago. Security is really risk reduction and not elimination.
Do you believe that Robert Morris didn't mean to cause any harm?
I think he made a severe error in judgment. I think what he did was wrong. But much to his credit, he owned up to it. He took the punishment for it. He did not do anything to make money off of it or brag about it, and he went on to a distinguished career in computing, where he's earned a Ph.D., written papers and started some companies. That's all very much to his credit, and it reinforces that what happened was an error - a severe error. It's fairly clear that it didn't play out anywhere close to what he had intended. In that regard, in retrospect, he probably got a more severe punishment than he deserved, because he was the first and at the time it was very, very major.
About 10 years ago a couple of us informally were exploring the idea of seeking a pardon for him to get the felony off his record. We were rebuffed by the officials we talked to. With the passage of time it's fairly clear that his behavior is in a different class from the people writing malware now.