[ISN] The Next 50 Years of Computer Security: An Interview with Alan Cox



by Edd Dumbill

Author's note: Alan Cox needs little introduction--most will know him
for his long-standing work on the Linux kernel (not to mention his
appreciation and promulgation of the Welsh language among hackers).  
Cox is one of the keynote speakers at EuroOSCON this October, where he
will talk about computer security.

According to Alan Cox, we're just at the beginning of a long journey
into getting security right. Eager for directions and a glimpse of the
future, O'Reilly Network interviewed him about his upcoming keynote.

Edd Dumbill: You're talking about the next 50 years of computer
security at EuroOSCON. How would you sum up the current state of
computer security?

Alan Cox: It is beginning to improve, but at the moment computer
security is rather basic and mostly reactive. Systems fail absolutely
rather than degrade. We are still in a world where an attack like the
Slammer worm combined with a PC BIOS eraser or disk-locking tool could
wipe out half the PCs exposed to the internet in a few hours. In a
sense we are fortunate that most attackers want to control and use
systems they attack rather than destroy them.

ED: Linux sysadmins see a security advisory and fix practically every
day now. Is this sustainable, and does it harm Linux that this is
necessary?

AC: It isn't sustainable and it isn't going to work forever. The time
between bug discovery and exploit has dropped dramatically, and better
software tools will mean better and faster-written exploits, as well
as all the good things.

I think it harms Linux perhaps less than most systems because Linux
security has been better than many rivals. However, even the best
systems today are totally inadequate. Saying Linux is more secure than
Windows isn't really addressing the bigger issue--neither is good
enough.

ED: You say that we're only just at the beginning of getting computer
security right. What are the most promising developments you see right
now?

AC: There are several different things going on. Firstly, the
once-stagnant world of verification tools has finally begun to take
off and people have started to make usable code verification and
analysis tools. This helps enormously in stopping mistakes getting
into production.

Related to this, languages are changing and developing. Many take some
jobs away from the programmer and make it harder or nearly impossible
to introduce certain mistakes. Java, for example, has done a lot to
make memory-allocation bugs and many kinds of locking errors very hard
to write.
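
To make that concrete, here is a minimal C sketch (an illustration,
not code from the interview) of the two bug classes in question--a
lock leaked on an early return, and a use-after-free--both of which
Java rules out by design:

    /* Illustrative only: two classic C bug classes. */
    #include <stdlib.h>
    #include <string.h>
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static char shared[16];

    int update(const char *value)
    {
        pthread_mutex_lock(&lock);
        if (strlen(value) >= sizeof(shared))
            return -1;          /* BUG: early return leaks the lock */
        strcpy(shared, value);
        pthread_mutex_unlock(&lock);
        return 0;
    }

    int main(void)
    {
        char *name = strdup("alan");
        free(name);
        name[0] = 'A';          /* BUG: use after free */
        return update("cox");
    }

In Java the second bug cannot be expressed at all, because the garbage
collector owns deallocation, and a synchronized block releases its
lock on every exit path, early returns and exceptions included.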

The second shift has been towards defense in depth. No-execute flags
in processors and software emulation of them, randomization of the
location of objects in memory and SELinux help control, constrain and
limit the damage an attacker can do. That does help. There have been
several cases now where boxes with no-execute or with restrictive
SELinux rulesets are immune to exploits that worked elsewhere.
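
As a rough illustration (again mine, not Cox's), the exploits those
defenses blunt typically start from C code like this:

    #include <string.h>

    /* Illustrative only: the classic stack overflow. */
    void greet(const char *input)
    {
        char buf[32];
        strcpy(buf, input);     /* BUG: no bounds check; a long input
                                   overwrites the saved return address */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);
        return 0;
    }

With a no-execute stack, shellcode smuggled in through buf lands in
memory the processor refuses to run; with randomized memory layout,
the attacker no longer knows where to point the corrupted return
address. Neither removes the bug, but either can turn a remote
compromise into a mere crash.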

SELinux also touches on the final area--the one component of the
system you cannot verify, crash test, and debug: the user. Right now,
systems rely on user education and reminding users "do not install
free screen savers from websites" and the like. The truth is, however,
that most users don't read messages from their IT staff, many don't
understand them, and most such advice is forgotten within a month.
SELinux can be used to turn some of these reminders into rigid policy,
turning a virus outbreak into a helpdesk call of "the screen saver
won't install."

This last area is very important. We know the theory of writing secure
computer programs. We are close to knowing how to create provably
secure computer systems (some would argue we can--e.g. EROS). The big
hurdles left are writing usable, manageable, provably secure systems,
and the user.

It's important perhaps to point out here that secure programs,
reliable programs and correct programs are all different things.  
Knowing how to write provably secure programs is very different from
saying we know how to write reliable or correct programs.

ED: Can security in software development be meaningfully incorporated
into tools, so it doesn't end up stifling the productivity of
programmers?

AC: The current evidence is yes. Many of the improvements actually
increase programmer productivity: they take away tedious tasks like
memory management, they identify potential bugs at compile time and
save the programmer from days of chasing them, and many of them use
labeling techniques where you have to indicate when you mean to do
unusual things--which actually makes code easier for other humans to
read.

There is no evidence that sparse has slowed kernel development, that
tainting features have hindered Perl, or that Java's memory management
has harmed most programmers' productivity.
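
sparse is a good example of such labeling in practice. Below is a
simplified sketch of the kernel's __user annotation; the macro is
condensed from the kernel's real headers, and copy_from_user is only
stubbed here:

    /* When compiled under sparse (__CHECKER__), __user marks
       pointers into userspace; plain gcc sees nothing. */
    #ifdef __CHECKER__
    # define __user __attribute__((noderef, address_space(1)))
    #else
    # define __user
    #endif

    /* Stub for the real kernel helper. */
    unsigned long copy_from_user(void *to, const void __user *from,
                                 unsigned long n);

    int read_flag(const int __user *uptr)
    {
        int flag;

        /* return *uptr; -- sparse would warn "dereference of
           noderef expression": user pointers must go through
           helpers, never be dereferenced directly. */

        if (copy_from_user(&flag, uptr, sizeof(flag)))
            return -1;          /* -EFAULT in real kernel code */
        return flag;
    }

The label costs one word per declaration; in exchange, sparse catches
whole classes of user/kernel pointer confusion mechanically at compile
time.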

The tools are doing by machinery what is hard to do by hand. Bad tools
could slow people down, but good tools do not.

ED: Isn't there a fundamental level at which security concerns and the
freedom of individuals to innovate are opposed? Is there an end in
sight to open source software created by small numbers of people?

AC: There are areas where they come together--obvious ones are
safety-critical systems. It's just possible that you don't want
nuclear power station employees innovating on site, for example.

There are 'security' systems such as 'trusted computing' that can be
abused by large corporations to block innovation, and unfortunately
the EU ministers seem to want to help them, not their citizens.  
Whether the EU commission is corrupt, incompetent, or just misguided
is open to debate but the results are not pretty. We've seen that with
the X-Box. Microsoft sells you a product and then threatens to sue you
for using it to the full.

Those same tools, however, are valuable to end users, providing they
have control over them. The same cryptographic technology that will
let Apple lock their OS to Apple-branded x86 computers is there for me
to
keep personal data secure if a future laptop is stolen. It is a tool,
unfortunately a tool that can be easily abused.

To a homeowner, a secure house is generally good, but if you lose
control of the key, it can be a positive hindrance. TCPA is no
different.

ED: Where is the ultimate driving force for implementing secure
software going to come from? It seems that regulatory enforcement,
such as in the pharmaceutical industry, might be the only way to
properly protect the consumer.

AC: At the moment it is coming from the cost of cleaning up. Other
incentives come from statutory duties with data protection, and also
from bad publicity.

In the future they might also come from lawsuits--for example, if an
incompetently run system harms another user--or from Government. In
theory, as we get better at security, the expected standard rises and
those who fail to keep up become more and more exposed to negligence
claims.

The bad case is that someone or some organization unleashes a
large-scale internet PC destroyer before we are ready and legislation
gets rushed through in response. That will almost certainly be bad
legislation.


Edd Dumbill is Editor at Large for O'Reilly Network, and co-author of
Mono: A Developer's Notebook. He also writes free software for GNOME,
and packages Bluetooth-related software for the Debian GNU/Linux
distribution. Edd has a weblog called Behind the Times.

Copyright © 2005 O'Reilly Media, Inc.
