[ISN] Insecurity through obscurity

InfoSec News isn at c4i.org
Thu Jun 9 01:16:40 EDT 2005


Opinion by Jian Zhen
JUNE 08, 2005

Security through obscurity is probably one of the oldest tricks in the
security book.

The basic premise is that people try to ensure security by hiding
certain details of their software or architecture design from regular
users. This is equivalent to someone hiding a house key under a potted
plant in front of his house.

However, Auguste Kerckhoffs, a 19th-century Dutch cryptographer,
said it should be assumed that attackers know the design of the entire
security system, except for the keys. This concept, known as
Kerckhoffs' law, basically rejected the notion of security through
obscurity (your key hidden under your potted plant) and suggested that
a system should be secure even if everything's public knowledge,
except the key.
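Kerckhoffs' law can be illustrated with a short sketch (names and key are hypothetical) using Python's standard hmac module: the algorithm, HMAC-SHA256, is public knowledge, yet an attacker who knows everything about the system except the key still cannot forge a valid tag.

```python
import hmac
import hashlib

# The ONLY secret in the system. Everything else -- the algorithm,
# the message format, this very source code -- is assumed public.
SECRET_KEY = b"server-side-secret"  # hypothetical key for illustration

def sign(message: bytes) -> str:
    """Produce a tag that cannot be forged without SECRET_KEY,
    even though the algorithm itself is published."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"applicant-id=12345")
```

Here the design is the opposite of obscurity: publishing the code costs nothing, because the security reduces entirely to the secrecy of the key.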

Most administrators and developers these days are somewhat familiar
with security concepts such as viruses, worms, buffer and heap
overflows, cross-site scripting and SQL injection. Since these
concepts are fresh in their minds, they take explicit precautions to
avoid these traps.

However, they continue to develop software and products that rely on
hiding certain trivial information, such as URLs, usernames or other
session information, and hope that users won't find it. They also
hide this information in obvious places, such as hidden fields of a
Web page or an unadvertised directory on a Web server.
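Why hidden fields offer no protection is easy to demonstrate. The sketch below (the page content and URL are hypothetical) uses Python's standard html.parser to show that anything the server sends to the browser, "hidden" or not, is readable with a plain view-source and can be reused to construct a URL.

```python
from html.parser import HTMLParser

# A hypothetical page that "hides" a record ID in a hidden form field.
# Everything sent to the browser is visible to the user via view-source.
PAGE = '<form><input type="hidden" name="applicant_id" value="12345"></form>'

class HiddenFieldFinder(HTMLParser):
    """Collect every hidden input field from an HTML document."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields[a.get("name")] = a.get("value")

finder = HiddenFieldFinder()
finder.feed(PAGE)

# The "hidden" value is trivially recovered and pasted into a URL.
url = f"https://portal.example.com/status?id={finder.fields['applicant_id']}"
```

A hidden field is a rendering instruction, not an access control: it only tells the browser not to draw the value on screen.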

A case in point: Last March, Harvard Business School, along with a few
other top business schools, suffered a huge embarrassment because its
admission portal had a "break-in," as university officials called it
(see story) [1]. ApplyYourself.com, a company that handles
applications for Harvard and other elite institutions, had a Web
portal where applicants could check on the status of their
applications. Generally, Harvard's decisions go out on March 30.
However, one applicant figured out a way to obtain the status
before that date and posted the technique on a Web site for
others to try. In the end, a total of 119 applicants tried the
method. After finding out, Harvard decided to reject all 119
applicants regardless of their admission status (see story) [2].
Stanford University recently made a similar decision, rejecting 41
applicants who tried the same method.

Lessons learned

We are not here to argue whether Harvard and Stanford made the right
decision or whether the action taken by the 119 applicants was
ethical. However, there are some lessons to be learned here.

First of all, ApplyYourself.com's method of hiding the admission
status from the applicants was a great example of security through
obscurity. In order to obtain the status early, the users took
information that was readily available to them, modified the URL in
their browsers and got access to their own admission status.

There are at least two major mistakes here. First, ApplyYourself.com
hid an ID field that users were not supposed to see in the Web page
source. This ID was then used to construct the URL that would give the
user the admission status.

Second, ApplyYourself.com assumed that users would not have knowledge
of the URL that would provide the status. However, anyone who applied
to these schools through ApplyYourself.com would have seen the URL,
and would have known what the URL looked like, as well as the
parameters required to construct the URL. Given that this URL was
provided to previous applicants, current applicants could easily
obtain it by simply asking.

These two grave mistakes left ApplyYourself.com scrambling to patch
the security holes.
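The missing control in both mistakes is the same: the server trusted a client-supplied ID instead of checking it against the authenticated session. A minimal sketch of that check (function names and data are hypothetical, not ApplyYourself.com's actual code) might look like this:

```python
# Hypothetical decision store: applicant ID -> admission status.
DECISIONS = {"12345": "pending", "67890": "pending"}

def get_status(session_user_id: str, requested_id: str) -> str:
    """Return an admission status only if the ID in the request
    belongs to the authenticated user.

    Never trust an ID taken from a URL or a hidden field; the client
    fully controls both, so authorization must happen server-side.
    """
    if requested_id != session_user_id:
        raise PermissionError("not authorized for this record")
    return DECISIONS[requested_id]
```

With this check in place, knowing what the status URL looks like, or reading the hidden ID out of the page source, gains an attacker nothing: the obscurity is gone, and the security remains.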

Another good example of security through obscurity was demonstrated
when hackers compromised Cisco Systems Inc.'s corporate network and
stole more than 800MB of source code (see story) [3]. This incident
caused quite a stir in the IT community, since Cisco's routers carry
a majority of the Internet's traffic. Any security flaws in the
source code could become public, and the publication of such
vulnerabilities -- still a possibility -- could cause major havoc on
the Internet, possibly bringing it to its knees.

Microsoft Corp. has also experienced similar embarrassing incidents.  
In February 2004, portions of the source code for the Microsoft
Windows NT and Windows 2000 operating systems were leaked (see
story)[4].  The leaked source code could potentially allow hackers to
identify security vulnerabilities in the Windows operating systems.
Given the popularity of Windows in both consumer and corporate
environments, this leak could be devastating to the whole Internet.

All these examples demonstrate the danger of the
security-through-obscurity premise. There are many articles, books and
seminars on this topic. Companies and software developers need to
start with Kerckhoffs' law: assume that the algorithm and design of
the software are known, and design security into products and
software from the beginning instead of retrofitting or patching
security holes later.

[1] http://www.computerworld.com/securitytopics/security/story/0,10801,100206,00.html
[2] http://www.computerworld.com/databasetopics/data/story/0,10801,100261,00.html
[3] http://www.computerworld.com/securitytopics/security/story/0,10801,93215,00.html
[4] http://www.computerworld.com/softwaretopics/os/story/0,10801,90200,00.html
