Security through obscurity

In cryptography and computer security, security through obscurity (sometimes security by obscurity) is a controversial principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to ensure security. A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known, and that attackers are unlikely to find them.

For example, if somebody stores a spare key under the doormat in case they are locked out of the house, they are relying on security through obscurity. The theoretical security vulnerability is that anybody could break into the house by unlocking the door with the spare key. The owner simply believes that the key's location is not publicly known and that a burglar is unlikely to find it. In practice, since burglars know the likely hiding places, the owner would be ill-advised to rely on this.

In cryptography, the opposite of security by obscurity is Kerckhoffs' principle, stated in the 1880s, which holds that system designers should assume the entire design of a security system is known to all attackers, with the exception of the cryptographic key: "the security of a cypher resides entirely in the key". Claude Shannon rephrased it as "the enemy knows the system". Historically, security through obscurity has proven a weak foundation in cryptographic matters: obscure codes, ciphers, and cryptosystems have repeatedly fallen to attack regardless of how well their vulnerabilities were hidden.
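Modern cryptographic libraries embody Kerckhoffs' principle directly: the algorithm is completely public and only the key is secret. As an illustrative sketch (the key and message values here are made up), the HMAC-SHA-256 construction from Python's standard library shows the pattern — the construction is specified openly in RFC 2104, yet an attacker who knows everything except the key cannot forge a valid tag:

```python
import hashlib
import hmac

# The HMAC-SHA-256 construction is fully public (RFC 2104 / FIPS 198-1);
# per Kerckhoffs' principle, security rests entirely on the secret key.
key = b"example-secret-key"   # the only secret component (illustrative value)
message = b"the enemy knows the system"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    # Verification recomputes the tag with the same public algorithm;
    # compare_digest avoids timing side channels in the comparison.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Publishing this code costs nothing: without `key`, recomputing or forging `tag` requires breaking SHA-256 itself, not discovering a hidden design.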

The full disclosure movement goes further, suggesting that security flaws should be disclosed as soon as possible, delaying the information no longer than is necessary to release a fix or workaround for the immediate threat.


Arguments against security by obscurity

Many argue that security through obscurity is flawed: if a system's security depends solely or primarily on keeping an exploitable weakness hidden, then the security is easily compromised as soon as that weakness is discovered. Keeping the details of widely used systems and algorithms secret is also difficult in practice. In cryptography, for example, a number of proprietary ciphers have become public knowledge, either through reverse engineering (e.g. A5/1) or through a leaked description (e.g. RC4).
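RC4 illustrates the point well: once its description leaked in 1994, the "secret" algorithm turned out to be a few lines of code, and it was reimplemented everywhere. A minimal sketch of the RC4 stream cipher (for illustration only — RC4 is cryptographically broken and should never be used in new systems):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt data with RC4 (the operation is symmetric)."""
    # Key-scheduling algorithm (KSA): permute the state S under the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR data with keystream.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

With the well-known published test vector, key `b"Key"` and plaintext `b"Plaintext"` produce the ciphertext `BBF316E8D940AF0AD3`; that something this small could remain a trade secret for seven years was the surprise, not that it eventually leaked.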

Furthermore, keeping algorithms and protocols unpublished means that the ability to review their security is limited to a few people. Allowing everyone to review them means that flaws and weaknesses can be identified and fixed sooner.

In practice

Operators and developers or vendors of systems that rely on security by obscurity often keep the fact that their system is broken secret, to avoid destroying confidence in their service or product and thus its marketability. In some cases this may amount to fraudulent misrepresentation of the product's security, though the law has been applied less than vigorously here, in part because vendors' terms of use, imposed as part of licensing contracts, have (more or less successfully) disclaimed obligations under the statutes and common law of many jurisdictions that require fitness for use or similar quality standards.

Often, such designers, vendors, or their executives genuinely believe they have ensured security by keeping the design of the system secret. Those who approach security this way rarely have enough perspective to realise they are inviting trouble, sometimes very serious trouble. Self-delusion and ignorance are difficult problems in general, and their consequences are almost universally unfortunate.

This security practice sets users up for trouble when the internals of the software they use are accidentally or deliberately disclosed, as has occurred in several cases:

  • Diebold — voting machine software; apparently accidental publication on an official Web site
  • Microsoft — Windows and other software; apparently deliberate penetration of a corporate development network
  • RSA Data Security (RSADSI) — cryptographic algorithm software; probably deliberate publication of alleged RC4 source on Usenet
  • Cisco — router operating system software; accidental exposure on a corporate network
  • Many other examples

When 'secure because obscure' software is widely used, there is potential for widespread trouble. For instance, assorted vulnerabilities in the various versions of the Windows operating system and its mandatory components, such as the Internet Explorer Web browser and the Outlook and Outlook Express mail applications, have caused worldwide problems when viruses, Trojan horses, worms, and so on exploited them.

Software deliberately released as open source cannot, either in theory or in practice, be said to rely on security through obscurity, since its design is publicly available. It can nevertheless suffer security debacles (the Morris worm of 1988, for example, spread through vulnerabilities that were obscure, if plainly visible to anyone who bothered to look), though the frequency and severity of the consequences have been rather lower than for proprietary (i.e., secret) software. This divergence has been attributed to the theory that "many eyes make all bugs shallow".

Historical note

There are conflicting stories about the origin of this term. It has been claimed that it was first used in the Usenet newsgroup comp.sys.apollo during a campaign to get HP/Apollo to fix security problems in its Unix-clone Aegis/DomainOS (they did not change a thing). ITS fans, on the other hand, say it was coined years earlier in opposition to the incredibly paranoid Multics people down the hall, for whom security was far more of an issue than on ITS. Within the ITS culture, the term referred to (1) the fact that by the time a tourist figured out how to make trouble, he had generally got over the urge to do so, because he felt part of the community; and (2), self-mockingly, the poor coverage of the documentation and the obscurity of many commands. One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as $$^D. Typing alt alt ^D set a flag that would prevent patching the system even if the user later got it right.

