Re: Intro and questions (fwd)

Gene Spafford (spaf@cs.purdue.edu)
Thu, 23 Mar 1995 11:21:23 -0500

> 
> >>>>> "ddrew" == ddrew  <ddrew@mci.net> writes:
>   >> From: Rens Troost <rens@imsi.com> " Well-designed systems should
>   >> be secure even if the details of those systems are public
>   >> knowledge."
> 
>   ddrew> You show me a "Well-designed system that is secure even if
>   ddrew> the details are public knowledge" and I'll show you a system
>   ddrew> with no power cord.
> 
> :)
> 
> Kerberos is an example of such a system.

It comes back to how you define "secure," and that is an issue of
policies.

For instance, Kerberos V4 is susceptible to a dictionary attack
against passwords (in fact, it is less resistant than standard Unix
boxes because it usually uses straight DES).  That is not "secure".

If you understand how Kerberos works and can subvert multi-user
systems within a domain, you can violate security; Kerberos was
designed only for single-user workstations.

However, if you have strong password policies and limit use to
single-user workstations (plus some other constraints), Kerberos may
be a help.
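A "strong password policy" can be enforced mechanically.  The
thresholds below are hypothetical (the post does not specify any), but
a minimal checker might look like:

```python
# Stand-in for a real cracking wordlist (hypothetical contents).
COMMON_WORDS = {"password", "letmein", "qwerty", "secret"}

def acceptable_password(pw: str) -> bool:
    # Hypothetical policy: at least 8 characters, not a dictionary
    # word, and at least one non-alphabetic character so the password
    # falls outside simple wordlist attacks.
    return (len(pw) >= 8
            and pw.lower() not in COMMON_WORDS
            and any(not c.isalpha() for c in pw))
```

Such a filter directly raises the cost of the dictionary attack
described above, since rejected passwords are exactly the ones a
wordlist would find first.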

Back to the original thread about public knowledge, however: it is
probably better to say that systems are more trustworthy (NOT
"secure") if they depend on mechanisms that would not be
significantly weakened if their details were disclosed.

For instance, using IDEA for end-to-end encryption could add to the
trust of a system.  Disclosure of the mechanism (e.g., the IDEA
algorithm itself) provides no useful information to an attacker
(assuming that IDEA does not have an as-yet-unpublished flaw).
However, publication of the "details" (e.g., passwords,
initialization methods, etc.) would weaken trust in the mechanism.
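This is Kerckhoffs's principle in miniature.  IDEA itself is not in
the Python standard library, so the toy stream cipher below (SHA-256
run in counter mode, XORed with the plaintext; not IDEA, and not fit
for real use) illustrates the same division: every line of the
algorithm may be published, and only the key must stay secret.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Entirely public construction: hash (key || counter) repeatedly.
    # Nothing in this function is secret except the key argument.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

ciphertext = xor_crypt(b"the-secret-key", b"attack at dawn")
# Knowing the full algorithm (this whole file) does not decrypt:
assert xor_crypt(b"a-guessed-key", ciphertext) != b"attack at dawn"
# Knowing the key does:
assert xor_crypt(b"the-secret-key", ciphertext) == b"attack at dawn"
```

Publishing `keystream` and `xor_crypt` costs nothing; publishing the
key (a "detail" in the sense above) destroys the mechanism entirely.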

Pedantically,
--spaf