Archive of UserLand's first discussion group, started October 5, 1998.

Re: No Doubt

Author: Joshua Allen
Posted: 8/18/2000; 12:23:59 AM
Topic: scriptingNews outline for 8/15/2000
Msg #: 19814 (In response to 19782)
Prev/Next: 19813 / 19815

I expected such sophism when I ventured into this territory. I'm impressed by your wordplay and your ability to call upon the authority of various postmodern legends. But no amount of word-witching or calling upon the dead spirits of meme-peddling authors can change the fact that:

for (int i = 0; i < 70 * 7; i++) { const char *strPaulSays = "I repent"; }

has naught to do with security. Of course security requires a holistic approach, and of course security touches all parts of a system! That is why security is so important. But to say that this means that security is improved by *all* source code being free is a ridiculous syncretic leap and a grave error.

"The best way to do that is to get as many eyes on the code as you can."

I disagree. At one point in my life, I did "white hat" hacking -- firewall penetration analysis and other security audits -- and I happen to believe that experience can inform this discussion. Security flaws take many forms. Certain classes of security problems, such as buffer overruns, are fairly trivial to exploit. Any script kiddie with a moderate IQ can pick up Dildog's tutorial and be writing exploits in a few hours. On the other hand, there are not so many intellects that can figure out something like differential cryptanalysis, and most of them have reputations (as I pointed out earlier) and can benefit more from the illumination of the attack than from the exploitation. Complete security is a pipe dream, and the goal of any computer security measure is to make the (risk + effort)/reward ratio prohibitive. This is accepted as a fundamental truth of computer security, no matter how you choose to interpret some book by Hofstadter. Read anything by Winn Schwartau.
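
To make "fairly trivial to exploit" concrete, here is a contrived sketch of the classic mistake -- my own illustration, not lifted from any real product:

    /* Contrived illustration of a textbook buffer overrun. */
    #include <string.h>

    void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);  /* no length check: a name longer than 15 chars
                               overwrites adjacent stack memory, including the
                               return address -- that is the whole exploit */
    }

Spotting that takes no genius, which is exactly why this class of hole gets exploited by anyone who can follow a tutorial.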

Now, opening crypto code to "everyone" does not really mean everyone. Algorithms in use today are comprehensible by only a very small percentage of the population. And an even smaller fraction of the bell curve are equipped to evaluate the algorithms' worthiness. Right there, you have weeded out many of the script kiddies that are dumb enough to use any exploit they discover.

I have also been using Linux since '93, and you need only ask any denizen of EFNet #hack to verify that there have always been security holes in Linux that were kept jealously close in various hacker circles, sometimes for years before being made public. The same goes for Windows, of course. The point is, by the time you see a warning on CIAC or CERT about an exploit, the hackers have known about it for a *long* time; it's a fact of life, and it has nothing to do with source code being available or not. I mean, how many versions of Sendmail have we been through in the last 8 years, and we are *still* getting buffer overruns? Where are your magical eyeballs? So the only way your statement could seem slightly reasonable is if you naively believed that all (or most) eyeballs viewing your source code have good intentions. I maintain that it's only in the really challenging areas that you can make this bet, because the people who work there have more riding on doing it right.

More reason: look at the NSA -- do you feel that Rivest, Shamir, or Adleman have contributed at all to improving the NSA's capability to build secure systems? I doubt it. When the NSA suggested particular S-box values for DES, it took your "as many eyeballs as possible" 20 years to figure out why. I'm confident that the public domain will always be many years behind the NSA, and the NSA will never feel the need to open up the source code of all of its systems to improve their security.
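
For anyone who hasn't stared at one, an S-box is nothing more than a fixed substitution table. The sketch below uses made-up values (not the real DES tables) just to show how little the code itself tells you:

    /* Illustration only: a toy 4-bit S-box with invented values.
       The entire security argument lives in WHY these particular
       numbers were chosen, which reading the code does not reveal. */
    static const unsigned char sbox[16] = {
        0x6, 0xB, 0x5, 0x4, 0x2, 0xE, 0x7, 0xA,
        0x9, 0xD, 0xF, 0xC, 0x3, 0x1, 0x0, 0x8
    };

    unsigned char substitute(unsigned char nibble)
    {
        return sbox[nibble & 0x0F];
    }

Twenty years of public scrutiny of exactly such tables is what it took to rediscover the NSA's reasoning.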

Next reason, building on the idea of S-boxes: there are many parts of a security system that can be used by an attacker to mount an attack but in no way benefit someone analyzing the overall security. For example, active monitoring systems might have certain time delays or attack-level thresholds before triggering certain conditions. Knowing exactly what those intervals and conditions are can give a hacker a considerable edge. Keeping those parameters secret decreases the likelihood that a hacker will know this information, and drives up the (risk + effort)/reward ratio. Same with S-boxes or salts. Nobody would claim that it's wise to give out your salt values freely. In fact, giving out the algorithm that you use to generate or select salts is foolish. Sure, there is a possibility that a hacker can pump enough alcohol into your ex-programmer to figure it out, but the point is, getting this information helps the hacker and *should* be difficult. And if Adleman tells me not to give out information about my salt values, I sure don't believe you when you tell me that it's *good* to hand out all my source.
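
A minimal sketch of what I mean, using the traditional Unix crypt() interface; pick_salt() is a hypothetical stand-in for whatever private policy you use, and that policy is precisely the part worth keeping to yourself:

    /* Sketch of salted password hashing with the classic Unix crypt().
       (Typically needs -lcrypt at link time; on some systems crypt()
       is declared in <crypt.h> instead.) */
    #define _XOPEN_SOURCE 500
    #include <unistd.h>
    #include <string.h>

    /* Hypothetical: fills out[] with a null-terminated two-character salt.
       HOW it picks the salt is the secret-parameter part of the system. */
    extern void pick_salt(char out[3]);

    void store_password(const char *plaintext, char *hash_out, size_t n)
    {
        char salt[3];
        pick_salt(salt);
        strncpy(hash_out, crypt(plaintext, salt), n - 1);
        hash_out[n - 1] = '\0';
    }

Publishing that code costs you nothing; publishing what pick_salt() actually does is a gift to the attacker.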

So now we get even deeper into semantics. You decide to try the tack that "things that shouldn't be shared are parameters and everything else is code -- parameters are kept safe and code is open and free." If you define code loosely enough to say that certain security-sensitive operations are not "code," then I can loosen it further to cover anything that I feel is a business asset to be protected. Just look at how far Bezos and crew are stretching the definition of what is patentable. Hell, I could design my own programming language where everything I wanted to keep private was a builtin function (native machine code) and the only functions in the language were Logon() and similar. It would be totally open, though, because you could look through the machine opcodes all you want. So this is why the semantics game is a double-edged sword. Sheesh, even Visual Basic compiles down to an intermediate format that gets compiled and assembled by the same engine as Visual C++. Or with our VS.NET stuff, every language compiles down to IL, which gets JITed into machine code later. So which "source" should I "open"? Should I just give everyone IL? I doubt you want to read that. Or maybe assembler? Or maybe your brain is considering the idea that "everything should be shared in its original language of development." That is a novel idea. But consider some of the RAD languages out there: the higher up the abstraction chain you get, the harder it is for security analysis of the source to be useful. The same goes for when you get too close to the metal. My point, of course, is that security is such a holistic discipline, and source code in any particular language is such a tiny part of the whole, that it's a red herring, even if the idea of a benevolent community of watchers weren't baseless.
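
To spell out that reductio in a few lines (names invented for the illustration), the entire "open" program can be a shell around one opaque, precompiled builtin:

    /* The "open source" part of my hypothetical language, in C terms.
       Logon() ships only as native machine code; no source is provided. */
    extern int Logon(const char *user, const char *password);

    int main(void)
    {
        /* Everything that actually matters happens inside Logon(). */
        return Logon("guest", "guest") ? 0 : 1;
    }

Technically every opcode is there for you to inspect, yet nothing useful has been "opened" at all.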

Now to add to my assertion that full source is a red herring, consider the example that another poster offered to combat my original post -- the Word macro virus. I personally remember the day, 6 years ago, that Bontchev posted his warning about the potential for a Word macro virus across comp.security.virus. In fact, a 13-year-old kid from Australia had already figured this out and promptly sent Bontchev's message back to the list as a Word document, alive. Now we have a talentless copycat in the Philippines doing it for the nth time. The interesting thing is that neither Bontchev nor the hackers needed *any* access to the Word source code to know about this security hole. And furthermore, knowing about the security problem and publicly announcing it didn't magically fix it, as you seem so desperate to believe. Next, I suppose, you will claim, "but if the source code was available, people would have just recompiled Word to fix it." Once again, though, you find that source code is a red herring; the fix was made without access to the source code as well, and required only some simple steps from the user, not a full-scale recompile. If we can't even get people to download and apply a settings patch, you are in fantasy land if you think people will recompile their applications whenever they have a problem. I will not attempt to shift blame for the fact that security patches were not applied or that a better solution was not deployed. But I think it is fantastically naive to think that access to Word source code would have made it possible for the alleged legions of community eyes to fix things. In fact, I can think of many, many things that the legions of community eyes *could* do to make the desktop safer from macro viruses, and none of them have anything to do with source code. On the other hand, I cannot think of any way in which access to source code would help a good samaritan prevent such happenings.

So to sum up my long-winded retort:

- Security is holistic; source code in any one language is such a tiny part of the whole that "open everything" is a red herring.
- The "many eyeballs" are not all benevolent, and the benevolent ones rarely find the hard flaws first.
- Keeping parameters like S-boxes, salts, and monitoring thresholds secret is what keeps the attacker's (risk + effort)/reward ratio prohibitive.
- The Word macro virus was discovered, exploited, announced, and mitigated without anyone needing the source.

Thanks, Joshua



