
(2010-06-14) Good, evil or neutral?

A few years ago, I was invited to a department get-together by the company I was working for. We would go out and bowl while getting some free food and beverages. When I got to the place, I had an idea: let’s see if I can walk right in without ever mentioning the name of the department, my identity, the name of the company or anyone else. That way I would act like a freeloader just trying to get a free meal. So I did. There were a lot of people inside the building, and I was never challenged to prove my identity. Had they done so, I would have had no problem proving I was actually invited. They did not check me off a list or anything. I just strolled in.

This is pretty much the mind-set of a security-conscious person like me… and of course it is also the mind-set of a malicious hacker. The difference is the color of the hat. If you’ve seen enough western movies, you’ll probably recognize the metaphor: the good guys wear white hats and the bad guys look good (or is it evil?) in black. The metaphor is very common in the security field, and white hats and black hats are commonly used terms for security professionals and malicious hackers respectively. Grey hats are people who move around in a legal and moral “grey zone”. And yes, I use the term hacker for a person with malicious intent, even though I know this is not the original meaning of the word “hacker”.

Many of the tools that can be used to check for security vulnerabilities can be, and are, used to break into systems as well. So if the color of the hat is the big difference between good and bad, why is the technology not seen as neutral? It depends on who you ask. If you ask me, I say that technology is indeed neutral, and if the tools are neutral, the information about vulnerabilities should be free to share as well, which implies that you must allow full disclosure. If I have found a vulnerability in a program, how should I handle it? Assuming that I want to be a good person and help others be secure, should I just tell the company, organization or creator of the program, or should I go public? A quite common solution is to send all the information about the vulnerability to the owner of the program and inform them that they have 30 days to create a patch. After that, you publish the information about the vulnerability to the world. This compromise has a big problem: what if someone else finds the same vulnerability during those 30 days and decides to create an exploit? Then we have a zero-day exploit on our hands.

If you were to go public with the vulnerability directly, everyone would get an equal opportunity to protect against it or to exploit it to attack others. It would mean that intrusion prevention systems, anti-virus software and other defensive measures could be updated to stop the attack dead in its tracks. I’ll leave it to you to decide whether you prefer full disclosure right away or whether the owner should get a chance to fix the code first.

In conclusion, this is how I see it: white hat or black hat, you will think the same way and use the same kind of tools and methods. You will both benefit from the same security information and, most importantly, you will both be responsible for what you do. The technology is never to blame.

Bruce Schneier is for Full Disclosure:
http://www.schneier.com/essay-146.html

The opposite of “Full disclosure” is “Security by Obscurity”:
http://en.wikipedia.org/wiki/Security_through_obscurity

Jay Beale defends “Security by Obscurity” to a certain extent:
http://web.archive.org/web/20070202151534/http://www.bastille-linux.org/jay/obscurity-revisited.html

Posted: 2010-06-14 by Erik Zalitis
Changed: 2010-06-19 by Erik Zalitis
