On Wed, Mar 27, 2013 at 08:19:12PM -0400, Rich Pieri wrote:
> --On Wednesday, March 27, 2013 6:42 PM -0400 Tom Metro
> <tmetro+blu at gmail.com> wrote:
>
> >We're getting a bit wrapped up in dogma. This isn't a black-and-white
> >issue. If you take a broad enough definition of "obscurity" it could
> >be taken to mean your knowledge of a password - it's obscure, you
> >know it, and yet it's guessable, just like the oddball port your
> >service is running on.
>
> Passwords aren't obscured things. They're supposed to be secrets. A
> password that is not a secret but merely obscured is a password that
> has been compromised.

You're being pointlessly pedantic... the words "secret" and "obscure"
are near-synonyms, and Tom already stipulated a broad definition of
"obscure".

> >There's really no reason why a system administrator should reject an
> >obscurity layer, if their security fundamentals are already good, as
> >long as in their judgment the obscurity doesn't impact their users.
> >(For
>
> If the fundamentals are good then the obscurity layer adds no
> tangible security to the system while at the same time making the
> system more inconvenient for users.

Not true. As I've already said, it slows down your attacker. They
can't immediately attack you with a known vulnerability; they first
need to expend some effort to determine where you're vulnerable.
Depending on what you're running and what they're targeting, that
could take a couple of minutes, or it could take a couple of days...

But it can do more than that. Take the case of a zero-day exploit
against a public service your site MUST provide, let's say Apache.
Your security is as good as it can reasonably be, but your system is
vulnerable and there's no possible way you can know; the information
isn't available to anyone but the attacker. Then he writes an exploit
and posts it on some hacker site, where it's immediately picked up by
some script kiddie and run against your site. Historically, a number
of such exploits have actually checked the response from your server
to see if it's vulnerable, and moved on if they didn't recognize the
server as the target. Here, a little security through obscurity --
shutting off ServerSignature and changing the Server: response header
via a feature of mod_security -- could very well save your site,
where no other measure you're likely to be able to implement would.
And the impact on users of changing your welcome banner? Zero.

Will doing this always save you? Absolutely not. If the script
doesn't bother to check the response but just assumes you're
vulnerable and always executes the attack, you lose. If a
knowledgeable attacker finds the exploit and targets you directly,
chances are you lose... but maybe not, depending on how serious he
is. Still, there's a significant probability that making such a
change will save you from a number of attacks, and the cost of making
the change at implementation time is very nearly zero. So why
WOULDN'T you do it?

> Obscurity makes it harder to manage the system. I can't use standard
> tools to monitor my services if they're not running on standard
> ports and protocols.

Maybe; probably not. A reasonable tool will let you specify where the
service is running.

> I can't automatically install system and security updates with
> cron-apt;

In the example I gave, this is not prevented: cron-apt does not muck
with your server configuration. If you modify your error documents,
it doesn't muck with those either. This is unlikely ever to be an
issue unless you need to modify code to do it.
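Going back to the banner example for a moment, the Apache-side
changes amount to roughly this. A sketch only: the decoy banner
string is made up, and the last directive assumes ModSecurity 2.x is
loaded:

    # httpd.conf (or a conf.d snippet)
    # Drop the version/OS footer from server-generated error pages:
    ServerSignature Off
    # SecServerSignature needs the full banner buffer available to
    # overwrite, so ServerTokens stays Full here:
    ServerTokens Full
    # Replace the Server: response header with a decoy
    # (any string you like; this one is invented):
    SecServerSignature "Acme-Web/1.0"

If you're not running mod_security, "ServerTokens Prod" at least
trims the banner down to just "Apache", though that still admits
what you're running.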
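Similarly for the monitoring point: moving a service to an oddball
port is usually a one-line config change, and the standard checks
take the port as a parameter. A sketch, assuming sshd moved to a
made-up port 2222 and the stock monitoring-plugins checks:

    # /etc/ssh/sshd_config -- one line to move the daemon:
    Port 2222

    # Nagios-style checks just need to be told where to look:
    check_ssh -p 2222 myhost.example.com
    check_http -H myhost.example.com -p 8080   # if httpd moved too
                                               # (8080 also made up)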
> I have to validate every non-standard change.

In practice, not likely... in many such cases it's a trivial config
change.

> This makes my job that much more difficult. It is prone to mistakes
> and unforeseen consequences, and as a direct result makes the whole
> system that much less secure and reliable.

In some cases perhaps, but in most security-conscious apps that have
facilities for this sort of thing, absolutely not.

> >But the real value in obscurity measures is cutting down on noise,
> >which doesn't directly impact security, but can indirectly help it
> >by making real attacks far more visible, avoiding alarm fatigue.
> >You're merely filtering out the nuisance.
>
> I want that "noise" because it isn't noise. It's useful information.

Sometimes. With a host-based IDS, you want a little noise, so you
know the tools are doing their job. For a web server, which receives
hundreds of thousands of connections, a significant percentage of
which are virtually guaranteed to be some sort of attack traffic, it
really is just noise. If you can get rid of it (say, with a quick
filter like the one after my sig), that seems like a good thing to
me.

> If I don't have that "noise" then I can't show the fiscal
> office anything tangible to justify the expense of intrusion
> detection and prevention systems. As a matter of fact, yes, I have
> once been asked to provide such logs for such purposes.

If you need the logs, you can always turn the obscurity measures
off... But I would just say that I did that analysis before they were
implemented, and now that they are, the data is no longer collected.
"I can get it if you want, but we'll be more vulnerable to attack
while I collect the data" is usually enough to get them off your
back, I've found.

-- 
Derek D. Martin
http://www.pizzashack.org/
GPG Key ID: 0xDFBEAD02
-=-=-=-=-
This message is posted from an invalid address. Replying to it will
result in undeliverable mail due to spam prevention. Sorry for the
inconvenience.
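P.S. The filter I mentioned can be as crude as a grep. The patterns
and the log path below are hypothetical examples; tune them against
what actually shows up in your own access log:

    # Hide well-known scanner/worm probes, keep the rest for review.
    # (Log path varies by distro; this is the Debian default.)
    grep -v -E 'w00tw00t|phpMyAdmin|cmd\.exe' \
        /var/log/apache2/access.log > access_interesting.log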