
BLU Discuss list archive



intrusion detection/prevention



Dan Ritter wrote:
> Tom Metro wrote:
>> Most file system change detection tools work on a model where they set a 
>> baseline and then once they detect a deviation from that baseline, they 
>> email you perpetually until that baseline gets reset.
> 
> It's the only really useful way. There are two tricks:
> 
> - make it easy to reset the baseline
> 	- a single word alias is best

What is the advantage of having that manual intervention? If you're 
busy and don't get to manually reset the baseline before the next 
report, the deltas accumulate, and after a few days the reports become 
a useless, muddled mess.

The result is that changes made on day 2, 3, etc. become much less 
noticeable, which I consider a more serious threat than the unlikely 
prospect of an attacker breaching your system and resetting the 
baseline.
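
For what it's worth, the automatic reset is only a couple of lines in a 
nightly cron job. A minimal sketch, assuming a stock Integrit install 
(the config path, database names, and mail recipient are examples, and 
the Debian package's cron script does roughly this already):

  #!/bin/sh
  # Report the deltas, then make today's state the new baseline so the
  # next report only covers the next day's changes.
  CONF=/etc/integrit/integrit.conf                  # example path
  integrit -c "$CONF" -C -u 2>&1 | mail -s "integrit: $(hostname)" root
  # -C checks the tree against the known database; -u writes a fresh
  # current database in the same pass (see integrit(1) for the flags).
  cp /var/lib/integrit/current.cdb /var/lib/integrit/known.cdb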

(If you've automated resetting the baseline to the point where it's a 
single-word alias, then you haven't really gained any security over a 
system that resets the baseline automatically after changes are 
reported. Once you've eliminated a complex passphrase that gets 
hand-typed, anyone who has gained root can circumvent the system. Even 
with the passphrase, I tend to think that as long as the database is 
hosted on the system itself, the protection is mostly an illusion. If 
you want real security, you need to bypass the target system's kernel 
and scan the drive directly from another host or a live CD.)
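
Concretely, that looks something like the sketch below, run from 
trusted media with the suspect disk attached; the device name, mount 
point, and config path are placeholders, and the known-good database 
lives on the rescue media rather than on the suspect disk:

  # From a live CD or another host, mount the suspect filesystem read-only:
  mount -o ro /dev/sdb1 /mnt/suspect      # device/mount point are examples
  # Check it with a config whose root= points at /mnt/suspect and whose
  # known database was copied off the machine while it was still trusted:
  integrit -c /media/rescue/integrit-offline.conf -C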


> - map exactly what parts of your filesystems you can ignore
> 	- in particular, you need to have the monitor
> 	  automatically ignore logs, temp files, pidfiles, mail
> 	  spools and user home directories

That's a requirement for any file system change detection tool; 
otherwise you end up with a lot of noise. Integrit has a pretty simple 
config file syntax, with rules to skip specified paths entirely or 
partially (for example, ignoring modification time or content changes 
while still monitoring other attributes, like the inode number). The 
default rules in the Debian package do a decent job, though they get 
tripped up by the /dev changes triggered by udev.
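
To give a flavor of the syntax, here's an illustrative fragment (not 
the Debian defaults; check integrit's docs or the shipped example 
config for the per-attribute switch letters):

  root=/
  known=/var/lib/integrit/known.cdb
  current=/var/lib/integrit/current.cdb

  # A leading '!' skips a tree entirely:
  !/proc
  !/sys
  !/tmp
  !/var/log
  !/home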


>> ...they're actually more beneficial for routine system
>> administration by providing a record of what system files changed
>> when.
> 
> Though there are three better tools:
> 
> - keep your configurations in a version control system
> - and/or keep snapshots of your configurations (or whole
>   filesystems)
> - look in your OS package installation log (/var/log/dpkg, for
>   instance)

I agree with all of that, but I find that the file change reports 
provide an additional, often more convenient, way of getting at the 
information. Package installation logs often won't tell you when a 
specific file changed; a search of the mail folder containing the 
Integrit reports will. A VCS also won't necessarily tell you when a 
package installer has overwritten your local config, only that it is 
different. I also don't "check in" configs that have no local 
modifications.
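
For example, to answer "when did sshd_config change, and did a package 
do it?", the closest the package tools get is something like this (the 
file is just an example, and debsums is a separate package):

  dpkg -S /etc/ssh/sshd_config            # which package owns the file
  grep openssh-server /var/log/dpkg.log   # when that package was touched
  debsums -ce                             # which conffiles differ, but not when
  # versus a one-liner against the mail folder of Integrit reports
  # (folder path is an example):
  grep -rl /etc/ssh/sshd_config ~/Mail/integrit/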

  -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/





