
BLU Discuss list archive



Wireless ethernet?



--------

Derek Martin writes:
| On Mon, Aug 13, 2001 at 09:12:18PM -0400, Ron Peterson wrote:
| > On Mon, 13 Aug 2001, Derek D. Martin wrote:
| > > It seems to me that perimeter security -- limiting the traffic which
| > > can enter your network from unknown and untrusted parties on the
| > > outside to only that which is absolutely essential for your business
| > > or personal needs -- is an essential part of securing any site.
| > > Firewalls are a proven tool to accomplish this goal.  I'm unable to
| > > imagine a reason why someone would not want to have one, given today's
| > > network landscape and the (lack of) ethics rampant amongst a certain
| > > subset of the people who hang out there.
| >
| > In an academic environment, it's difficult to advocate running a firewall,
| > because it involves making a value judgement about what is and isn't
| > acceptable.
|
| ...  Most universities state in various documents that
| the computing resources of the university are for the use of the
| students and faculty of the university.  Certainly mine did.

Perhaps, but there's a straightforward argument why this is a bad
policy for an academic institution, and also for a great many
computer software companies.

I've worked on projects at a number of companies that have strong
firewalls protecting their internal networks from the outside world.
At all of them, I've had a bit of fun with the following argument:

Suppose you are considering buying some security software from two
companies.  The sales rep from company A uses this approach:
   "We block Internet connections from within our network, and we've
   never had a break-in."

Meanwhile, the salesman from company B says:
   "Our network is open, and we invite hackers to attack us.  We had
   a few break-ins, which we studied.  Our product includes code
   that blocks them, and we have had no more break-ins in the past
   three years, although our logs show dozens of attempts each day.
   If a successful break-in occurs, you can be assured that we will
   quickly supply our customers with upgrades that fix the problem."

Which company's products would you buy?  It's a "no-brainer", right?

Academic institutions have a similar motive.  MIT, for example, has
a clear policy of encouraging "unprotected" Internet connections.
They want their graduates to be able to say "I have a lot of
knowledge of network security problems.  It's not just academic
knowledge; I have hands-on experience finding and solving such
problems."  They want their people to be able to experiment with
writing and using security software, and to learn how to fix
problems on a lab-by-lab basis.  An institutional firewall would
effectively block much of this sort of learning and development.

If a university installs strict firewalls to protect its students,
then none of its graduates will be able to make such a claim.  Well,
OK, lots of them probably would make such a claim.  But they would
risk being caught when the interviewer does a quick check and finds
the firewall.

These are at least two examples where a strict firewall policy is a
serious mistake in the long run, no matter how tempting it may be on
a day-to-day basis.

This machine, trillian.mit.edu, is owned by the EE department, and
has been attacked on numerous occasions.  No attack has caused
serious problems, and all have been blocked in a day or so.  This is
good resume material for the people who manage the machines in the
EE lab, and they'd be fools to try to block such attacks at a higher
level.

I have a web site here with a number of CGI scripts.  A year or so
back, several of the newer search bots learned how to call the CGI
scripts, and brought the machine to its knees by hitting it with CGI
requests from dozens of machines at once for hours on end.  I
quickly added a "blacklist" capability to the scripts, which fixed
the immediate problem, and sent messages to the EE admins.  After a
brief discussion, they gave me write permission on the robots.txt
file, which fixed the problem in general.  (This was a case of
runaway search bots, not a DoS attack.)  Now I can make the
legitimate claim that I have experience using robots.txt to control
the impact of search bots.  If MIT had been using any sort of
effective firewall or load-control software, I would never have had
the opportunity to do this.  I've certainly never been given such an
opportunity at any company that I've ever worked for, since I was
"unqualified", and I probably could never have gotten such an
opportunity outside MIT.
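To give a flavor of the "blacklist" capability: something like the
sketch below, run at the top of each CGI script, is enough to turn
away requests from misbehaving addresses.  (The original scripts
aren't shown here; this is a minimal sketch of the idea, and the
blacklist file path is hypothetical.)

    #!/usr/bin/env python3
    # Sketch: refuse CGI requests from addresses listed in a local
    # blacklist file, before doing any real work.
    import os
    import sys

    BLACKLIST_FILE = "/var/www/cgi-blacklist"   # hypothetical path

    def client_blacklisted():
        addr = os.environ.get("REMOTE_ADDR", "")
        try:
            with open(BLACKLIST_FILE) as f:
                banned = f.read().split()
        except OSError:
            banned = []    # no blacklist file: allow everyone
        return addr in banned

    if client_blacklisted():
        # Emit a minimal CGI response and stop.
        print("Status: 403 Forbidden")
        print("Content-Type: text/plain")
        print()
        print("Access denied.")
        sys.exit(0)

The robots.txt fix is even simpler.  Assuming the scripts live under
the usual /cgi-bin/ path (an assumption on my part; the real layout
may differ), two lines tell every well-behaved bot to stay out:

    # robots.txt at the site root; the path is illustrative.
    User-agent: *
    Disallow: /cgi-bin/

Of course, robots.txt only works against bots that honor it, which
is exactly why the per-script blacklist is still worth having.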

Now that I have RCN cable at home, I can actually do a bit of
learning about security issues on a machine that I own.  If RCN (or
whatever monopoly gobbles them up in the future) decides to block
attacks to "protect" us, this will mean the end of learning on the
part of all their users.  The result will be that security issues
are relegated to a small priesthood at the cable company.  When they
don't do their job right, we'll have no defense, because we will
have been locked out of the learning process, with no way to test
our own systems for vulnerabilities.
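Even something as small as the sketch below, which checks which TCP
ports answer on a machine, is a useful first exercise in that kind
of self-testing.  (The host and port list are arbitrary samples;
only point it at systems you own.)

    #!/usr/bin/env python3
    # Sketch: see which of a few well-known TCP ports answer on a
    # host you own.  A closed or filtered port simply fails to
    # connect within the timeout.
    import socket

    HOST = "127.0.0.1"                        # your own machine
    PORTS = [21, 22, 23, 25, 80, 110, 143]    # arbitrary sample

    for port in PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1.0)
        try:
            s.connect((HOST, port))
        except OSError:
            print("%5d  closed/filtered" % port)
        else:
            print("%5d  open" % port)
        finally:
            s.close()

Each port that answers is a service an attacker can also reach; if
an upstream firewall silently filters them, you can't even learn
that much about your own machine.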
