Boston Linux & UNIX was originally founded in 1994 as part of The Boston Computer Society. We meet on the third Wednesday of each month, online, via Jitsi Meet.

BLU Discuss list archive



[Discuss] Is open source more secure at the current level of AI?



I see two differences here between open and closed source:

1. Open source means the bad guys can read the source; that is a real 
risk for open source.

2. Closed source means there can be a lot of embarrassing crap in there 
that would have to be cleaned up before anyone would have the guts to 
make it public; that is a way in which open source is more secure.

----------------

The AI craze is a crazy, unsustainable bubble. These things are 
dangerous; most people should have nothing to do with them. These 
chatbots are frequently idiots, and they cannot think.

*BUT*, they can do something that is frequently substitutable for 
thinking. Do not dismiss them on the assumption that they can only find 
bugs they have already seen. There is more going on in all those matrix 
multiplications than it seems their architecture would make possible. 
There is real power here.


I expect a lot of bugs will be found by these things, and we are 
approaching a situation that will be like shooting fish in a barrel. 
Maintainers should be using AI to try to find and fix their bugs before 
the bad guys get around to exploiting them.
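The tooling for this kind of bug hunting doesn't have to be exotic, either. As a minimal sketch of the idea (no AI involved, just the Python standard library's `ast` module, and a tiny, illustrative set of "risky" calls I picked as an assumption), here is a scanner that flags uses of `eval` and `exec` in source code, the sort of pattern any reviewer, human or machine, would catch:

```python
import ast

# Illustrative set of calls a reviewer would flag on sight;
# a real checker would cover far more patterns than this.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line_number, name) for each call to a known-risky builtin."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(sample))  # [(1, 'eval')]
```

Running this in CI gives maintainers the same head start the email argues for: the easy fish get shot before anyone hostile wanders by with a better gun.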


And we should design safer software in the first place. 
Less attack surface, smaller blast radius. But designing things before 
building them is way out of style; we have "design patterns" instead. And 
code reviews to verify that coding standards are met and everything has 
pretty formatting. AIs love that.
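"Less attack surface" can be as simple as accepting only the tiny grammar a feature actually needs instead of handing untrusted input to a general-purpose evaluator. A minimal sketch (the `parse_port` function and its rules are my own illustration, not anything from the thread):

```python
# Smaller attack surface: rather than eval()ing a user-supplied
# expression (which exposes the whole interpreter), accept only
# the narrow input the feature needs -- here, a TCP port number.

def parse_port(text: str) -> int:
    """Accept a decimal TCP port number and nothing else."""
    if not text.isdigit():           # rejects '-', '+', spaces, letters
        raise ValueError("port must be decimal digits")
    port = int(text)
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return port

print(parse_port("8080"))  # 8080
```

The blast radius shrinks the same way: whatever goes wrong in a validator this small, it cannot do much more than raise an exception.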


-kb







