Boston Linux & UNIX was originally founded in 1994 as part of The Boston Computer Society. We meet on the third Wednesday of each month, online, via Jitsi Meet.

BLU Discuss list archive



[Discuss] Is open source more secure at the current level of AI?



In the current state of the art, AI agents such as Claude are good at 
finding exploitable bugs in code.

That affects open-source systems differently from closed-source systems, and 
arguably it creates more risk for open source.

I think it is, or soon will be, true that open source's main security 
vulnerability is the ability of AI to find exploitable bugs in source code 
that is, by definition, publicly readable.

There are multiple risks:

(1) Open-source software might be released without any AI systems auditing 
the latest code for exploitable flaws.

(2) Open-source software might be released after being audited only by 
relatively weak AI, where adversaries have access to more advanced AI that 
can detect flaws inaccessible to the weak AI.

(3) An adversary might introduce code into an open-source project that has 
been designed, with AI assistance, to contain flaws that most humans, as 
well as less advanced AIs, are unable to see.

I suppose we are all biased in the pro-FOSS direction.  But these risks 
should be faced.  Are open-source projects doing enough against these risks?  
Are there open-source projects that are so benighted that they don't even 
guard against risk (1)?
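Risk (2) is essentially a set-difference argument: the flaws that matter are
the ones the adversary's stronger model can find minus the ones the project's
weaker audit found. A toy sketch of that reasoning (all flaw IDs here are
hypothetical, invented purely for illustration; no real audit tool is modeled):

```python
# Model each auditor as the set of flaws it can detect in a release.
# Hypothetical flaw IDs only -- this is an illustration of risk (2),
# not a real audit pipeline.
weak_ai_finds = {"flaw-A", "flaw-B"}                # project's release audit
strong_ai_finds = {"flaw-A", "flaw-B", "flaw-C"}    # adversary's better model

# Flaws the adversary can exploit that the release audit never saw:
residual = strong_ai_finds - weak_ai_finds
print(sorted(residual))  # ['flaw-C']
```

The point of the toy is that the defender's exposure is governed by the gap
between the two detectors, not by the absolute strength of the defender's
audit.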






Boston Linux & Unix / webmaster@blu.org