[Discuss] Is open source more secure at the current level of AI?

Randall Rose rrose at pobox.com
Thu Apr 9 13:26:04 EDT 2026


In the current state of the art, AI agents like Claude Mythos are good at finding exploitable bugs in code.  

That affects open-source systems differently than closed-source systems, and arguably it creates more risk for open-source.

I think it is, or soon will be, true that open source's main security vulnerability is the ability of AI to find exploitable bugs.

There are multiple risks:

(1) Open-source software might be released without any AI systems auditing the latest code for exploitable flaws.

(2) Open-source software might be released after being audited only by relatively weak AI, where adversaries have access to more advanced AI that can detect flaws inaccessible to the weak AI.

(3) An adversary might introduce code into an open-source project that has been designed, with AI assistance, to have flaws that most humans as well as less advanced AIs are unable to see.

I suppose we are all biased in the pro-FOSS direction.  But these risks should be faced.  Are open-source projects doing enough against these risks?  Are there open-source projects that are so benighted that they don't even guard against risk (1)?
.


More information about the Discuss mailing list