[Discuss] Is open source more secure at the current level of AI?

Dan Ritter dsr at randomstring.org
Thu Apr 9 15:25:06 EDT 2026


Randall Rose wrote: 
> In the current state of the art, AI agents like Claude Mythos are good at finding exploitable bugs in code.  

Objection: Anthropic says this. Pretty much everything Anthropic
has ever said turns out to be overstated at best.

(Counter-objection: Greg K-H says that LLM-discovered kernel
bugs are now actually worth investigating.)

> That affects open-source systems differently than closed-source systems, and arguably it creates more risk for open-source.

I have had visibility into several companies' nominally
closed-source software and SaaS products, and it is a mistake to
think that their work is significantly insulated from
open-source work.

The XKCD about all modern digital infrastructure resting on a
small Jenga block thanklessly maintained by one random person in
Nebraska (xkcd 2347, "Dependency")? Approximately true for
every large project.

Don't think of proprietary software as being different from open
source. Think of proprietary software as being a layer of icing
on top of a cake made mostly from open source components.

> I suppose we are all biased in the pro-FOSS direction.  But these risks should be faced.  Are open-source projects doing enough against these risks?  Are there open-source projects that are so benighted that they don't even guard against risk (1)?


No. Yes.

-dsr-
