[Discuss] Is open source more secure at the current level of AI?

Randall Rose rrose at pobox.com
Thu Apr 9 17:58:24 EDT 2026


Thanks.  From what I've seen of current AI capabilities -- not simply relying on AI companies' public self-promotion -- it is plausible that some current AI agents can find exploitable bugs in code, and even more plausible that they will soon become fairly good, or even unusually good, at doing so.

The reason why I said "arguably it creates more risk for open source" is:

(1) When a company develops software as closed source, the closed portion of the code presents a smaller attack surface for hostile AI agents to analyze.  We in the FOSS community may not like this fact, but it is true, and it becomes more relevant if AI agents come to be better at finding exploitable vulnerabilities than most human coders.

(2) Even if a company that develops closed-source software checks its own code for AI-detectable vulnerabilities, I am *not* confident that it does the same for all the open-source code it uses.  Does the company check for vulnerabilities in gcc's source code, for instance?  Perhaps few companies, if any, do, and the same may hold for other open-source software.  Does the NSA want each big tech company to use AI to thoroughly scrutinize all the outside code it uses, or would the NSA sometimes prefer that big tech not do a thorough job here -- and do we want to rely on the assumption that the NSA is not having a negative effect?  Some companies might check a *portion* of the open-source code they use; others might check none of it at all.

A few other points:

(3) Even if, contrary to what I suggested, we could be confident of the flattering conclusion that open source carries no additional vulnerabilities relative to closed source, it would remain true that open-source software may now be more vulnerable than we thought because of AI, quite apart from any comparison to closed source.

(4) As I implied, I think it is worth knowing whether all open-source projects are doing enough against the risk of AI-detected vulnerabilities.  I share your hunch that at least some of them are not.  Whether the open-source movement is still energetic, rather than stuck in the mud, will be tested by how well it deals with this.  I think Git forges, for instance, should make it easier to see at a glance whether a project requires AI vulnerability testing of commits before release, and how thorough that testing is.  Distros' repositories should have similar policies.
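To make the idea concrete, the kind of per-commit gate I have in mind could be a small pre-merge script in a project's CI.  This is only a sketch: no standard AI scanner exists for this, so the scanner here is a stub named "ai_scan" standing in for whatever tool a project might adopt, and the pass/fail convention (empty output means no findings) is my assumption.

```shell
#!/bin/sh
# Hypothetical pre-merge gate: refuse to merge a commit range unless
# an AI vulnerability scan has run and reported no findings.

ai_scan() {
    # Stub standing in for a real scanner, which would analyze the
    # diff for the given range and print one finding per line.
    # Empty output means no findings.
    printf '%s' ""
}

range="${1:-origin/main..HEAD}"
findings=$(ai_scan "$range")
if [ -n "$findings" ]; then
    echo "BLOCKED: AI scan reported findings for $range"
    exit 1
fi
echo "PASSED: no AI-scan findings for $range"
exit 0
```

A forge could then surface the presence (and strictness) of such a gate on the project's front page, which is the at-a-glance visibility I mean above.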

I don't consider any of what I've said to be proof that closed source is better, now or in the future, and I'm undecided on that question.  But I'm honest enough to see that open source may now have some new vulnerabilities to grapple with, and they are worth facing.

On Thu, Apr 9, 2026, at 7:25 PM, Dan Ritter wrote:
> Randall Rose wrote: 
>> In the current state of the art, AI agents like Claude Mythos are good at finding exploitable bugs in code.  
>
> Objection: Anthropic says this. Pretty much everything Anthropic
> has ever said turns out to be overstated at best.
>
> (Counter-objection: Greg K-H says that LLM-discovered kernel
> bugs are now actually worth investigating.)
>
>> That affects open-source systems differently than closed-source systems, and arguably it creates more risk for open-source.
>
> I have had visibility into several companies' nominally
> closed-source software and SaaS products, and it is a mistake to
> think that the work that they do is significantly insulated from
> open-source work.
>
> The XKCD about the Internet relying on a small Jenga brick
> developed by one person in Nebraska? Approximately true for
> every large project. 
>
> Don't think of proprietary software as being different from open
> source. Think of proprietary software as being a layer of icing
> on top of a cake made mostly from open source components.
>
>> I suppose we are all biased in the pro-FOSS direction.  But these risks should be faced.  Are open-source projects doing enough against these risks?  Are there open-source projects that are so benighted that they don't even guard against risk (1)?
>
>
> No. Yes.
>
> -dsr-
