[Discuss] Is open source more secure at the current level of AI?

markw at mohawksoft.com markw at mohawksoft.com
Thu Apr 9 16:49:08 EDT 2026


> In the current state of the art, AI agents like Claude Mythos are good at
> finding exploitable bugs in code.

That claim gets tossed around a lot, and I remain skeptical. That said, I
have no issue with a statistical pattern-matching and prediction engine
matching patterns of poor or vulnerable code. What is important to point
out is that it does not intuit the bug; it merely finds a bug that
resembles previously existing bugs. I would expect nothing less from a
system that can scan and train on all the commit data from GitHub and
other sites.
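To make the "pattern matching, not intuition" point concrete, here is a
deliberately naive sketch in Python: a scanner that flags code resembling
previously known vulnerable idioms, with zero understanding of context.
The pattern list is illustrative, not exhaustive:

```python
import re

# Known-bad idioms from past bugs: the scanner "finds" vulnerabilities
# only because they match patterns that have been vulnerable before.
KNOWN_BAD = {
    r"\bgets\s*\(": "gets() has no bounds check (classic overflow)",
    r"\bstrcpy\s*\(": "strcpy() copies without a length limit",
    r"\bsprintf\s*\(": "sprintf() can overflow its destination buffer",
}

def scan(source: str):
    """Return (line_number, reason) for every line matching a known-bad pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, why in KNOWN_BAD.items():
            if re.search(pattern, line):
                findings.append((lineno, why))
    return findings

sample = "char buf[8];\ngets(buf);\nstrcpy(buf, user_input);\n"
for lineno, why in scan(sample):
    print(f"line {lineno}: {why}")
```

A genuinely novel flaw that matches no prior pattern sails straight past
this kind of tool, which is the limitation being argued here.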

>
> That affects open-source systems differently than closed-source systems,
> and arguably it creates more risk for open-source.

I do not understand this statement. How? Why is open source riskier than
closed source? You assert it, but you do not justify it.

>
> I think it is, or soon will be, true that open source's main security
> vulnerability is the ability of AI to find exploitable bugs.

That is simply a non sequitur; it does not follow from anything argued thus far.

>
> There are multiple risks:
>
> (1) Open-source software might be released without any AI systems auditing
> the latest code for exploitable flaws.

OK, so? Equally: "Closed-source software might be released without any AI
systems auditing the latest code for exploitable flaws."

Do you think that open source devs AREN'T using modern tools? Do you not
understand that IBM, Google, Meta, et al. all use open source and are
analyzing it all the time? Also, open source devs are already doing this
themselves.

>
> (2) Open-source software might be released after being audited only by
> relatively weak AI, where adversaries have access to more advanced AI that
> can detect flaws inaccessible to the weak AI.

This is a baseless statement. What is the "weak AI"? What is the
alternative? Are you suggesting that the likes of IBM, Google, Meta,
Akamai, Oracle, et al. are NOT running tools on the open source they use?

>
> (3) An adversary might introduce code into an open-source project that has
> been designed, with AI assistance, to have flaws that most humans as well
> as less advanced AIs are unable to see.

This is a VERY important point, and it's not isolated to open source. Do
you remember the xz-utils (liblzma) backdoor that almost made it into
mainstream sshd builds? It took years to set up, was most likely the work
of state actors, and was only found by accident. It is one of the
scariest supply chain attacks I know of.
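Nothing remotely as sophisticated as the xz backdoor, but a toy Python
example shows how a flaw can look perfectly reasonable in review. The
`verify_insecure` check below reads like a normal tag comparison, yet
`startswith()` accepts a truncated tag, including the empty string (the
secret key and function names here are made up for illustration):

```python
import hashlib
import hmac

SECRET = b"server-side-key"  # hypothetical key for the illustration

def make_tag(msg: bytes) -> str:
    """Compute an HMAC-SHA256 tag for a message."""
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_insecure(msg: bytes, tag: str) -> bool:
    # Looks plausible in review, but a truncated tag still "matches";
    # an attacker can pass the empty string and authenticate anything.
    return make_tag(msg).startswith(tag)

def verify_secure(msg: bytes, tag: str) -> bool:
    # Constant-time comparison of the full tag.
    return hmac.compare_digest(make_tag(msg), tag)

print(verify_insecure(b"hello", ""))  # accepts an empty tag
print(verify_secure(b"hello", ""))    # rejects it
```

A reviewer skimming a large diff can easily miss this; a flaw designed
with AI assistance to evade review would be far subtler still.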

I currently work at Google. They are actively trying to walk the fine
line between trusting devs and preventing bad actors from doing damage.

Supply chain exploits and internal "bad actors" are a real threat to
companies and open source.

The open source community, and the companies that use it, are acutely
aware of these issues, and there is no fundamental advantage to closed
source. In fact, *ANY* process being applied to "closed source" at a
company is also applied to the open source in use at that company. The
advantage open source has is that multiple companies will run different
tests on it; this WON'T happen with a company's proprietary software.

>
> I suppose we are all biased in the pro-FOSS direction.  But these risks
> should be faced.  Are open-source projects doing enough against these
> risks?  Are there open-source projects that are so benighted that they
> don't even guard against risk (1)?

This is the old, tired, worn-out FUD argument that has been around since
Microsoft formulated it.

The advantage of open source is undeniable: a plurality of entities can
use, inspect, and test it. This is why the industry runs on Linux. A
closed-source company can't be trusted.


> .
> _______________________________________________
> Discuss mailing list
> Discuss at lists.blu.org
> https://lists.blu.org/mailman/listinfo/discuss
>
