[Discuss] Is open source more secure at the current level of AI?
markw at mohawksoft.com
Thu Apr 9 18:37:47 EDT 2026
> Thanks. From what I've seen of current AI capabilities -- not simply
> relying on public self-promotion by AI companies -- it is believable that
> there are current AI agents that can find exploitable bugs in code, and it
> is even more believable that they soon will be either fairly good or
> unusually good at doing so.
>
> The reason why I said "arguably it creates more risk for open source" is:
>
> (1) When a company develops software as closed source, there is less of an
> attack surface for hostile AI agents to look at the part of the code that
> is closed-source. We in the FOSS community may not like this fact, but it
> is true, and it becomes more relevant if AI agents come to be better at
> finding exploitable vulnerabilities than most human coders.
Same old Microsoft FUD: secret is better. It isn't.
>
> (2) If a company that develops closed-source software checks its
> closed-source code for AI-detectable vulnerabilities, I am *not* confident
> that it does this for all open-source code it makes use of. Does the
> company check for vulnerabilities in gcc's source code, for instance?
> Perhaps few companies, if any, would, and that might extend to other
> open-source software. Does the NSA want each big tech company to use AI
> to thoroughly scrutinize all the outside code it uses, or would the NSA
> sometimes prefer if big tech does not do a thorough job on this -- and do
> we want to rely on assuming the NSA is not having a negative effect here?
> Some companies might check a *portion* of the open-source code they use,
> some might check none of the open-source code at all.
Trust me, I've been in the industry for over four decades. Any company that
puts in the effort to scan its own source will also scan the open-source
code it uses. This has been common practice for well over a decade; I have
personally managed CVE detection and mitigation at two companies.
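To make that concrete, here is a minimal sketch of the kind of check
involved, assuming Python and the public OSV.dev vulnerability database
(the package name and version below are just placeholders):

    # Minimal sketch: ask the public OSV.dev database whether a given
    # open-source package version has any known advisories.
    import json
    import urllib.request

    def known_vulns(name, version, ecosystem="PyPI"):
        """Return the list of OSV advisories for one package version."""
        query = json.dumps({
            "package": {"name": name, "ecosystem": ecosystem},
            "version": version,
        }).encode()
        req = urllib.request.Request(
            "https://api.osv.dev/v1/query",
            data=query,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp).get("vulns", [])

    if __name__ == "__main__":
        # Placeholder package/version -- substitute whatever you depend on.
        for advisory in known_vulns("requests", "2.25.0"):
            print(advisory["id"], advisory.get("summary", ""))

Commercial scanners do far more than this (transitive dependencies,
source-level analysis), but the point stands: the open-source code a
company ships gets run through exactly the same tooling as everything else.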
As for the NSA, they have the source code for all the software they use,
and they test it for vulnerabilities. They do a great deal of security work
as a matter of national security.
>
> A few other points:
>
> (3) Even if, contrary to what I suggested, we could be confident of the
> flattering conclusion that open source has no additional vulnerabilities
> relative to closed source, it would remain true that, even apart from
> comparison to closed source, open-source software may well now be more
> vulnerable than we thought due to AI.
You keep claiming this but haven't put forward a reasonable argument for it.
>
> (4) As I implied, I think it is worth knowing whether all open-source
> projects are doing enough against the risk of AI-detected vulnerabilities.
> I share your hunch that at least some of them are not. Whether they
> deal with this well is a test of how energetic, as opposed to stuck in
> the mud, the open-source movement still is. I think Git, for instance,
> should make it easier to see at a glance whether a project is
> effective in requiring AI testing of vulnerabilities in commits before
> release and how thorough that AI testing is. Distros' repositories should
> have similar policies.
"I think it is worth knowing whether all open-source projects are doing
enough against the risk of AI-detected vulnerabilities."
Why are you singling out open source? Closed source is exactly the same in
this respect.
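And gating releases on a scan is trivial to wire up in any CI system,
closed or open. A minimal sketch, assuming Python, a pinned
requirements.txt, and the same OSV.dev query as above, inlined so the
script stands alone (the file name and exit-code convention are
illustrative assumptions, not any project's actual policy):

    # Minimal sketch of a CI release gate: fail the build if any pinned
    # dependency in requirements.txt has a known OSV advisory.
    import json
    import sys
    import urllib.request

    def known_vulns(name, version, ecosystem="PyPI"):
        query = json.dumps({"package": {"name": name, "ecosystem": ecosystem},
                            "version": version}).encode()
        req = urllib.request.Request(
            "https://api.osv.dev/v1/query", data=query,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp).get("vulns", [])

    def main(path="requirements.txt"):
        vulnerable = []
        for line in open(path):
            line = line.split("#")[0].strip()
            if "==" not in line:
                continue  # this sketch only checks exact pins
            name, version = (part.strip() for part in line.split("==", 1))
            if known_vulns(name, version):
                vulnerable.append(name + "==" + version)
        if vulnerable:
            print("Vulnerable dependencies:", ", ".join(vulnerable))
            sys.exit(1)  # non-zero exit fails the CI job

    if __name__ == "__main__":
        main()

A badge saying "this gate ran" would apply just as well to a closed-source
shop's internal CI, which is exactly the point.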
>
> I don't consider any of what I've said to be a proof that closed source is
> better now or in the future, and I'm undecided on that issue. But I'm
> honest enough to see that open source may now have some new
> vulnerabilities to grapple with, which are worth facing.
This is just a regurgitation of the same old FUD.