[Discuss] Is open source more secure at the current level of AI?
Rich Pieri
richard.pieri at gmail.com
Sat Apr 11 21:48:38 EDT 2026
On Sat, 11 Apr 2026 16:44:21 -0700
Kent Borg <kentborg at borg.org> wrote:
> I did not say otherwise.
> I did say that open source means people can easily see the source,
> and people with a token budget can have AI tools look at it, too. I
> expect they will find stuff.
You just did it again: "They can see the code and they'll find bugs
they can exploit!" It's still FUD and it's still the same logical
fallacy.
It doesn't matter what a token hacker finds because they're *too
late*. Anything they feed into their neural network model of choice
will already have been fed into *many* models by security experts at
Google, Red Hat, JFrog, Black Duck, and so on. Anything a token hacker
could find will already have been found.
We're seeing the effects of this in the wild. Higher tier attackers
aren't looking for vulnerabilities in open source projects so much.
They're *injecting* vulnerabilities and back doors and malware into
packages hosted by public repositories like PyPI and npm, or they're
attacking projects directly like XZ Utils, Notepad++ and CPUID.
--
\m/ (--) \m/