Boston Linux & UNIX was originally founded in 1994 as part of The Boston Computer Society. We meet on the third Wednesday of each month, online, via Jitsi Meet.

BLU Discuss list archive



[Discuss] Is open source more secure at the current level of AI?



On Thursday, April 16th, 2026 at 11:13 AM, Dan Ritter <dsr at 
randomstring.org> wrote:

> The average open source project makes zero bucks a month.
> 
> The average highly popular, used-everywhere open source project
> makes about $8 a month*.
> 
> The tail is long.
> 
> 
> -dsr-
> 
> * I made this up, but that should be acceptable to everyone who uses
> LLMs without rigorously checking every result.

Luckily, I think those days are long gone. LOL

What I was looking at doing was taking Strix and running it on some 
projects. It is set up to integrate with GitHub Actions and supports 
multiple models, though the docs use OpenAI in their examples, so that is 
likely the preferred provider.

https://github.com/usestrix/strix

You basically just plug in your API endpoint and API key and then set up the 
GH Actions. I was just going to fork the repos I wanted to test into private 
repos under my own account and then trigger some GHA runs to see what comes 
up.

Basically, it is all priced by tokens (a token is about 75% of a word for 
English text, and a little less clear for, say, C code). So, I just ran `wc` a 
few different ways to try to estimate. For the stuff I looked at, which was 
mostly crypto-related code and robotics code, it seems to work out to around 
$350 to $500 USD, mostly depending on the size of the repo. But, I haven't 
actually run an AI audit on anything yet, so I'm not exactly sure how the 
tokens are counted for that kind of code analysis. The models almost all 
cache prompts. I think that means they will cache the code I want to audit 
as they pass it to the different skills (memory, SQL injection, business 
logic, etc.). But, I am also not sure about that. Most of what I have done 
so far with AI and code has been either with the VSCode plugin or just 
pasting code into Brave's search engine or Google and prompting it to 
fix/improve/explain it. That has actually worked impressively well for me.

I kind of put off the idea for later because Strix doesn't seem that mature 
yet. For example, there are a lot of open PRs to add additional basic skills 
to its testing. And, I think the models I would like to use should come down 
a lot in price as newer models are released (which seems to happen about 
every 2-3 months right now for most of the AI companies).

If someone tried to crowdfund these types of AI audits on some more prominent 
projects, I would probably contribute to that, depending on who was doing the 
crowdfunding/design/audit and how trustworthy/capable they were.

I was also thinking of the OpenInfra Foundation and the Linux Foundation, 
which have sponsored a lot of maintenance/audits on core infrastructure 
software. And, of course, there is Google Summer of Code (which sometimes 
includes security audits and similar work). So, I could see someone like 
that funding these types of AI audits.

What got me thinking about all this was following the guy on LinkedIn who 
was responsible for using an AI agent-based system to find the very old BSD 
bug. I was curious how exactly he did it.

I found a page that gives a pretty good comparison of the costs of all the 
different models. It was kind of helpful, but it may be a bit dated now:

https://www.cloudzero.com/blog/openai-pricing/

I was also looking into using some of the NVIDIA tools to train my own model 
and run it on my home workstation (I've got a nice GPU in it) or on an AWS 
instance with a nice NVIDIA GPU. But, that is currently beyond my 
understanding/abilities.

Running the model/agents locally would drastically reduce the cost, of 
course. But, I'm not sure how large the context/model would need to be. 
Maybe it would be tractable if it were limited to one language and ignored 
more complex things like business logic/authentication flows.

Something as large as say Firefox or Thunderbird would cost more of course. 
But, projects of that size have a lot of money/resources. Someone told me 
once that the contracts to include certain CA certificates in the Firefox 
build are a massive revenue generator (seven figures per year).

Some projects and specialized companies used to offer prizes for reporting 
certain types of vulnerabilities, some as high as $100K IIRC. But, I don't 
think those are around much anymore, as they were difficult to administer 
and didn't produce much. There are also some specialized companies that will 
buy exploits, especially for cell phones. But, those are often affiliated 
with foreign governments and can get you into legal trouble.

I know of a couple of start-ups that are basically trying to do AI code 
security audits as a service. But, they don't really seem to be going 
anywhere. I think it's because you're 100% dependent on the big AI companies 
if you try to do something like that. And, I think those companies are 
losing money on every single token. So, it's a very unstable ecosystem... 
especially with natgas/electricity prices all over the place lately.


       - VAB






Boston Linux & Unix / webmaster@blu.org