[Discuss] Anyone Played with Programming Local LLMs, Such as Llama?

Kent Borg kentborg at borg.org
Sat Nov 30 17:09:42 EST 2024


LLMs such as ChatGPT are startlingly good at language, and good at 
giving the impression of human intelligence, because until now only 
we humans have been capable of such language. So "Artificial General 
Intelligence" must be just around the corner, right‽‽

Nonsense. Just because these models are really good at recognizing 
patterns, and just because they can extrapolate and emit output using 
those patterns, doesn't mean they are smart.


Anyway, they ARE really good at pattern recognition, and they CAN 
generate output based on those patterns. I think I should play with 
the technology from a programming angle to get a feel for what they 
can do and what they can't. I want to understand how their very broad 
training can be directed in very specific ways, such as how I can get 
them to recognize patterns of my choice. I think I would like to play 
with images, but I'm not stubborn in that regard. Certainly text is 
useful, and so is audio…

Googling about (well, duckduckgoing about) I see that Meta's Llama 
isn't the only option that is free to play with*. There are other 
free-to-use LLMs: Granite, Mistral, and Gemma are the ones I have 
found so far.

* Pedantic observation: As far as I can tell *none* of the LLMs are 
open source. Sure, the compilable code might be open source, but 
that's not where the intellectual property lies. The billions of 
model parameters are the secret sauce, and they are the ultimate 
opaque blob.
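
To give a concrete idea of where I'd start: it looks like a model 
served locally by Ollama could be driven with just a few lines of 
Python. This is only a sketch (it assumes Ollama is running on its 
default port, 11434, and that the model has already been pulled); I 
haven't actually run it yet:

    import requests

    # Ask a local Ollama server for a completion from one of the
    # freely downloadable models (e.g. "mistral" or "gemma").
    reply = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",
            "prompt": "In one sentence, what is a large language model?",
            "stream": False,
        },
        timeout=120,
    )
    reply.raise_for_status()
    print(reply.json()["response"])

Presumably swapping in a different model name is all it takes to try 
Granite or Gemma instead, but that's exactly the sort of thing I want 
to find out.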


Question: Has anyone here played with writing code to drive LLMs? Any 
pointers for getting out of the mud easily? (Any warnings?)


Thanks,

-kb, the Kent who expects he will be using his Framework 13 laptop (a 
6-core, 12-thread AMD Ryzen 7640U CPU with 64GB of RAM), but maybe 
he'll plug in a Hailo M.2 AI module, too.


