BLU Discuss list archive
[Discuss] LLMs and AI
- Subject: [Discuss] LLMs and AI
- From: kentborg at borg.org (Kent Borg)
- Date: Tue, 3 Feb 2026 05:21:26 -0800
- In-reply-to: <20260202190429.2ba101ea.Richard.Pieri@gmail.com>
- References: <fda1d2596fa1cd8b230723e7773f61a0.squirrel@mail.mohawksoft.com> <12f1715a-0f53-4309-ad45-cbb061b49752@borg.org> <9143b8abba16d7b795f00d2893ff87cf.squirrel@mail.mohawksoft.com> <73699acd-9f05-426c-9329-e2ddcea9ab2c@borg.org> <20260202181653.428a09b4@mydesk.domain.cxm> <20260202190429.2ba101ea.Richard.Pieri@gmail.com>
On 2/2/26 4:04 PM, Rich Pieri wrote:
> On Mon, 2 Feb 2026 18:16:53 -0500
> Steve Litt <slitt at troubleshooters.com> wrote:
>> I think this is a marketing ploy, not a result of Large Language
>> Models. By telling the human how wonderful he or she is, they subtly
>> influence the human to use them more and more. I hear plenty of
> This. As I wrote last week, these chatbots are designed to drive what
> the operators call engagement and I call addictive behavior. It's a
> deliberate design decision, not an intrinsic "feature" of LLM chatbot
> tech.

The main LLM training is enormous and is done on everything they can possibly find (the entire internet, every book and newspaper they can get hold of, etc.). This creates the generative part of an LLM, and it is what gives LLMs their "memorized the manual" knowledge and their generic style of writing.

There is a smaller, secondary effort of reinforcement training: it teaches what kinds of output are desired (high score) or not desired (low score). Unlike the original training, reinforcement training requires some external authority to score LLM output and tell the LLM whether humans will like it or not. (I have heard that a separate model, trained on samples that real humans have scored, is then used to teach the LLM in this reinforcement stage. At least that is one possible approach. The details here are very proprietary.)

It is this reinforcement training that determines how flattering and how engaging the final LLM will be. So yes, those aspects are not a feature of the underlying generative model.

-kb
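[Editor's note: the two-stage picture above can be sketched as a toy in a few lines of Python. This is not any lab's actual pipeline; the replies, the reward rule, and the update step are all made up purely to illustrate the mechanism: a separate "reward model" standing in for human preferences scores outputs, and outputs it likes become more probable.]

```python
import random

random.seed(0)

# Stage 1 stand-in: a "generative model" reduced to two canned replies,
# each with a sampling weight. Initially it has no preference.
replies = ["Here is the answer.", "What a great question! Here is the answer."]
weights = {r: 1.0 for r in replies}

def reward_model(reply: str) -> float:
    # Stand-in for the separate model trained on human-scored samples:
    # here it simply scores flattery higher, mimicking engagement tuning.
    return 1.0 if "great question" in reply else 0.0

def sample() -> str:
    # Draw a reply in proportion to its current weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for reply, w in weights.items():
        r -= w
        if r <= 0:
            return reply
    return replies[-1]

# Stage 2 stand-in: a reinforcement loop. Each sampled reply is scored
# by the reward model, and high-scoring replies get their weight bumped.
for _ in range(200):
    reply = sample()
    weights[reply] += 0.1 * reward_model(reply)

# After training, the flattering reply dominates, even though the
# "generative" stage never preferred it.
assert weights[replies[1]] > weights[replies[0]]
```

The point of the toy matches the email: nothing in the base sampling step prefers flattery; the preference is injected entirely by whatever the scoring authority rewards.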
