BLU Discuss list archive
[Discuss] LLMs and AI
- Subject: [Discuss] LLMs and AI
- From: richard.pieri at gmail.com (Rich Pieri)
- Date: Mon, 2 Feb 2026 19:04:29 -0500
- In-reply-to: <20260202181653.428a09b4@mydesk.domain.cxm>
- References: <fda1d2596fa1cd8b230723e7773f61a0.squirrel@mail.mohawksoft.com> <12f1715a-0f53-4309-ad45-cbb061b49752@borg.org> <9143b8abba16d7b795f00d2893ff87cf.squirrel@mail.mohawksoft.com> <73699acd-9f05-426c-9329-e2ddcea9ab2c@borg.org> <20260202181653.428a09b4@mydesk.domain.cxm>
On Mon, 2 Feb 2026 18:16:53 -0500
Steve Litt <slitt at troubleshooters.com> wrote:

> I think this is a marketing ploy, not a result of Large Language
> Models. By telling the human how wonderful he or she is, they subtly
> influence the human to use them more and more. I hear plenty of

This. As I wrote last week, these chatbots are designed to drive what the
operators call engagement and I call addictive behavior. It's a deliberate
design decision, not an intrinsic "feature" of LLM chatbot tech.

--
\m/ (--) \m/
- Follow-Ups:
  - [Discuss] LLMs and AI
    - From: kentborg at borg.org (Kent Borg)
  - [Discuss] LLMs and AI
- References:
  - [Discuss] LLMs and AI
    - From: slitt at troubleshooters.com (Steve Litt)
  - [Discuss] LLMs and AI
