boccaderlupo: Fra' Lupo (Default)
boccaderlupo ([personal profile] boccaderlupo) wrote2025-05-30 06:58 am
Entry tags:

LLMs are demonic

Conjured intelligences of questionable human provenance and ever more inscrutable intent, to which more and more people are deferring questions that would typically be handled by a person's intellect, will, and imagination...not good.
jprussell: (Default)

[personal profile] jprussell 2025-05-31 01:03 am (UTC)(link)
My own take is that they're not necessarily demonic, but they're very likely prone enough to it to be worth avoiding as a rule. A possible analogy: unboiled, unflowing water doesn't necessarily have gut-ruining microbes in it, but you probably shouldn't drink it anyway. Not a perfect analogy, since sometimes you might decide you need the water enough to risk it, and it's hard to think of a situation where you "need" an answer to a question badly enough to risk asking an LLM (rather than thinking about it, guessing, or asking your most knowledgeable friend).
jprussell: (Default)

[personal profile] jprussell 2025-06-01 03:57 am (UTC)(link)
Fully agreed. Maybe a better analogy here would be something like "drinking water from a pond filled by a creek, with a clear outflow, without boiling it" - it's safe enough to encourage repeat behavior, especially if, say, the pond is convenient, but if you make a habit of it, you'll get sick eventually. Or maybe I'm too wedded to my metaphor. My main point is that there's something short term that leads to folks making the call fairly regularly, that is likely still unwise, but is not immediately or obviously hurtful enough to get them to turn away, even if they really should.

For what it's worth, my day job is teaching folks how to write and speak to function in the PMC (I teach "Business Communication") and the pressure to just teach them how to prompt an AI is, shall we say, rather strong, but it keeps striking me as a short-sighted way of promoting seeming success without true progress. It's like an argument to cheat on a test that everyone has to do well on to determine their course in life - sure, immediately, doing well on the test is the goal, and cheating is the obvious way to do that, but to whatever extent actually being good at what the test is meant to ascertain will better prepare you for the life passing the test will give you, you're actually hurting yourself in the long run, even if in the short run it seems obvious that cheating is the way to go.

Outsourcing your own thinking and ability to express yourself seems like cheating at the very game the PMC is expected to play, and play well. You might get into the game by cheating, but at some point, you'll either falter or find yourself so dependent on your means of cheating that you have to make some pretty gnarly compromises about your actual self and what you can do, which seems like a bad tradeoff, all-told.

Cheers,
Jeff
sdi: Oil painting of the Heliconian Muse whispering inspiration to Hesiod. (Default)

[personal profile] sdi 2025-06-01 04:50 pm (UTC)(link)
I'm not an expert or anything, but for whatever it's worth, I would be surprised if the LLM game continues very long; everyone is losing almost ridiculous amounts of money at it, most of the big players (Microsoft, Amazon, etc.) are cutting their losses and pulling out of investments, and every year LLMs pollute the internet (that is, their training data) more and more, making it even more ruinously expensive just to break even.

These things are dangerous, but they're dangerous more in the sense of a Red Queen's race than a Manhattan Project, I think.
jprussell: (Default)

[personal profile] jprussell 2025-06-02 05:22 am (UTC)(link)
This is also a fair point, and part of my skepticism. I've absorbed from JMG a skepticism of anything that energy-intensive, especially if the economic value is so hand-wavey. This leaves me somewhere between "LLMs can't ever say anything useful" and "to say anything useful, LLMs will have to burn more energy than they're worth." I suppose it's possible that neither of these things will be true, at least in the short term, and perhaps in the long term (maybe fusion really is right around the corner!), but that's not the way I would bet.

This makes me simultaneously more and less pessimistic than the average AI booster - I think "take-off" is less likely than most of them believe, but I worry that the damage in the meantime might be greater than they anticipate.

I guess we'll see.

Cheers,
Jeff
sdi: Oil painting of the Heliconian Muse whispering inspiration to Hesiod. (Default)

[personal profile] sdi 2025-06-02 02:45 pm (UTC)(link)
at what point would it just be more economical to simply pay a person to write said copy?

I think the answer is "at the very beginning" (e.g. don't involve LLMs at all). My reasoning is threefold:

1. OpenAI loses money on every request, even for their paying and SaaS customers.
2. Despite [1], LLMs have failed to take off in even a modest way: the LLM "industry" (which is basically just OpenAI, as it makes up almost all AI revenue) remains small in economic terms.
3. We have a massive labor crisis in the country.

To put it another way, a couple people at the apex of the LLM "industry" seem to be making a lot of money (e.g. Altman, Musk, Pichai, Nadella, etc.), at the expense of their businesses, the American tech sector, the dollar, and Western society as a whole.

The only way I can see that any of this makes sense is if (generously) LLMs were meant from the beginning to be a controlled demolition of the dollar or (cynically) LLMs were meant from the beginning to be a con to part investors from their money. Either way, it is well-described by the "Ra"/Teiresias theory where a few people are developing their negative polarization by preventing the execution of the free will of many.

Sorry to harp on this, but I want to emphasize its pettiness: it's important to not worry about it. Don't look down! Focus on divinity!
Edited 2025-06-02 14:48 (UTC)
jprussell: (Default)

[personal profile] jprussell 2025-06-02 05:16 am (UTC)(link)
I've been teaching this subject since 2018, and before that, for about 5 years I was a management consultant, so pretty squarely in the crosshairs of "PMC".

That timing has, though, straddled the whole LLM thing. When I started - not even a consideration. Today? My department head wonders aloud whether we can get away with not teaching to the LLM default of the current PMC world.

I share your deep concerns with the maybe-harmful places trusting such tools goes, even leaving out the more egregious cases of folks driving themselves nuts by treating their local instance of an LLM as a girlfriend or spiritual helper (or both, bleh). At its mildest, as a teacher of writing and speaking competence, I think "well, you don't bring a forklift to the gym," by which I mean: okay, maybe an LLM might make some suggestions on how to tighten up your email or memo or academic paper, and it might even be good advice, but if you've never figured out how to judge one piece of writing as better than another, how will you judge an LLM's output as "better" or "worse"? At the more extreme end, you give over not only the judgment of is this good, but also what even is good? It's not like quality of writing is a wholly objective measure like gravity or the passage of time - there's a value-judgment there, and if you start giving that over to machines, it seems like trouble to me.

Maybe there's a helpful midpoint here, but I find myself more and more drawn to the "Butlerian Jihad" side of things, even as the professional pressures around me move in the exact opposite direction.

Cheers,
Jeff
jprussell: (Default)

[personal profile] jprussell 2025-06-03 04:21 am (UTC)(link)
It's something I've given some thought to, and honestly, it's mostly in the nature of making me doubt the way I make my living as legitimate and worth pursuing in the long run. Persuasion techniques like Cialdini's "shortcuts" strike me as little different from crass sorcery. Being clear on the goal of your communication and then thinking about how to get your audience on board with it sounds a lot like focusing your will and acting to achieve it, but if the goal is to sell a project or get a company to adopt a consulting proposal, is that worth it?

So, in practice, the way it has shaken out for me is that my esoteric work has led me to inject more skepticism/protection into my teaching - rather than saying "here's how to use Cialdini's shortcuts to get what you want" I more emphasize "here's how others use Cialdini's techniques to try to trick you, do you really want to let them?" I'm pretty savage on advertising in general.

All that said, I do think there's such a thing as legitimate influence/persuasion/advertising/marketing, but I think its borders are much hazier than many proponents insist, and that someone who wants to engage in such activities and remain ethical has to hold himself to a much higher standard than most business majors or MBAs would think. To put it more concretely, if I have a product I think is genuinely useful, it makes sense that I would use an understanding of what folks pay attention to and remember to get them engaged with it, but the moment you start doing so, you have to ask yourself if your product is really as good as you think, and whether making it easier to engage with is actually intruding on anyone's free will or not.

Lately, it's been bothering me quite a bit that I am pretty much literally a sophist - I teach rhetoric and justify it as a "tool that can be turned to different ends" and (mostly) disclaim teaching what ends it's right to turn such tools to. I solace myself a bit by including a discussion of ethics, where I don't say "this is right and this is wrong" but instead "here's a handful of ways of sorting out how to think of right and wrong, my preference is this one (virtue ethics), but it's up to you to pick how you decide things in your own life." Maybe weak sauce, but when I discovered there was no required ethics content in the undergraduate business major at my school, I figured I could at least devote a lecture or two to it.

Anyhow, sorry for a long response to what may have been a mostly throwaway line. The short version is that I've noticed much of the overlap of teaching "effective communication" and "magic," and I've been left with the feeling that the default goals, ethics, and so forth of the former have a lot to learn from the latter.

Cheers,
Jeff
jprussell: (Default)

[personal profile] jprussell 2025-06-03 05:13 pm (UTC)(link)
Thank you, and that's the hope, at least!
jprussell: (Default)

[personal profile] jprussell 2025-06-03 05:22 pm (UTC)(link)
Hmm, I hadn't considered that, maybe I will!

The very short version is that Cialdini (a communication researcher) both reviewed the communication research and did extensive interviews with people whose job it is to convince others to do what they want (salesmen, police interrogators, etc.) and was able to derive six principles that affect whether someone is more likely to go along with what you're trying to convince them of - Liking (including similarity), Reciprocity, Commitment and Consistency, Social Proof, Authority, and Scarcity. These principles can be turned into "shortcuts" by sales guys and the like - you walk into a car dealership and the guy starts asking you personal questions until he hits on something you have in common, and then starts talking about that, and now you like him more because you're similar, which better disposes you to buying a car from him (and makes you less likely to bargain hard on the price, since you don't want to upset your new friend).

All of the principles are normal and natural (of course you're more likely to do something that someone you like asks you to do!), but where it gets gray or worse is when the person trying to influence contrives to use the principles in a calculated way that maximizes their benefit with as little cost to them as possible. Like offering cheap swag to instill a feeling of reciprocity, or leading the conversation to get you to agree with something he then paints as being consistent with what he really wants ("wouldn't you agree that it's bad that some children starve? Ah, so then you're willing to donate to my charity for starving children, of course.")

Anyhow, maybe I'll expand on it and try to make some connections to magic, as you suggested.

Cheers,
Jeff
jprussell: (Default)

[personal profile] jprussell 2025-06-04 03:48 am (UTC)(link)
Exactly. When "descriptive," there's almost nothing to find fault with - folks who are actually more likable, authoritative, drawing on your past genuine commitments, or whatever, are, of course, more compelling.

The trouble is when folks start trying to be likable/authoritative/creators of consistency/scarcity/whatever and making decisions based on that. Then the ethics quickly get very fuzzy, and sometimes outright scary. A more serious example of "consistency" used for influence: the North Korean/Chinese communists, when they captured American troops, would say "hey, make this written/recorded statement about problems with America." They'd start with small, easy-to-agree-to stuff, like "nobody's perfect, just talk about something that could be better in America." And, of course, if you refused, you didn't eat and/or got beaten. But then, once you'd talked about a problem America has, you'd be asked to talk about why America was problematic - "you already admitted one problem, why not others?" And, of course, oh yeah, if you don't escalate you or your buddies starve and/or get beaten, and so on.

So, you see this very powerful (but gross) mix of compulsion and playing on normal human psychology (why would you not want to be seen as consistent with what you've publicly asserted in the past?). I think most businesses don't go all that far, but as I've come to care about the ethics of such things more strongly, I've found "the line" is much harder to identify than we might wish.

Cheers,
Jeff
jprussell: (Default)

[personal profile] jprussell 2025-06-05 10:01 pm (UTC)(link)
I have not yet, though he's on the (very long) list, as is Eros and Magic in the Renaissance, which I understand makes the argument that that line is a direct one.
sdi: Oil painting of the Heliconian Muse whispering inspiration to Hesiod. (Default)

[personal profile] sdi 2025-05-31 02:23 pm (UTC)(link)
I'm also not sure I'd go so far as to say "demonic;" the reason being that LLMs are more stupid than humans, while even wicked demons (being in the realm of water rather than earth) are smarter than humans. I would say that LLMs are subhuman, thus of a lower level than us, and thus following them naturally leads us away from divinity, though.

I don't know if this is helpful to you, but I've actually been thinking about LLMs in terms of the "Ra"-inspired interpretation of the Teiresias myth. If the goal of the human life is to make a willed commitment to love (either love of self or love of others), then LLMs are simply a natural extension of the higher end of the "love of self" path; it's a subtle form of not just enslavement but self-enslavement, which adds to the power of those psychopathic elites who push for them (and thus furthering their path to the divine, even if in a roundabout way). Of course this seems horrific to those of us, like you and I, who are growing to the point of firm commitment to the "love of others" path... since what can we do in the face of it? We can try to educate those who are being bamboozled, but "one can lead a horse to water, but not make them drink," and the really insidious part of it is, that having allowed their thinking capacity to atrophy, education is generally ineffective, here. The horror is amplified by this general collapse of educability being pushed not only by LLMs but by all facets of society.

I have been thinking a lot about Laozi lately; that the only way to help others is to align oneself utterly with the Tao, and in that way they will naturally be helped without our having seemed to do anything. Thus I endeavor to open myself to God with renewed vigor, and trust that Providence will work all the rest out in Its time.
sdi: Oil painting of the Heliconian Muse whispering inspiration to Hesiod. (Default)

[personal profile] sdi 2025-05-31 05:37 pm (UTC)(link)
I'm sorry that you have to interact with these things daily.

To your question, yes, I think subhuman things dehumanize us to the extent that we feed them (cf. 1 Tim. 6:10–11). But I don't think LLMs are special in that regard; it is true whenever we cease to do our best. (Certainly, offloading our capacity to think to a machine is a particularly egregious form of this, but so is any other kind of laziness, intellectual or otherwise!) Euripides has a good line, here:

[...] ἢν δέ τις πρόθυμος ᾖ,
σθένειν τὸ θεῖον μᾶλλον εἰκότως ἔχει.
[...] Remember, when one is zealous,
the gods likewise have more strength.
(Orestes speaking. Euripides, Iphigenia in Tauris 910–911.)

Or, a bit more loosely, "the more one strives, the more the gods strive for them." The word I translated "zealous" is πρόθυμος pro-thumos, "forward in spirit" or "engaged," the exact thing I mean when I quote my angel saying, "do your best."
Edited (revising my Greek) 2025-05-31 17:42 (UTC)