boccaderlupo wrote, 2025-05-30 06:58 am
Entry tags: LLMs are demonic
Conjured intelligences of questionable human provenance and ever more inscrutable intent, to which more and more people are deferring questions that would typically be handled by a person's intellect, will, and imagination...not good.
no subject
But I'm seeing more and more people delegate critical thinking, bit by bit, to these things. The "feel" of malevolence from that alone—not necessarily the intent of the LLMs, but rather the sense that people are offloading their intellect, or opening up their intellects to some outside force—is deeply unsettling.
no subject
For what it's worth, my day job is teaching folks how to write and speak to function in the PMC (I teach "Business Communication"), and the pressure to just teach them how to prompt an AI is, shall we say, rather strong, but it keeps striking me as a short-sighted way of promoting seeming success without true progress. It's like an argument for cheating on a test that everyone has to do well on to determine their course in life: sure, in the immediate term, doing well on the test is the goal, and cheating is the obvious way to do that; but to whatever extent actually being good at what the test is meant to ascertain will better prepare you for the life that passing it leads to, you're hurting yourself in the long run, even if in the short run cheating seems like the obvious way to go.
Outsourcing your own thinking and ability to express yourself seems like cheating at the very game the PMC is expected to play, and play well. You might get into the game by cheating, but at some point, you'll either falter or find yourself so dependent on your means of cheating that you have to make some pretty gnarly compromises about your actual self and what you can do, which seems like a bad tradeoff, all told.
Cheers,
Jeff
no subject
It becomes a sort of meta-thinking—instead of thinking about, say, the problem at hand, I'm thinking about how to prompt this other "intelligence" about the problem at hand. That, right there, is the opening of the door. We have adapted to this tool, much like the evolution of any athlete in their sport or worker with the tools on the bench. And yet the issue for me is that this tool itself adapts, and although in principle it's simply being fed knowledge from the outside world, I suspect it's prone to go places one does not expect...and take people with it.
We can only know a tree by its fruit. Per a friend of mine who is far deeper in the weeds than I, there have already been at least nine deaths related to people acting on information from these things (vulnerable people, I suspect, but nonetheless...). I suspect there will be more.
no subject
These things are dangerous, but they're dangerous more in the sense of a Red Queen's race than a Manhattan Project, I think.
no subject
This makes me simultaneously more and less pessimistic than the average AI booster - I think "take-off" is less likely than most of them do, but I worry that the damage in the meantime might be greater than they anticipate.
I guess we'll see.
Cheers,
Jeff
no subject
The apocalyptic situations I envision for these are less about genuinely evil intent and more about Bostrom's "paper clip maximizer" thought experiment, in which an AI is given a particular mundane task and pursues it relentlessly, and there's no way to shut it off. If you've ever worked with software and mistakenly given a program the slightest of wrong instructions, you know the feeling when it unleashes destruction. I don't know that such a scenario would end the planet, but I suspect it could cause considerable badness.
no subject
I think the answer is "at the very beginning" (i.e., don't involve LLMs at all). My reasoning is threefold:
1. OpenAI loses money on every request, even for their paying and SaaS customers.
2. Despite [1], LLMs have failed to take off in even a modest way: the LLM "industry" (which is basically just OpenAI, as it makes up almost all AI revenue) remains small in economic terms.
3. We have a massive labor crisis in the country.
To put it another way, a couple of people at the apex of the LLM "industry" seem to be making a lot of money (Altman, Musk, Pichai, Nadella, etc.), at the expense of their businesses, the American tech sector, the dollar, and Western society as a whole.
The only way I can see that any of this makes sense is if (generously) LLMs were meant from the beginning to be a controlled demolition of the dollar or (cynically) LLMs were meant from the beginning to be a con to part investors from their money. Either way, it is well-described by the "Ra"/Teiresias theory where a few people are developing their negative polarization by preventing the execution of the free will of many.
Sorry to harp on this, but I want to emphasize its pettiness: it's important not to worry about it. Don't look down! Focus on divinity!
no subject
I think your point elsewhere on this thread is also well taken: human beings by themselves generate a massive amount of noise in terms of Internet communications. One would think the introduction of LLMs makes that exponentially greater, such that sifting signal from noise becomes even more difficult.
I appreciate your concerns and advice, friends. Indeed, back to the path...
no subject
That timing has, though, straddled the whole LLM thing. When I started - not even a consideration. Today? My department head wonders aloud whether we can get away with not teaching to the LLM default of the current PMC world.
I share your deep concerns about the maybe-harmful places trusting such tools can lead, even leaving out the more egregious cases of folks driving themselves nuts by treating their local instance of an LLM as a girlfriend or spiritual helper (or both, bleh). At its mildest, as a teacher of writing and speaking competence, I think "well, you don't bring a forklift to the gym," by which I mean: okay, maybe an LLM might make some suggestions on how to tighten up your email or memo or academic paper, and it might even be good advice, but if you've never figured out how to tell a better piece of writing from a worse one, how will you judge an LLM's output as "better" or "worse"? At the more extreme end, you give over not only the judgment of "is this good?" but also "what even is good?" It's not like quality of writing is a wholly objective measure like gravity or the passage of time - there's a value-judgment there, and if you start giving that over to machines, it seems like trouble to me.
Maybe there's a helpful midpoint here, but I find myself more and more drawn to the "Butlerian Jihad" side of things, even as the professional pressures around me move in the exact opposite direction.
Cheers,
Jeff
no subject
So, in practice, the way it has shaken out for me is that my esoteric work has led me to inject more skepticism/protection into my teaching - rather than saying "here's how to use Cialdini's shortcuts to get what you want" I more emphasize "here's how others use Cialdini's techniques to try to trick you, do you really want to let them?" I'm pretty savage on advertising in general.
All that said, I do think there's such a thing as legitimate influence/persuasion/advertising/marketing, but I think its borders are much hazier than many proponents insist, and that someone who wants to engage in such activities and remain ethical has to hold himself to a much higher standard than most business majors or MBAs would think. To put it more concretely, if I have a product I think is genuinely useful, it makes sense that I would use an understanding of what folks pay attention to and remember to get them engaged with it, but the moment you start doing so, you have to ask yourself if your product is really as good as you think, and whether making it easier to engage with is actually intruding on anyone's free will or not.
Lately, it's been bothering me quite a bit that I am pretty much literally a sophist - I teach rhetoric and justify it as a "tool that can be turned to different ends" and (mostly) disclaim teaching what ends it's right to turn such tools to. I solace myself a bit by including a discussion of ethics, where I don't say "this is right and this is wrong" but instead "here's a handful of ways of sorting out how to think about right and wrong; my preference is this one (virtue ethics), but it's up to you to pick how you decide things in your own life." Maybe weak sauce, but when I discovered there was no required ethics content in the undergraduate business major at my school, I figured I could at least devote a lecture or two to it.
Anyhow, sorry for a long response to what may have been a mostly throwaway line. The short version is that I've noticed much of the overlap of teaching "effective communication" and "magic," and I've been left with the feeling that the default goals, ethics, and so forth of the former have a lot to learn from the latter.
Cheers,
Jeff
no subject
The very short version is that Cialdini (a social psychologist) both reviewed the communication research and did extensive interviews with people whose job it is to convince others to do what they want (salesmen, police interrogators, etc.), and from that derived six principles that affect whether someone is likely to go along with what you're trying to convince them of - Reciprocity, Commitment/Consistency, Social Proof, Authority, Liking, and Scarcity. These principles can be turned into "shortcuts" by sales guys and the like - you walk into a car dealership and the guy starts asking you personal questions until he hits on something you have in common, and then starts talking about that, and now you like him more because you're similar, which better disposes you to buying a car from him (and makes you less likely to bargain hard on the price, since you don't want to upset your new friend).
All of the principles are normal and natural (of course you're more likely to do something that someone you like asks you to do!), but where it gets gray or worse is when the person trying to influence contrives to use the principles in a calculated way that maximizes their benefit with as little cost to them as possible. Like offering cheap swag to instill a feeling of reciprocity, or leading the conversation to get you to agree with something he then paints as being consistent with what he really wants ("wouldn't you agree that it's bad that some children starve? Ah, so then you're willing to donate to my charity for starving children, of course.")
Anyhow, maybe I'll expand on it and try to make some connections to magic, as you suggested.
Cheers,
Jeff
no subject
Fascinating. Sounds a bit like the "FORD" principles for relating (family, occupation, recreation, dreams). Sounds easy, but when a person with inherent charisma gets ahold of these tools...watch out.
no subject
The trouble is when folks start trying to be likable/authoritative/creators of consistency/scarcity/whatever and making decisions based on that. Then the ethics quickly get very fuzzy, and sometimes outright scary. A more serious example of "consistency" used for influence: the North Korean/Chinese communists, when they captured American troops, would say "hey, make this written/recorded statement about problems with America." They'd start with small, easy-to-agree-to stuff, like "nobody's perfect, just talk about something that could be better in America." And, of course, if you refused, you didn't eat and/or got beaten. But then, once you'd talked about a problem America has, you'd be asked to talk about why America was problematic - "you already admitted one problem, why not others?" And, of course, if you didn't escalate, you or your buddies starved and/or got beaten, and so on.
So, you see this very powerful (but gross) mix of compulsion and playing on normal human psychology (why would you not want to be seen as consistent with what you've publicly asserted in the past?). I think most businesses don't go all that far, but as I've come to care about the ethics of such things more strongly, I've found "the line" is much harder to identify than we might wish.
Cheers,
Jeff
no subject
Have you read "De Magia" by Bruno or his other magic essays? The line from them to modern disciplines like advertising (perhaps even New Thought) seems pretty clear.
no subject
I don't know if this is helpful to you, but I've actually been thinking about LLMs in terms of the "Ra"-inspired interpretation of the Teiresias myth. If the goal of the human life is to make a willed commitment to love (either love of self or love of others), then LLMs are simply a natural extension of the higher end of the "love of self" path; it's a subtle form of not just enslavement but self-enslavement, which adds to the power of those psychopathic elites who push for them (and thus furthers their path to the divine, even if in a roundabout way). Of course this seems horrific to those of us, like you and me, who are growing to the point of firm commitment to the "love of others" path... since what can we do in the face of it? We can try to educate those who are being bamboozled, but "one can lead a horse to water, but not make it drink," and the really insidious part is that, having allowed their thinking capacity to atrophy, such people are generally beyond educating. The horror is amplified by this general collapse of educability being pushed not only by LLMs but by all facets of society.
I have been thinking a lot about Laozi lately; that the only way to help others is to align oneself utterly with the Tao, and in that way they will naturally be helped without our having seemed to do anything. Thus I endeavor to open myself to God with renewed vigor, and trust that Providence will work all the rest out in Its time.
no subject
I have to interact with these systems daily, so it's interesting to see the reaction of others who also employ them. It's a disquieting accommodation to the technology, but then I guess it can be argued such adaptation is typical of all technology. Yet it's the direct access, as it were, of language into one's intellect that troubles me. At what point does the prompter become the prompted?
no subject
To your question, yes, I think subhuman things dehumanize us to the extent that we feed them (cf. 1 Tim. 6:10–11). But I don't think LLMs are special in that regard; it is true whenever we cease to do our best. (Certainly, offloading our capacity to think to a machine is a particularly egregious form of this, but so is any other kind of laziness, intellectual or otherwise!) Euripides has a good line here:
[...] ἢν δέ τις πρόθυμος ᾖ,
σθένειν τὸ θεῖον μᾶλλον εἰκότως ἔχει.
[...] Remember, when one is zealous,
the gods likewise have more strength.
(Orestes speaking. Euripides, Iphigenia in Tauris 910–11.)
Or, a bit more loosely, "the more one strives, the more the gods strive for them." The word I translated "zealous" is πρόθυμος pro-thumos, "forward in spirit" or "engaged," the exact thing I mean when I quote my angel saying, "do your best."