We actually see this happening in quasi-real time with people writing stuff (I'm an editor, of sorts, in the waking world). You have people clearly using generative AI to write, or at least clean up, their stuff...but we screen for that. And now you have other LLMs that "humanize" said copy, to slip it past attempts to screen for AI. At some point you have to ask: given the amount of energy used for such a process—one LLM after another—and given that we know even saying "thank you" to LLMs consumes massive amounts of energy, at what point would it just be more economical to simply pay a person to write said copy? The only way I see this coming undone is if, as with recent developments in China, they make the use of these systems much, much less energy intensive. Which I guess is possible (perhaps even probable).
The apocalyptic situations I envision for these are less about genuinely evil intent and more about Bostrom's "paper clip maximizer" theory, in which an AI is given a particular mundane task and pursues it relentlessly, and there's no way to shut it off. Which, if you've ever worked with software and mistakenly given a program the slightest of wrong instructions, you know the feeling when it unleashes destruction. I don't know that such a scenario would end the planet, but I suspect it could cause considerable badness.