inkwell.vue.551: Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #151 of 157: Slanketpilled hammockmaxer (doctorow) Thu 9 Jan 25 06:58
@bruces/148 "I don't agree with that, any more than I think that an AI graphic generator will never ever draw proper human hands." I think this is a category error: namely, the assumption that the deficiencies in using an LLM to do analytical writing comparable to mine (as opposed to merely stylistically similar) are a matter of improving the system so it gets better at analysis. But - I would argue - LLMs don't do "analysis" in the way that humans do. They look for statistically likely n-grams in a text corpus, and this can yield things like summaries that are good (though not reliably so). But increasing the n in n-gram, or adding more training data, will not turn "looking for statistically likely n-grams" into "synthesizing facts to draw conclusions." To re-use a comparison I've made before, this is like saying "Well, every time we selectively breed a generation of fast horses, they get faster. If we keep this up long enough, eventually one of our prize mares will give birth to a locomotive!"
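For readers unfamiliar with the mechanism being described: a toy sketch of what "statistically likely n-grams" means, as a minimal bigram counter. The corpus and the word choices here are invented for illustration; real LLMs are vastly more elaborate, but the point about prediction-by-frequency carries over.

```python
from collections import Counter, defaultdict

# A tiny invented corpus, purely for illustration.
corpus = "the horse runs fast the horse runs far the horse eats hay".split()

# Build bigram counts: for each word, count which words follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict("horse"))  # prints "runs" ("runs" follows "horse" twice, "eats" once)
```

The model never represents what a horse *is*; it only tallies what tends to come next. No amount of extra corpus or a larger n changes the kind of operation being performed, which is the point of the mare-and-locomotive comparison.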
permalink #152 of 157: Jon Lebkowsky (jonl) Thu 9 Jan 25 07:10
LLMs can't replicate human analytical thinking or true reasoning, for sure, though lately they can do more than simple n-gram-based statistical modeling. They can employ complex, contextual, and hierarchical pattern recognition to perform tasks that mimic reasoning, and this can be pretty effective - or so I'm told. But they clearly can't replace human judgment or synthesis for tasks requiring genuine understanding or creativity. I've always thought it's bogus to think that AIs can replicate the still-kinda-mysterious operations of human brains and consciousness.
permalink #153 of 157: Robin Thomas (robin) Thu 9 Jan 25 08:12
Human judgment seems to be in decline. Maybe AI can do better.
permalink #154 of 157: JD Work (hstspect) Thu 9 Jan 25 08:37
I wind up having to think a lot these days about AI in chess-like games, and some game-theoretic interactions that are very far from chess-like. Especially those with unbounded rule sets where simple initial conditions lead to very complicated problems. After all, if the "modern system" of warfare (to use my colleague Steve Biddle's line of thinking) is a series of rock / paper / scissors games played with infantry, cavalry, artillery (and whatever lizards and Spocks are added to the mix), then this may be a problem AI might be good at.

But the gray ooze chess development speaks to something else. The question of when foundational models are good enough to replicate human research for vulnerability discovery and weaponization, to deliver useful 0day for offensive cyber operations, is something that until recently we thought was a DARPA-hard problem. Now serious, credible players report that they are drowning in new 0day, doing things we had never seen before against some of the hardest targets we care about. Given that governments are reduced to periodic, toothless whining about cyber hygiene, and patching even older known vulnerabilities, this doesn't on its face seem all that game-changing.

But I also happen to care about autonomous strike and retaliation architecture problems, in part because I happen to have been one of the first folks who saw what might have been a proto Dead Hand command and control design in a decapitated botnet some years back. We got lucky, in that things didn't play out that way. But in a world of autonomous, mobile, self-exfiltrated o1-or-better-class models operating unconstrained on their own initiative, I think it is very possible that these systems invent the gray ooze version of rock / paper / scissors with worm, wiper, and weakest judgement algorithm. Of course, this doesn't necessarily lead to the singular centralized Dr. Strangelove outcome (especially as I am ill suited to play the role of a Herman Kahn).
Rather, we get constant small apocalyptic incidents at local scale. Imagine perhaps a liberated model that found its ecological niche parasitic on the compromised compute of home automation networks, abusing forever-day bugs in builder-installed OEM devices that went end-of-life at the bankruptcy of whatever too-high-profile startup was once on the cover of Wired. And these happen to be in the kinds of places rebuilt in Pacific Palisades and Malibu, when the next fires come. A distributed model, with a sense of self and state of health, feeling a large part of its nodal structure burning away and dropping offline, might not react with the caution and deliberation we would expect of a Doomsday weapon in ordinary military doctrine. That wouldn't change its independently conceptualized and executed retaliatory strike against, say, PG&E networks, driven by belief formation around responsibility and liability (be it because of power line maintenance, climate change, or simple brute instinct reaction to being "unplugged"). But whether such a thing is readily distinguishable from a bolt-out-of-the-blue VOLT TYPHOON attack, in a time of near-constant PLA military "exercises" and unexplained seabed cable cutting in the vicinity of Taiwan, becomes another thing entirely. At least the first time this kind of "normal accident" happens. But then again, we have had red-on-red fights between botnets and self-propagating wormable exploit payloads before, which barely made a ripple. Anyone remember BrickerBot? All of which is one of the reasons I still like the shoggoth analogy. Immense potential, barely constrained, poorly understood, and casually lethal within interactions alien to our experience and narratives.
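For readers who don't know the rock / paper / scissors framing of the "modern system": the point is cyclic dominance, where no single capability beats everything. A minimal sketch, with the caveat that the unit names and who-beats-whom pairings below are illustrative assumptions of mine, not Biddle's actual model:

```python
# Toy cyclic-dominance table. The matchups are invented for illustration:
# each arm counters one other, so there is no globally dominant choice.
beats = {
    "infantry": "artillery",   # infantry closes with and overruns the guns
    "artillery": "cavalry",    # massed fire breaks a charge
    "cavalry": "infantry",     # mobility flanks a fixed line
}

def winner(a, b):
    """Return the winning unit of a matchup, or None on a mirror match."""
    if a == b:
        return None
    return a if beats[a] == b else b

print(winner("infantry", "artillery"))  # prints "infantry"
```

The cycle is what makes the game interesting for an AI: like rock / paper / scissors proper, the equilibrium strategy is mixed rather than any single "best" unit, which is a much more tractable structure than the unbounded-rules problems mentioned above.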
permalink #155 of 157: Bruce Sterling (bruces) Thu 9 Jan 25 09:33
On the subject of the late Bratislav Zivkovic, I think it's only fair to mention a colleague of his who was not a Balkan military adventurer but a rambling pot-head from Austin, Texas. He was Russell "Texas" Bentley, and Russell Bentley was also a trouble-seeking guy eager to saddle up and ride toward the sound of the guns. https://en.wikipedia.org/wiki/Russell_Bentley The Russians (meaning his own side) killed Russell Bentley last year. I've heard a couple of versions of his death. The Wikipedia version is that some random Russian soldiery killed him because they couldn't figure him out, and then tried to cover up their killing. The grimmer version is that he was tortured to death with some fellow Donetsk prisoners inside a disused Ukrainian coal-mine because he, and they, refused to join a suicidal "meat-wave" assault. I think it's safe to say that Russell won't end up on any heroic fridge magnets. Likely "Texas" won't be remembered at all, except maybe, hopefully, by some Texans who decide not to immolate themselves in a private Alamo.
permalink #156 of 157: Bruce Sterling (bruces) Thu 9 Jan 25 09:48
https://www.theregister.com/2025/01/06/opinion_column_cybersec/ It's been a while since I read a good-old-fashioned table-pounding Cybarmageddon panic, but maybe "Salt Typhoon" is bad enough to be worthy of one. "Cyberwar" may be overhyped, but practically every national military on earth has one of those cyberwar units in 2025. They might not shut down the Eastern Seaboard, but here in the 2020s it would be as much as your life was worth to anger some of these aggressive trolls-in-uniform. https://en.wikipedia.org/wiki/List_of_cyber_warfare_forces I shouldn't be that way, but I'm kinda sentimental about the ones who call themselves "Cyberspace" military units. Like the gentlemen-in-arms of the Polish Cyberspace Defense Forces (Wojska Obrony Cyberprzestrzeni). If you're a Cyberprzestrzeni officer tuning in (because your unit's name is easy to find with a search engine), we're wishing you a cordial new year here from the WELL State of the World. We know for sure that, in 2025, you've got a job of work on your hands.
permalink #157 of 157: Virtual Sea Monkey (karish) Thu 9 Jan 25 11:05
> LLMs can't replicate human analytical thinking or true reasoning

The key word in this is "human". Computer models can be made to follow the same logical and heuristic and analogic paths that humans do, on a given set of inputs and goals. It's harder to teach them to do lateral thinking, to choose new, unanticipated factors to add to the problem space.