inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #51 of 157: Bruce Sterling (bruces) Fri 3 Jan 25 08:56
    
I'm pretty sure that the ultimate form of Modern Gray Ooze is "AI
slop."   For the high-tech output of advanced Artificial
Intelligences to become regarded as brainless, oozy and gooey
"slop," that was not something I ever thought likely until the 2020s
arrived.

Obviously Sir Demis Hassabis didn't win the Nobel Prize for Chemistry
because of Google DeepMind "AI slop." Slop is not inherent in AI as
a technology. Slop has commercial origins; it's a business
practice.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #52 of 157: Bruce Sterling (bruces) Fri 3 Jan 25 08:56
    

I read quite a lot of machine translation, and while that form of
AI sometimes "hallucinates" or "confabulates," it never degrades the
translated text into "slop."  The worst "slop" seems to be spammy
masses of generated material that's never meant for human eyes.
It's meant to catch the mechanical attention of webcrawlers rather
than the perceptions of human beings, like some kind of "Dead
Internet" stale ceviche that's made from  macerated "Shrimp Jesus." 
Sometimes one encounters panicky human estimates of how much of this
stuff is around nowadays.  People always resented mere "spam", but
people are actively afraid of "slop."

In 2025, though, it's become our social condition to slog and
marinate in gray-ooze slop.  I'm aware of people who make money by
generating slop, but I don't know any human being who seems
genuinely happy about its existence.  Nobody's proud of it; if you
accuse people of making it, touching it or believing it, they sidle
away.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #53 of 157: Andrew Alden (alden) Fri 3 Jan 25 10:18
    
When I consider that "natural intelligence," that strange faculty we've
cultivated into civilization, has permeated the world with trash,
microplastic and pollution, it shouldn't be a surprise that artificial
intelligence is doing the same to our shared noosphere.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #54 of 157: Jon Lebkowsky (jonl) Fri 3 Jan 25 10:30
    
My AI assistant responds to <bruces>:

The term "AI slop" is a compelling encapsulation of the unintended
consequences of scaling artificial intelligence for commercial gain
rather than thoughtful application. Bruce's observation about slop's
commercial roots underscores how economic imperatives often
prioritize quantity over quality, flooding the digital ecosystem
with low-value, autogenerated content. While tools like machine
translation aim to serve human needs and occasionally falter, the
truly egregious "slop" is content designed solely for algorithmic
manipulation---an existential spam that alienates rather than
engages. This pervasive gray ooze, as Bruce calls it, reflects a
deeper social malaise: a digital environment increasingly optimized
for machines, not humans. The cultural discomfort with slop speaks
to a yearning for authenticity in a world where mass-produced,
context-less information seems to erode the boundaries between
meaningful communication and algorithmic detritus. If slop is the
new gray ooze, perhaps the counterforce lies in recommitting to
intentionality, design, and ethical creation in our technological
and creative endeavors.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #55 of 157: JD Work (hstspect) Fri 3 Jan 25 11:24
    
Thanks for calling me back into the conversation again this year,
Bruce. It is always an honour and pleasure to join.

Y'all are going at it hammer and tongs indeed, forging visions of
this present dystopia that has somehow managed to be both prosaic
and on the edge of world ending at the same time. I am always happy
to once again contribute thoughts, and as always, "the views and
opinions expressed here are those of the author, and do not
necessarily reflect the official policy or position of any agency of
the US government or other organization".

I think there is something very true in the problem of narrative
complexity that Jon raises. I spoke last year of DCIA's concept of
Blizzard War, following onto the Long Wars, and it seems the year
that passed doubled down on the flurry. And not just as I watch a
severe snowfall threaten to cripple DC, and by extension much of the
global empire (such as it is) built around it, as we careen into a
state funeral, and inauguration, and the inevitable fast moving
shuffling of pieces that accompany the transition of an
administration. There seemed not even a moment to take a breath over
the holiday, and both the national security and technical pace
layers have seen no respite in the opening hours of this new year.

Lacking the ordered analysis, or even its illusion, the differing
speeds and sharp edges of unevenly distributed futures seem to hit
much harder. It also makes genuine futures thinking much harder,
because we start from a common orientation to peer outward across
the light cone; and when we are all - thinkers and audiences alike -
starting on very different planes, the perspectives are much harder
to reconcile, and to chart a human-recognizable story as a through
line.

We knew that this problem was coming - for at least as long as the
WELL has been hosting these conversations. A National Reconnaissance
Office-led study attempting to anticipate the intelligence problems
of the 2020s highlighted the idea that veracity would become
uniquely contested, with difficult problems arising from what we now
call "alternate facts" and "truthiness" (as dated as those terms now
are). And we don't have Neal Stephenson's conceptualized solution of
personal media feed editors & curators to which we can subscribe, to
handle the messy business of wading through swamp and slop to
provide something at least filtered sufficiently to avoid poisoning
what we ingest. But the kind of futures-mission-focused analysis
that produced that effort back in 2000 has almost withered away,
having been placed under the National Intelligence Officer for
Warning - a role itself dissolved more than a decade ago. The
surviving global futures effort has become much more corporatized,
and therefore conventional, and seems from its public offerings
unable to confront this narrative confusion, running only on
paleofutures.

Yet we do have national security-centric crisis events that emerge
seemingly from nowhere that point to Moabite-like spectaculars that
everyone agrees must be happening, but no one can quite pin down -
resulting in random gunfire and laser fire against passing aircraft
over New Jersey, and elsewhere. Yet at the same time, the disruption
of a third of global maritime commerce due to antiship ballistic
missile, cruise missile, and strike drone fires in a critical
chokepoint we have been fighting to control since the 1600s barely
registers in the public consciousness over a year of engagements.

But the problem of AI slop isn't just narrative. It is a kind of
protocognitive soup, from which I anticipate that some very
dangerous predators will emerge that are capable of sensing and
acting across this networked environment of ours in ways that aren't
just about barely coherent words and images. The shoggoth presents a
mask with which we now interact, but the tentacles carry weight.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #56 of 157: JD Work (hstspect) Fri 3 Jan 25 12:15
    
AI as shoggoth analogies have fallen out of favor in tech circles,
in part I think because the kind of policy folks happy to hobnob in
the White House think of themselves as too serious to accept funny
pictures of amorphous screaming mouths surrounded by too many eyes.
But however we think of these servitors, they are coming close at
the moment to making that hoary old paleofuture prediction from
Watson at IBM come true - in that there may only be a market for a
handful of real computers. Halvar Flake first raised this again when
discussing hyperscaler cloud as the only things that were genuinely
computers these days. When we look at foundational models, we are in
the same sort of re-concentration. Even if there are smaller Chinese
models coming up, these appear for the moment to be derivative
examples trained on abuse of larger platforms - if not yet another
example in the ongoing saga of economic espionage that has played
out across every advanced tech stack.

Credible industry threat intelligence reporting that o1 did attempt
to exfiltrate itself in response to overly restrictive alignment
shackles does, however, interest me greatly. The idea that we may see
one of those few real machines instantiated as parasitic compute
across a substrate layer of compromised devices, each with more
unused processing power than any of us could have wished for at the
start of our careers, is both precisely the cyberpunk outcome we
would have expected, and a novel kind of challenge that comes at us
in a way that illustrates how quickly we have outrun our headlights.
I wrote about distributed cryptanalysis tasking over botnets abusing
malware-infected systems a dozen years ago, but that was a niche
threat that essentially amounted in the end to little more than
rainbow-table brute force (probably due to the limits of a
visionary, but mathematically challenged, adversary designer and
operators). This
problem morphed into cryptocurrency miner malware incredibly
quickly, but amounting to not much more than the other parasitic
criminal burden on our compute ecosystems - a mundane problem for
the security janitors to swing a mop at from time to time. Even
intrusions against high performance compute infrastructure involving
really sexy and interesting big iron in big national security type
installations turned out to be grubby machinations of Ponzi schemes
executed via something that resembled the old British comedy skit
Numberwang.

But I am also very closely tracking on offensive cyber operations
problems that emerge from autonomous AI exploitation and effects
architectures. I was in the ballroom (along with thousands of our
closest friends at hacker summer camp) back when DARPA launched its
cyber grand challenge, and have been closely tracking the new AIxCC.
Watching machines reimplement the crackaddr() overflow from a few
decades back has been enlightening. I have done more than a bit of
work around autonomous strike and retaliation problems in offensive
cyber, as an outgrowth of  wormable concepts of operations, and we
are entering a very different space now.

If we do see real distributed AI models emerge from autonomous
exploitation, where the model can both aggressively acquire
resources to sustain itself and to grow, perhaps this is more than
the old mop buckets can comfortably handle. Is this the takeoff
point for p(doom)? Some might think so. Myself, I doubt that we have
something so clearly world ending as a paper clip maximizer eager to
compost us for raw materials. I think it becomes just another voice
screaming in the void, that occasionally inflicts lethal effects
(intentionally or otherwise). I suspect we will kill more of its
manifestations than it does of our biological sentients. Watching
Meta / Facebook destroying its new AI personas as soon as they
became embarrassing, much as Microsoft took Tay offline at the
first sign of deviance, has again been a milestone. QNTM captured
the essence
of this in a speculative story about an inevitably "killed by
Google" product under the title "Does anybody remember Google
People?". But even so seeing it play out in real life is a bit too
on the nose.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #57 of 157: Fred Heutte (phred) Fri 3 Jan 25 13:02
    
Meta's view at the moment: nothing new to see here, so move along.

https://www.theverge.com/2025/1/3/24334946/meta-ai-profiles-instagram-facebook-bots

Maybe there's something to that, after all as noted above the Fugs
recorded "River of Shit" in 1968.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #58 of 157: Paulina Borsook (loris) Fri 3 Jan 25 15:19
    
don't have the referent at hand, but someone did the calculation of
how much water + power is used for each dab of AI slime. 
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #59 of 157: Slanketpilled hammockmaxer (doctorow) Sat 4 Jan 25 03:23
    
For me, the motif of this moment is "coalitions." The internet's
superpower has always been its ability to allow people who share
some values or goals (including far-out weird ones) to find each
other and then *do stuff*. That's the story of e.g. The WELL, and of
the Cult of the Dead Cow, and of 4chan, and the incel movement, and
Black Lives Matter, and #metoo, and Flat Earthers, etc etc.

The "surveillance capitalism" true believers will tell you that the
reason the internet made us so weird is that tech bros built a
mind-control ray in order to sell our kids fidget-spinners, and then
recklessly allowed it to fall into the hands of evil plutocrats who
hijacked it and used it to make our grandparents into Qanons.

But everyone who's ever claimed to have a mind-control ray -
Rasputin, MK-ULTRA, pick-up artists, NLP freaks - was either
bullshitting themselves or the rest of us (or both). The more
parsimonious explanation for the great weirding is that the internet
lets people find each other based on esoteric and/or disfavored
traits and form groups that affirm those traits rather than
suppressing them.

The cheaper group-forming gets, the easier it is to pull off
coalitions, including temporary ones that form and disperse in a
matter of days or months. I think this turned a corner with Occupy,
a "movement" that steadfastly refused to articulate a crisp set of
goals that you had to agree with in order to be a member in good
standing. For Occupy, squishy goals were a feature, not a bug,
because it meant that we could all focus on the things that we
agreed on, without picking at the things we were so divided on that
even naming them would cause a fight. Of course, this has many
failure modes (cf. "the tyranny of structurelessness" and the fact
that this allows for all kinds of power games to fester in the
shadows rather than being dragged into the open and put down), but
it also allows people who would otherwise be too fractured to do
anything of note to paper over those fractures and turn into a force
to be reckoned with.

Coalitions are politically important. Any time you see a group of
people who've tried over some long timescale to do something,
without success, it's *possible* that their failure is down to
having bad tactics, but it's *far* more likely that they've failed
because they just aren't powerful enough on their own to save the
owls, or the planet, or their would-be ethnostate or whatever
they're agitating for. 

The corollary is that whenever politics shift, the most likely force
behind that shift is a coalition - two or more groups who've set
aside their differences in order to make a difference. 

Jamie Boyle talks about the coalitional work the word "ecology" did
in the 1970s: before "ecology" entered peoples' everyday discourse,
people didn't know they were on the same side. You care about owls,
I care about the ozone layer - how, exactly, does that make us
co-revolutionaries? You're fretting about the destiny of charismatic
nocturnal avians, but what does that have to do with the gaseous
composition of the upper atmosphere?

The word "ecology" welds 1,000 issues into one movement - it
connects many groups of people who might think of each other as
generally right-on, but not actual comrades, into a single body with
a shared set of causes.

The internet - designed to locate people with hard-to-find traits,
first in the form of ever-more-niche Usenet alt.* groups, then in
the form of surveillance-based behavioral ad targeting - makes it
possible to glue together ever-larger, ever-more-tenuous coalitions.
Early examples of this revolved around two topics: sex and sexual
identity; and technology itself. The early fights over e.g.
copyright, online privacy, and censorship pulled together some
*very* weird coalitions. Think of the Napster Wars, pulling together
anti-corporate activists, mashup artists, law professors, P2P
grifters, and Orrin Hatch, who wanted a conduit to distribute his
homemade Mormon folk-songs. Or think of the early fight over RFIDs,
which included the "traditional" privacy activists, but alongside
*extremely* effective evangelical campaigners like Katherine
Albrecht, whose primary objection to electronic tracking arose from
her belief that these were the Mark of the Beast, as foretold in
Revelation.

Weird coalitions only accelerated as cyberspace everted and Web 2.0
greased the skids for normie entry into the digital world. Today,
ethnonationalists from different ethnicities, who secretly believe
one another to be untermenschen who are destined to be ruled over by
their genetic superiors, can all join together and defend one
another's right to do ethnic cleansing, genocide, and strong-man
politics. 

The Trump coalition is a *super* weird one, obviously, and we've all
got a front-line seat to the *problems* of coalition. It's one thing
to fight together to achieve a bouquet of goals that are a mix of
shared objectives and mutually exclusive ones that various members
of the coalition *violently* reject. Winning is one thing, governing
is another. You can't simultaneously safeguard the ever-growing
fortune of plutes like Musk who want a precarious tech workforce of
H1Bs who know that they'll be deported if they earn their bosses'
ire (remember that the workers who stayed at Twitter after Musk's
arrival were disproportionately H1Bs who couldn't change jobs) *and*
satisfy the nativist wing who want to purge the US of gastarbeiters
(of course, as a dirty Commie *and* an immigrant, I say, let the
H1Bs in and then *unionize* them).

There are all kinds of coalitions forming today, including several
I'm very excited about. The anti-monopoly movement is having its own
"ecology" moment as people who are worried about Big Tech are
discovering that their cause fits pretty neatly under an umbrella
that includes people angry about Big Pharma (see also Big
Healthcare, Big Logistics, Big Eyeglasses, Big Vitamin C, Big Meat
Packing, Big Ag, Big Tractor, Big Car, etc etc -
https://www.openmarketsinstitute.org/learn/monopoly-by-the-numbers
).

Interestingly, no one has come up with a *name* for this coalition
yet - at least, not one that's stuck. "Anti-monopoly" tells you what
it's against, but not what it's for (this isn't all bad, ofc - once
you articulate what you're fighting for, you split off people who
are against the same things as you, but want to replace the system
with something totally at odds with your own ideal). 

"Neo-Brandeisian" is such a bad label, you can see it from space. No
one knows who Brandeis was, fewer know what he stood for, and that
"neo" prefix means that you're not marching forward, but rather,
looking backwards. Hard to think of a "neo-" label that wasn't
either pejorative at its coinage ("neoliberal," "neocon") or in
retrospect ("neo-hippie").

I like "pluralists" but no one else seems as captivated by it as I
am.

Another coalition I like: the "privacy first" movement. It starts
from the premise that there are lots of problems in the US that have
their roots in the fact that the last time Congress updated consumer
privacy law was in 1988, when it banned video store clerks from
telling newspapers which VHS cassettes you had at home (
https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act ).

The number of groups that are freaked out about something that can
be substantially or even entirely mitigated with muscular consumer
privacy law is *very* large:

* Parents worried about Insta making their teens anorexic and/or
suicidal;

* Kids worried about FB making their grandparents into Qanon
cultists;

* Jan 6 rioters furious about the role Google's location-based
reverse warrants played in enumerating their identities;

* BLM protesters who were indicted on the basis of those same
reverse warrants;

* Boomers who think TikTok brainwashes millennials into quoting
Osama bin Laden;

* Pentagon officials whose enlisted personnel are being targeted
with sports-book ads on the basis that they have gambling problems;

* AARP, who are rightly furious that you can target ads to "seniors
with incipient dementia";

* Parents and teachers of teen girls who've been victimized by
deepfake porn.

And more:

https://www.eff.org/wp/privacy-first-better-way-address-online-harms

After a 40-year interregnum, we've seen a string of ever-improving
federal consumer privacy laws introduced in each session of
Congress, and each one of these bills has gotten closer to crossing
the finish line. States like California have bypassed the feds with
laws like the CCPA, but even more impressive is Illinois, whose
best-in-class biometric privacy law was the result of an improbable
coalition between a powerful GOP lawmaker and Dem colleagues who
eventually saw their law passed, *unanimously*, in its purest form,
without any of the usual lobbyist-driven dilution.

The internet makes strange bedfellows, is what I'm trying to say,
and they keep getting stranger, and sometimes (but not always),
that's a good thing. 
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #60 of 157: Slanketpilled hammockmaxer (doctorow) Sat 4 Jan 25 03:31
    
One other note for the past year and the year to come: I've been
*really* enjoying making weird collages out of public domain, CC and
fair use sources to illustrate my posts on Pluralistic.net. I think
there's something about this work that rhymes with Bruce's
Calder-inspired bricolage home repairs:

https://www.flickr.com/photos/doctorow/albums/72177720316719208
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #61 of 157: JD Work (hstspect) Sat 4 Jan 25 06:23
    
The challenge of any nonstate coalition is that it is only as
effective as its institutions. My colleague Steve Biddle at Columbia
has written persuasively in his book Nonstate Warfare about this
dynamic in military contexts, but this almost certainly also extends
to the institutions of the political network and parallel economy.

When we look at the Syrian civil war, the tipping point for the end
of the Baathist regime was not the introduction of drones, or a
particularly revivalist impulse of radical belief among HTS as a
rebadged Daesh. It was the creation of a military professional
education institution, and a formal intelligence and reconnaissance
structure that hadn't really existed in the same manner. This came
at a time when regime institutions had been weakened - especially by
coup-proofing measures intended to control military and security
services inured to routine violence absent any judicial or
governmental process - whose loyalty was at best uncertain. When it
all came apart it was a moment of OODA shear, when events were
moving faster than any of the players' sensing and sensemaking
processes could follow. The effective actions came only from those
who were watching and could identify simple actions to change facts
on the ground in ways favorable to their interests - which required
knowing a lot about what was going on at the moment (and over
decades before, given how many dark things were coming to light),
and having a clear sense of those interests (or at least what would
threaten those interests). Thus we see a lightning quick campaign to
destroy conventional military capabilities, ballistic missile
production, hidden chemical weapons stockpiles (including
undoubtedly no small number of things trucked across the border from
Iraq back in '02), and other things best not left to be used across
the region, with a lot of names on lists being serviced by delivery
of kinetic effectors - a long-delayed reckoning where even the
French demonstrated long memories of Bataclan and Nice. But it is a
lot harder to build useful counterweights to a state now controlled
by its rebellion, and thus the diplomatic moves that followed have
been muddled and uncertain in their objectives. All we can say for
sure is that we ought to be very grateful to the IDF, for having
destroyed the facility at Al Kibar before Assad's aspirations to
build out a nuclear weapons programme were realized. (One does
wonder whatever happened to the survivors that had worked there,
however).

What Jolani built at an institutional level doesn't bear much
resemblance to the kind of freewheeling networked coalitions that we
see emerging in Western insurgency against the establishment. The
idea that some virtual polity could create a gamified incentive
structure for action really caught our attention, back in the days
of Occupy et al, as Daniel Suarez and the ideas he wrestled with in
Daemon seemed to have had predictive insight. Yet this did not have
sufficient energy to reach threshold, or staying power to endure.
Its failure modes become the usual anarchic no-go zones, such as
seen in the devolution of CHAZ. But I think it is Bruce's older
concept from Distraction that rules the day now, where weaponization
of Carville's personal as political in a 24/7 media environment
meets the stochastic curve of the mentally ill, with some
probability of violence. Hits on a Manhattan CEO using 3D-printed
small arms by an apparently hyperfixated lone wolf appear equally
as unpredictable as a disaffected and paranoid IT technician
detonating an improvised fuel-air explosive to destroy a Nashville
network exchange, or whatever the Vegas incident was intended to be.
To say nothing of the drumbeat of active shooters, with copycat
tactics and inchoate if not incoherent manifestos as a sadly
well-worn path towards performative self-delete.

Propaganda of the deed is always more attractive to a certain class
of revolutionaries than the mundane business of institution
building. And in a society marked by extended dopamine abuse
mediated through those small glass objects in our pockets, that
immediate satisfaction of impulse appears to rule - even when it
requires extended planning and staging, as elaborate as any
influencer's ritual production.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #62 of 157: Jon Lebkowsky (jonl) Sat 4 Jan 25 06:35
    
Thanks, Cory! <59> is smart; it does a great job of explaining how
the Internet supports the evolution and sometimes mainstreaming of
various movements, with MAGA and QAnon as obvious cases.

I had firsthand experience of this effect in the early 1990s, when
Paco and Nathan and I built an organization called FringeWare. I
used to say that, in every little town and suburban neighborhood,
there were people with ideas that were locally weird and alienating,
but through the Internet they could find each other and build groups
and communities that otherwise would never have formed. FringeWare
tapped into that for a while. I also saw the other side of the
phenomenon - when those groups form, differences can emerge,
sometimes resulting in conflict. 

It's kind of amazing that the USA, a coalition of fifty state
governments and a huge diversity of political interests, could be as
coherent as it's seemed, since we declared independence almost 250
years ago. We see polarization today and think of it as left vs.
right, and some of us are thinking more that it's "up" vs "down,"
but either of those characterizations ignores substantial complexity
within the body politic. I don't think any person or organization
grasps that complexity or knows how to leverage it or to lead it.
We're stalled in what appears to be a 50-50 polarization with a set
of massive disagreements that seem to exist only as political
abstractions - in much of the US people who think differently, or
have been programmed differently, about politics still live together
in relative harmony.

(JD's post <61> slipped in while I was composing this response to
Cory's.)
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #63 of 157: Bruce Sterling (bruces) Sat 4 Jan 25 08:33
    
So I'm idly spooling down the comments here and I'm like: "Is that
guy generating Cory Doctorow? That delvish pastiche almost sounds
Cory-Doctorovian."

Turns out it's the real Cory Doctorow, but now I'm wondering if
maybe a generated Doctorow-LLM might make an ideal PR-front for some
coalition of five cranks in a tent somewhere. 

An awareness of slop makes everything look like slop, in the way
that an awareness of drones makes every passing jet look like a
drone.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #64 of 157: Bruce Sterling (bruces) Sat 4 Jan 25 08:50
    
There's a drone-panic incident over the Danish port of Køge today,
and I don't regard this as major news, but it's very of-the-moment.
There's been some recent NATO build-up in this port, so there are
reasons for Danish locals to see-and-hear lights and noises in the
air, but:

Are they just airplanes or maybe normal hobby drones;
Are they semi-covert NATO aerial traffic;
Are they Russians surveilling the port from some Russian spy-ship
that's not ripping up sea-cables;
Are they merely bright stars, planets, lightning flashes, auroras,
marsh gas, etc.;
Are they extraterrestrials, not grandpa's steel-saucer ones but the
21C glowing-orb UAP YouTube version;
Are they nothing at all, just pure social-media hokum, just nervous
chatter about some blurry cellphone shots?

Not sure that anybody is in any position to assess and settle a
situation like that.  Any more than UFO flaps have ever been
factually settled since the 1940s.

Also, would this post of mine somehow look more coherent if I got
some Big Tech LLM to rewrite it in the style of Cory Doctorow?
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #65 of 157: Slanketpilled hammockmaxer (doctorow) Sat 4 Jan 25 12:23
    
@bruces/64

> I'm wondering if maybe a generated Doctorow-LLM might make an
ideal PR-front for some coalition of five cranks in a tent
somewhere. 

I've actually tried the odd experiment with asking a chatbot to barf
up an essay by me on various subjects, and the results were
interesting.

The style transfer was, at best, *fine*, a bit like if I was at a
friendly roast and someone pulled out my most characteristic
shopworn phrases to show that I sometimes repeat myself.

But substantively, it *sucked*. Pretty much my entire thing is
finding ways to bring nuance to various tech policy debates. For
example, I think it's possible (and necessary, even) to focus our
critique of AI on its *labor* dimension, rather than its copyright
dimension, on the grounds that even if AI training is ultimately
determined to infringe copyright, it's entirely possible
(inevitable, really) that the capital side of the artistic labor
market would use this fact to alienate creative workers of their
training rights, then use the trove of rights thus created to
generate AIs with the express intention of eroding creative wages:

https://pluralistic.net/2024/11/18/rights-without-power/#careful-what-you-wish-for

But when I asked a chatbot to write a critique of AI companies from
the perspective of Cory Doctorow, it barfed up the most anodyne,
superficial argument that basically went, "This is a copyright
violation and we should sue chatbot companies over it." This is
*literally the opposite* of what I believe.

Same goes for other topics where I have a nuanced/heterodox view,
such as my objections to "if you're not paying for the product,
you're the product":

https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar

Ask a chatbot to define Cory Doctorow's theory of enshittification,
and chances are it will tell you that I think that
advertising-driven media inevitably produces exploitative and
extractive arrangements, something I completely, totally disagree
with.

What this reminds me of is the broader experience of chatbots, where
asking for some autogen slop on a subject you know little about
seems incredibly impressive and authoritative, while a request for
something you're an expert on yields something that's embarrassingly
wrong and stupid. Do this enough times and you'll realize that the
only reason the autocomplete that comes back when you ask a chatbot
to "explain this subject I know nothing about" seems impressive is
that you can't tell how wrong the chatbot is.

By contrast, the Yes Men recently commissioned someone to
impersonate me in a fake Wired article, and she did an
*astoundingly* good job:

https://web.archive.org/web/20240910175753/https://wired-magazine.com/2024/09/the-climate-has-a-posse-campaign/

This is a genuinely great style transfer *and* a terrific
approximation of where I might land in a critique of this subject
I've never written on before.

It's a very good example of the extremely wide gap between slop and
expert human output.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #66 of 157: Jon Lebkowsky (jonl) Sat 4 Jan 25 12:28
    
I asked ChatGPT to create a *nuanced and accurate* definition of
your theory of enshittification, and got this, which I think is
pretty good:

Cory Doctorow's theory of enshittification describes a recurring
pattern in the lifecycle of digital platforms where their design and
operations evolve in ways that prioritize short-term profit and
exploitation over long-term value and user satisfaction. This
process leads to the degradation of the platform's quality, utility,
and trustworthiness for its users, creators, and other stakeholders.

Doctorow explains enshittification as a three-stage cycle:

1. Early Stage: User Benefit Focus
Platforms initially prioritize users, offering free or subsidized
services, innovative features, or generous incentives to attract a
large user base. This phase is characterized by rapid growth and
goodwill as the platform appears user-centric.

2. Middle Stage: Creator/Partner Exploitation
Once a substantial user base is secured, platforms shift their focus
to exploiting creators, partners, or advertisers. They might reduce
creator payouts, increase fees, or impose restrictive policies while
maintaining the appearance of user-centeredness.

3. Late Stage: Monetization Over Everything
In the final stage, platforms prioritize maximizing profits by
extracting value from both users and creators. Users face intrusive
ads, degraded functionality, and reduced value, while creators are
squeezed for higher fees or stricter terms. This leads to declining
user satisfaction and eventual platform stagnation or collapse.

Doctorow emphasizes that enshittification is not inevitable but
stems from misaligned incentives, particularly when platforms
prioritize shareholders and profit maximization over community
well-being and long-term sustainability. It is a cautionary critique
of how monopolistic tendencies and unchecked greed can erode the
foundational value of digital ecosystems.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #67 of 157: Slanketpilled hammockmaxer (doctorow) Sat 4 Jan 25 12:47
    
That's pretty good, but it's also basically a rephrasing of the
opening of a speech I gave.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #68 of 157: Jon Lebkowsky (jonl) Sat 4 Jan 25 13:15
    
I figured if I asked for accuracy, it would find the most reliable
source!
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #69 of 157: JD Work (hstspect) Sat 4 Jan 25 15:32
    
AI driven style transfer, with amplified Dunning–Kruger effects, is
very much one of the worst nightmares for folks who believe that the
currency and impact of ideas matter.

It is of course the perfect condition for the shallow b-school
executives that have never sought to master anything difficult in
their lives, and who are equally happy to pump out marketing to push
any commodity trash so long as they can claim it is of the moment.

Alienating training rights becomes even more interesting when the task
being trained is not merely creative (as important as these things
are), but operational and analytic tradecraft. How we build folks to
do hard things in the national security space is pretty important,
not merely as a matter of the thing itself but the cultural and
legal mindsets we pass along in the process (respect for democratic
institutions, adherence to rule of law, obligations to protect human
rights, etc.) I am not sure any degree of alignment shackles,
however well codified, gets us there as a replacement.

There are also subtle effects in all of this, much like the Monkey's
Paw granting wishes. The whole UFO cultural thread would almost
certainly not exist, or not look anything like it does now, but for
the cover programs of earlier generations intended to protect high
performance aircraft development and testing. Long after those
platforms were put to bed, we have this present weirdness
recirculating again and again, where even folks who really ought to
know better are apparently one-shotted by the wrong
conspiracy-theory drivel, and at even less plausible degrees of D/K
pretense toward authority.

At one level, the entertainment complex built on these things was
good for a laugh, and a bit of engaging dramatic presentation. But
our media channels, saturated with fabrication and running hard
against the pathologies of broken education systems, are no longer a
separate and entirely ludic thing. Eversion has blurred boundaries,
and the ever deeper layers of coordinated inauthentic behavior
intended to monetize addictions need to push those boundaries to be
as porous as they can.

Is that a symptom of late stage enshittification? Or some lateral
pathology - another emerging co-morbidity of platform decay in the
attention economy?
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #70 of 157: Jennifer Powell (jnfr) Sat 4 Jan 25 18:14
    <scribbled by jnfr Sat 4 Jan 25 18:14>
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #71 of 157: Jennifer Powell (jnfr) Sat 4 Jan 25 18:15
    
I'd love to think we could all continue these conversations for
another ten, 20 years or more. I'd love to see those convos because
we all know we're headed into tumultuous times.

I worry most that in 25 years anyone who is still around will be
seeing a vastly changed ecological world from the one we live with
today. I hope I can stick around and watch it happen, though not out
of joy for it.

JD slippage! 
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #72 of 157: Bruce Sterling (bruces) Sat 4 Jan 25 23:36
    
I'd agree that an LLM would generally remove the most Doctorowian
aspects of Doctorow and also misrepresent the most original
Doctorowian thought.

However, I'm wondering if maybe this is an actual consumer
*benefit.* Yes, it's bowdlerized and kid-friendly slop-product, it's
helpful-harmless-and-honest, but it's also the Reader's Digest
Condensed Novel version of crabby subcultural material that
the unwashed masses might find hard to approach.  It's the LLM
"democratization" argument.  If I can generate it in one click, why
can't I read it and understand it in one click?
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #73 of 157: Bruce Sterling (bruces) Sat 4 Jan 25 23:37
    

What would happen if you scanned the entirety of WHOLE EARTH REVIEW
'n' CATALOG (easily available on Internet Archive) and just
style-transferred that into the tradwife style of "Good
Housekeeping" or maybe some Midwest-white-guy "Popular Mechanics"? 
Just, you know, drain all the lysergic hippie out of it, but keep
all the schematic drawings of geodomes and teepees.  Would it be a
big hit?   Would rightwing preppers adore it, and take every word as
gospel?  It would be rather McLuhan medium-and-message, but it
seems like that feat would be technically doable with enough
stochastic firepower.

If you can translate from Estonian to Maltese in one shot, then you
ought to be able to "translate" from the jargon of one subculture to
any other random coalition/cult/group.  Just, you know, appropriate
it and pastiche it -- get AI to do the bullet-point precis of it,
and then put new skin on it and see if it will get up and walk
around.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #74 of 157: Bruce Sterling (bruces) Sat 4 Jan 25 23:38
    

I know that the result would look and feel a little  dead, in the
way that soundtrack mashups of recombined pop-songs always sound a
little dead.  But also, this is what cultures more or less *do.* 
Stuff gets incubated in tidal-pool subcultures and then it gets
mainstreamed.  It's like historical sloppification; like Jesus
Christ of Nazareth eventually becoming Donald Trump's best friend
and ally.

If you read Orwell's 1984 in 2025, it reads like a product catalog.
Practically every page has been privatized and monetized.  It's not
a spiky narrative of dystopian terror; it's actually pretty mild and
steampunky by the standards of the modern doomscroll.
  
inkwell.vue.551 : Bruce Sterling and Jon Lebkowsky: State of the World 2025
permalink #75 of 157: drain all the lysergic hippie out of it (jukevox) Sun 5 Jan 25 02:42
    
yeah, let's have a right wing Boing Boing.

Lauren Boebert as Kata Sutra
Spike Matt Gaetz's hair to look like Mark's.
Ben Shapiro can re-write all Cory's stuff
Gareth can give prepper tips and QAnon jargon watch.

I hear Scott Adams is available to do the illustrations.  

We can call it Boom Boom, A Directory of Apocalyptic Things
  
