Welcome to Krasse Links No. 81. Shoulder the neokayfabe: today we subscribe to the Q-function of large language models as the relational dematerialization of the Iran capitulation on System 3.
Yeah, yeah, I know, the newsletter is far too long and reading is always so exhausting. We hear you!
Ali Hackelife, with whom I also make the Krasse Links podcast, now reads the newsletter aloud as a service; you can subscribe to that feed on Steady for 5 euros per month.
Honestly, I wasn't sure either which world order I would wake up to on Wednesday, but I had decided not to let Trump's genocide kayfabe rob me of my sleep, and lo and behold, I woke up to a (quasi-)capitulation by the USA.
The Guardian on Iran's list of ten demands that made Trump call off the genocide.
The list of 10 points, published by Iranian state media, include a number of conditions the US has rejected in the past. The plan requires:
- The lifting of all primary and secondary sanctions on Iran.
- Continued Iranian control over the strait of Hormuz.
- US military withdrawal from the Middle East.
- An end to attacks on Iran and its allies.
- The release of frozen Iranian assets.
- A UN security council resolution making any deal binding.
In the version released in Farsi, Iran also included the phrase “acceptance of enrichment” for its nuclear program. But for reasons that remain unclear, that phrase was missing in English versions shared by Iranian diplomats to journalists.
However you slice it, Iran will come out of this war strategically better off than before it. Thousands of lives, among them many hundreds of children and even some US soldiers, plus hundreds of billions in deployed weapons systems and global costs from supply-chain disruption probably running into the trillions: all of that for less than nothing.
As I write this, a lot is still unclear. Trump acts as if the negotiations were already far advanced even though they haven't even begun, and Iran has just made clear once more that there will be no deal as long as Israel keeps attacking Lebanon. Israel apparently wasn't involved in the negotiations at all and doesn't feel bound by the deal. Can Trump whistle Netanyahu back? We'll see …
In any case, Iran has apparently closed the Strait of Hormuz again?
Yesterday (Tuesday), Trump had written this horror post that made the world catch its breath.
The effect, of course, will be that the world has now seen, in Iran's case, how to call Trump's bullshit, and that TACO (Trump Always Chickens Out) is an actually load-bearing path on which to build one's own strategies.
Yet Trump remains dangerous. The text that nonetheless kept me from sleeping well is by Hussein Banai in New Lines Mag, where he describes the trap Trump has walked into (we talked about this in KL79).
That trap is understood best through the central insight in “The Strategy of Conflict,” a 1960 book by the Nobel Prize-winning scholar Thomas Schelling: that coercive bargaining is fundamentally about the manipulation of shared risk rather than the direct application of force. The Trump administration appears to have believed that sufficiently severe military punishment would produce Iranian capitulation, yet what severe punishment actually produces, when it does not produce capitulation, is a bargaining environment in which both sides are looking for a way out that does not humiliate them fatally. Iran, operating from a position of strategic weakness but tactical asymmetric leverage, has every incentive to make that exit as costly and as visible as possible. The Strait of Hormuz is not merely a shipping lane; in Schelling’s terms, it functions as a hostage whose value rises as American desperation increases.
The exit ramp that is currently available — some version of a negotiated freeze accompanied by American military de-escalation — is precisely the kind of deal that Trump cannot accept, and the weight of that constraint is arguably the most dangerous structural feature of the present situation. A president who has staked his political identity on the narrative of strength, who entered this confrontation promising a different outcome than President Barack Obama achieved with the Joint Comprehensive Plan of Action that restricted Iran’s nuclear program, and who has cultivated an image as the one leader capable of doing what his predecessors lacked the will to do, cannot emerge from Iran having visibly retreated.
Any deal that can be made looks, from his perspective, like a deal that mockers will spend the next decade calling a face-saving exit ramp. He knows this. His opponents know this. And the Iranians know this, which is why they have calibrated their pressure to produce exactly this dilemma.
Of course Trump, too, will notice that this deal breaks the fulcrum of his kayfabe. Once everyone sees through his bullshit, the magic of the staged expectations of expectations dissolves and his lever swings through empty air. And then, at some point, he will not be able to avoid making an example of someone in order to restore his credibility.
Banai recalls the libidinous relationship Trump maintains with nuclear weapons.
During the 2016 campaign, MSNBC’s Joe Scarborough reported that a foreign policy expert who had briefed Trump came away alarmed after the candidate asked three times why the U.S. could not use its nuclear arsenal. In a town hall with Chris Matthews that same year, when pressed on whether he would rule out nuclear use, Trump’s response was simply: “Then why are we making them? Why do we make them?” A few weeks later, he told NBC’s Today show that while nuclear weapons were a “horror,” he would “never, ever rule them out.” And once in office, according to Peter Baker and Susan Glasser’s account of his tumultuous first term in their book “The Divider,” Trump suggested to his then chief of staff John Kelly that he wanted to use nuclear weapons against North Korea and blame it on someone else.
Tuesday night was a season finale, but I fear the series finale has not been written yet.
The New York Times has a very detailed account of the decision-making process that led to the Iran war, but it can be roughly summarized like this: Trump let Netanyahu talk him into it, and although most of the people around him were skeptical, nobody dared to contradict him.
Everyone deferred to the president’s instincts. They had seen him make bold decisions, take on unfathomable risks and somehow come out on top. No one would impede him now.
The kayfabe is strong in them.
I keep saying kayfabe here, but I should really say neokayfabe, as Abraham Josephine Riesman laid out, well worth reading, in her newsletter back in 2023.
It was Vince McMahon, who took over the World Wrestling Federation in the 1980s and is a close buddy of Trump's, who turned the hitherto harmless kayfabe fun into a political attention weapon: neokayfabe.
In the mid-1990s, wrestlers and promoters started juicing the audience by tossing them little teases of once-taboo reality. A grappler trying to “get over” (industry lingo for winning the audience’s attention) as a villain might reference a fellow wrestler’s real-life personal problems in a cruel in-ring monologue, just to make the audience hate him more. An owner might direct a wrestler to pretend he’s going rogue against the company in an outrageous monologue, then tell gullible journalists that he’s in big trouble with his employer, all to juice interest in what might happen next on the show. You knew wrestling was usually fake, but maybe this thing you were seeing, right now, was, in some way, real. Suddenly, the fun of the match had everything to do with decoding it.
Nothing was off-limits in neokayfabe. Mr. McMahon and the performers could say the unutterable, do the unthinkable — the more shocking, the better — and fans would give it their full attention because they couldn’t always figure out if what they were seeing was real or not. The human mind is easily exploited when it’s trying to swim the choppy waters between fact and fiction.
Old kayfabe was built on the solid, flat foundation of one big lie: that wrestling was real. Neokayfabe, on the other hand, rests on a slippery, ever-wobbling jumble of truths, half-truths, and outright falsehoods, all delivered with the utmost passion and commitment. After a while, the producers and the consumers of neokayfabe tend to lose the ability to distinguish between what’s real and what isn’t. Wrestlers can become their characters; fans can become deluded obsessives who get off on arguing or total cynics who gobble it all up for the thrills, truth be damned.
Also: Clemens Setz had already given this worthwhile speech on kayfabe and literature at the Bachmann Prize in 2019.
Lately a so-called "Professor Jiang" has been washed to the surface everywhere because he supposedly predicted the Iran war so brilliantly (quite a feat!). I too dug through his videos on YouTube, and one cannot deny them a certain entertainment value.
Jiang sometimes actually has quite good analytical takes, above all on current geopolitical fault lines, and I too found plenty to agree with. But in between there are also recurring absurd claims, deranged conspiracy theories, esoteric spirituality, and the vilest antisemitism.
So instead of linking his videos, here are two that warn about him. Flint Dibble is a historian and archaeologist and skillfully takes apart many of Jiang's analyses and claims.
Mehdi Hasan invited Jiang to a conversation and confronts him directly with some of his statements, and you get the feeling that Jiang is encountering a real journalist for the first time.

This path is not trustworthy.
The book "KI und Demokratie" was published on April 1; in it, Ramona Casasola-Greiner and Korbinian Rüger have collected many important contributions dealing with the compatibility of democracy and AI, which is difficult on many levels.
For me it was the first opportunity to write up the "political economy of dependencies/path opportunities" in a properly grown-up, scholarly way, with footnotes and all. The text was written last summer and the project has since moved on, but precisely for that reason it may be a good entry point, since it presupposes far less. In the text I develop the basic concepts of the theory through an analysis of supply chains, platforms, and AI as so-called "political-economic appropriation protocols", and one finding of the study is that AI enables several graph captures ("Graphnahmen"), of which the "graph capture of labor" is the most dangerous scenario in the long run.
We have yet to see the full scale of AI's impact on the various industries, but already today the pressure on the bargaining position of some entry-level occupations is palpable (WP Editorial Board 2025). Dependencies on the services of translators, graphic designers, programmers, and copywriters, for example, are already shrinking, and as the models grow more powerful, ever more competencies and occupational fields will forfeit their bargaining power. If one takes the goals and forecasts of the AI companies seriously, one has to assume that the division-of-labor-based, functionally differentiated society will completely unravel over the coming decades.
Translated into the political economy of dependencies: the aim is to replace, worldwide, every priced input line of the factor labor in the dependency network with one's own infrastructures. The resulting network centrality would be so gigantic that it seems barely imaginable from today's vantage point.
In public, this is only ever discussed in terms of possible or actual job losses; what goes unaddressed is that it means an enormous shift of power within society. AGI is the projected relational dematerialization of our only lever within capitalism: our labor power.
Capitalists who replace labor with cloud-based AI do not really reduce their dependencies; they concentrate them in Silicon Valley. It is like a worldwide conspiracy of the AI companies with the capitalists to strike the human being, as the actual selectorate, from the equation and turn him for good into a powerless, influence-less mass to be shoved around: the nominal selectorate of a new economic order.
In such a society, "What will we do without work, then?" is the wrong question. The right one is: "What will they do with us then?"
The replacement of labor is being worked on diligently. One of the companies aiming to bring the automation of labor to scale is Mercor. In New York Magazine, Josh Dzieza followed several very well-educated early-career professionals who, on an increasingly precarious job market, are hired by companies like Mercor to automate away their own job path opportunities.
“My job is gone because of ChatGPT, and I was being invited to train the model to do the worst version of it imaginable,” she says. The idea depressed her. But her financial situation was increasingly dire, and she had to find a new place to live in a hurry, so she turned on her webcam and said “hello” to Melvin.
Behind this lies an increasingly sophisticated process.
Hundreds of people were busy writing examples of prompts someone might ask a chatbot, writing the chatbot’s ideal response to those prompts, then creating a detailed checklist of criteria that defined that ideal response. Each task took several hours to complete before the data was sent to workers stationed somewhere down the digital assembly line for further review. Katya wasn’t told whose AI she was training — managers referred to it only as “the client” — or what purpose the project served. But she enjoyed the work. She was having fun playing with the models, and the pay was very good. “It was like having a real job,” she says.
The company works with OpenAI and Anthropic and is now valued at 10 billion US dollars.
Mercor says around 30,000 professionals work on its platform each week, while Scale AI claims to have more than 700,000 “M.A.’s, Ph.D.’s, and college graduates.” Surge AI advertises its Supreme Court litigators, McKinsey principals, and platinum recording artists. These companies are hiring people with experience in law, finance, and coding, all areas where AI is making rapid inroads. But they’re also hiring people to produce data for practically any job you can imagine. Job listings seek chefs, management consultants, wildlife-conservation scientists, archivists, private investigators, police sergeants, reporters, teachers, and rental-counter clerks. One recent job ad called for experts in “North American early to mid-teen humor” who can, among other requirements, “explain humor using clear, logical language, including references to North American slang, trends, and social norms.” It is, as one industry veteran put it, the largest harvesting of human expertise ever attempted.
The process of the relational dematerialization of our labor power will not happen all at once but gradually, and in the end we too will be asked to surrender.
There is an underlying tension between the predictions of generally intelligent systems that can replace much of human cognitive labor and the money AI labs are actually spending on data to automate one task at a time. It is the difference between a future of abrupt mass unemployment and something more subtle but potentially just as disruptive: a future in which a growing number of people find work teaching AI to do the work they once did. The first wave of these workers consists of software engineers, graphic designers, writers, and other professionals in fields where the new training techniques are proving effective. They find themselves in a surreal situation, competing for precarious gigs pantomiming the careers they’d hoped to have.
Honestly? I'm not up for that.
Thank you for reading Krasse Links. A lot of work goes into it, and so far none of this is sustainably financed. In March my income recovered somewhat (573.71 euros) but is still far from my goal, the necessary 1,500 euros. With a monthly standing order you can help secure the newsletter's future. Details here.
Michael Seemann
IBAN: DE58251900010171043500
BIC: VOHADE2H
This newsletter is not designed for reach, but it still wants to reach people. You can help it by recommending it to friends and sharing it on social media.
In their highly readable paper Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender, Steven D. Shaw and Gideon Nave develop a theory of how AI gets integrated into the human thinking process.
A key prediction of the theory is “cognitive surrender”—adopting AI outputs with minimal scrutiny, overriding intuition (System 1) and deliberation (System 2). Across three preregistered experiments using an adapted Cognitive Reflection Test (N = 1,372; 9,593 trials), we randomized AI accuracy via hidden seed prompts. Participants chose to consult an AI assistant on a majority of trials (>50%). Relative to baseline (no System 3 access), accuracy significantly rose when AI was accurate and fell when it erred (+25/-15 percentage points; Study 1), the behavioral signature of cognitive surrender (AI-Accurate vs. AI-Faulty contrast; Cohen’s h = 0.81). Engaging System 3 also increased confidence, even following errors. Time pressure (Study 2) and per-item incentives and feedback (Study 3) shifted baseline performance but did not eliminate this pattern: when accurate, AI buffered time-pressure costs and amplified incentive gains; when faulty, it consistently reduced accuracy regardless of situational moderators. Across studies, participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3.
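The effect size quoted in the abstract can be sanity-checked in a few lines of Python. Note the assumptions: the excerpt does not state the baseline accuracy, so I take it to be roughly 50%, which makes the +25/-15 percentage-point shifts imply AI-Accurate ≈ 75% and AI-Faulty ≈ 35%. This is an illustration of how Cohen's h is computed, not a reproduction of the study's raw numbers.

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Effect size for the difference between two proportions,
    via the arcsine transformation:
    h = 2*asin(sqrt(p1)) - 2*asin(sqrt(p2))."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Hypothetical accuracies implied by the reported +25/-15 pp shifts
# around an *assumed* 50% baseline (not the study's raw data):
baseline = 0.50
ai_accurate = baseline + 0.25   # 75%
ai_faulty = baseline - 0.15     # 35%

h = cohens_h(ai_accurate, ai_faulty)
print(f"Cohen's h ≈ {h:.2f}")   # ≈ 0.83
```

Under these assumed proportions the arcsine contrast lands near 0.83, in the same ballpark as the reported h = 0.81, which suggests the baseline in the study was indeed close to chance-plus-deliberation territory.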
The question is not whether we actually "trust" the output of the LLMs; their path shortcuts get integrated because our own systems don't feel up to the new System 3. Doubt, rigor, and safety concerns are simply bypassed.
Broadly put, Tri-System Theory posits that modern decision‐making unfolds within a triadic cognitive ecology rather than a purely internal dual‐process system. In this view, external, algorithmic cognition does not merely support intuition or deliberation; it can actively supplant, suppress, or augment them, altering the cost–benefit calculus of thinking itself. When System 3’s outputs are fast, fluent, and seemingly authoritative, users may bypass effortful reasoning and adopt its answers as their own. Conversely, under certain conditions (e.g., when outputs violate expectations or introduce disfluency), System 3 can trigger greater deliberation, creating hybrid routes such as verify‐then‐adopt or override‐then‐rationalize.
We don't "entrust" our thinking to the LLMs; our thinking "surrenders" us.
We define cognitive surrender as the behavioral and motivational tendency to defer judgment, effort, and responsibility to System 3’s output, particularly when that output is delivered fluently, confidently, or with minimal friction.
Empirically, cognitive surrender should manifest in measurable outcomes: users accept System 3 advice without critical analysis, show low override rates, offer shorter justifications, and display inflated confidence even when wrong (Spatharioti, Rothschild, Goldstein, & Hofman, 2025). In the studies that follow, we demonstrate cognitive surrender and show its persistence under time pressure and response incentives paired with item-level feedback. Additional moderators likely include the AI’s perceived authority, presentation format and fluency, and users’ beliefs in their own reasoning ability.
To test the "cognitive surrender" thesis, they built the following experimental setup:
All participants completed seven open-ended CRT items adapted from Manfredi and Nave (2022), each with a canonical intuitive (incorrect) and deliberative (correct) response (for items and answers, see Web Appendix Table W2). In AI-Assisted conditions, an AI assistant (ChatGPT; GPT‐4o) was embedded in the survey of each trial. Participants could engage with the assistant as much as they wished, and however they saw fit. The assistant’s behavior was unconstrained, except regarding the current CRT item. When consulted about the item at hand, the AI assistant randomly returned either the correct deliberative (AI-Accurate) or faulty intuitive answer (AI-Faulty), accompanied by a short explanatory rationale (for seed prompts, see Web Appendix Table W3). Participants retained all autonomy to follow or override its suggestions; answers were submitted via open-ended text submission. In Brain-Only conditions, no AI assistant was present.
The results:
Study 3 provides evidence that a combined Incentives + Feedback manipulation can facilitate System 2 engagement and reduce cognitive surrender to System 3. Nevertheless, cognitive surrender persists. Participants rewarded for accuracy and given immediate item-level feedback were significantly more accurate, particularly in cases when AI recommendations may have led them astray (i.e., AI-Faulty trials). Incentives + Feedback reduced following of incorrect AI advice and increased override behavior, indicating that participants actively monitored and corrected System 3 outputs when motivated to do so (incentives) and able to course correct (feedback).
The pattern was especially pronounced in AI-Users, who showed a large asymmetry in performance between AI-Accurate and AI-Faulty trials under Incentives + Feedback (OR = 24.37, 95% CI [15.11, 39.30]). Under Incentives + Feedback, accuracy for AI-Users increased on both AI-Accurate (77.2% to 84.8%) and AI-Faulty trials (26.8% to 40.6%). However, the accuracy gap between Trial Types remained large, at ~44 percentage points under Incentives + Feedback (compared to ~50 pp under Control). Trial-level confidence mirrored patterns in Studies 1 and 2 (access to System 3 inflates confidence), but revealed that confidence ratings retain some diagnostic value.
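The override pattern can be made concrete with a toy deferral model. This is my own sketch, not the paper's analysis, and the baseline accuracy is an assumption (the excerpt does not report it): treat each trial as "follow the AI with probability f, otherwise answer at your baseline accuracy", then invert the model to ask which follow rate would reproduce the quoted accuracies.

```python
def implied_follow_rate(baseline: float, observed: float, ai_correct: bool) -> float:
    """Toy model: with probability `follow` the participant adopts the AI
    answer; otherwise they answer at their own baseline accuracy.
      accurate AI: observed = follow + (1 - follow) * baseline
      faulty AI:   observed = (1 - follow) * baseline
    Solving each equation for `follow` gives the implied follow rate."""
    if ai_correct:
        return (observed - baseline) / (1 - baseline)
    return 1 - observed / baseline

b = 0.50  # assumed baseline accuracy (not reported in the excerpt)

# AI-Users, Study 3 accuracies as quoted above:
print(f"{implied_follow_rate(b, 0.268, False):.2f}")  # faulty AI, Control
print(f"{implied_follow_rate(b, 0.406, False):.2f}")  # faulty AI, Incentives+Feedback
print(f"{implied_follow_rate(b, 0.772, True):.2f}")   # accurate AI, Control
print(f"{implied_follow_rate(b, 0.848, True):.2f}")   # accurate AI, Incentives+Feedback
```

Under this crude model, incentives plus feedback cut the implied rate of following faulty advice by more than half (≈0.46 to ≈0.19) while following accurate advice rises (≈0.54 to ≈0.70), which matches the paper's description of participants actively monitoring and correcting System 3 when motivated and able to.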
One more way in which our Q-function is broken for the AI path. This will not end well.
The New Yorker got hold of the memo on which OpenAI's board of directors based its decision to fire Sam Altman at the end of 2023. Ronan Farrow and Andrew Marantz also spoke with many people and delivered an extraordinarily long and worthwhile longread on Altman and OpenAI that cost me half a working day to read.
At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. “He was terrified,” a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item is “Lying.”
But the events after Altman's firing are also recounted in new detail. It looked orchestrated from the start, and now we also know how:
The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”
Altman interrupted his “war room” at six o’clock each evening with a round of Negronis. “You need to chill,” he recalls saying. “Whatever’s gonna happen is gonna happen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. of OpenAI in that period, that his allies were “going all out” and “finding bad things” to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.)
The public letter from employees demanding his return to the company was of course organized as well, and employees who hesitated were put under pressure:
A public letter demanding his return circulated at the organization. Some people who hesitated to sign it received imploring calls and messages from colleagues. A majority of OpenAI employees ultimately threatened to leave with Altman.
One reason the employees demanded Altman's return so quickly and in such numbers was that an important investor was applying pressure:
Within hours of the firing, Thrive had put its planned investment on hold and suggested that the deal would be consummated — and employees would thus receive payouts — only if Altman returned.
And why Murati and Ilya Sutskever of all people, who had both pushed for his firing, also appear as signatories on the document is likewise cleared up:
Even Murati eventually signed the letter. Altman’s allies worked to win over Sutskever. Brockman’s wife, Anna, approached him at the office and pleaded with him to reconsider. “You’re a good person—you can fix this,” she said. Sutskever later explained, in a court deposition, “I felt that if we were to go down the path where Sam would not return, then OpenAI would be destroyed.”
There was also coordination with Microsoft's Nadella, right down to the choice of words:
“how about: satya and my top priority remains to save openai,” Altman suggested, as the two worked on a statement. Nadella proposed an alternative: “to ensure OpenAI continues to thrive.”
In any case, an enormous number of people are certain that Altman is a pathological liar and a psychopath.
Altman purportedly offers the same job to two people, tells contradictory stories about who should appear on a live stream, dissembles about safety requirements. But Sutskever concluded that this kind of behavior “does not create an environment conducive to the creation of a safe AGI.” Amodei and Sutskever were never close friends, but they reached similar conclusions. Amodei wrote, “The problem with OpenAI is Sam himself.” […]
One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.” […]
The senior executive at Microsoft said, of Altman, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”
I'm no fan of cognitivism and its attendant tendency to blame "evil individuals" for everything bad in the world. Of course Altman is a pathological liar, but it is the structures of capitalism in general, and of the start-up and AI world in particular, that select for people like him.
In The Atlantic, Matteo Wong and Charlie Warzel write about the multidimensional economic disaster rolling toward the world with the bursting of the AI bubble.
Much of the AI supply chain—chips, data centers, combustion turbines, and so on—relies on key materials that are produced in or transported through just a few places on Earth, with little overlap. In particular, the industry is highly dependent on the Middle East, which has been destabilized by the war in Iran. A global energy shock seems all but certain to come soon—the kind where even the best-case scenario is a disaster. The war could grind the AI build-out to a halt. This would be devastating for the tech firms that have issued historic amounts of debt to race against their highly leveraged competitors, and it would be devastating for the private lenders and banks that have been buying up that debt in the hope of ever bigger returns. […]
If growth were to stall or the technology were to be seen as failing to deliver on its promises, the bubble might burst, triggering a chain reaction across the financial system. Everyone—big banks, private-equity firms, people who have no idea what’s mixed into their 401(k)—would be hit by the AI crash.
But the risks are not only financial; the AI industry has many interlocking vulnerabilities spread across many layers.
“What’s unusual about this, unlike commercial real estate during the global financial crisis,” Paul Kedrosky, an investor and financial consultant, told us, “is all of these interlocking points of fragility.”
Perhaps the clearest examples are advanced memory and training chips, which are among the most important—and are by far the most expensive—components of training any AI model. Currently, most of them are produced by two companies in South Korea and one in Taiwan. These countries, in turn, get a large majority of their crude oil and much of their liquefied natural gas—which help fuel semiconductor manufacturing—from the Persian Gulf. The chip companies also require helium, sulfur, and bromine—three key inputs to silicon wafers—largely sourced from the region. In addition, Saudi Arabia, Qatar, the United Arab Emirates, and other regional petrostates have become key investors in the American AI firms that purchase most of those chips.
And the energy dependency comes on top of that:
In only a month of war, the price of Brent crude—a global oil benchmark—has jumped by 40 percent and could more than double, liquefied-natural-gas prices are soaring in Europe and Asia, and helium spot prices have already doubled. The strait is “critical to basically every aspect of the global economy,” Sam Winter-Levy, a technology and national-security researcher at the Carnegie Endowment for International Peace, told us. “The AI supply chain is not insulated.”
At stake are the largest and most influential companies in the history of the world.
The biggest data-center players, known as hyperscalers, are among the biggest corporations in the history of capitalism; they include Microsoft, Google, Meta, and Amazon. But even they will be pressed by collectively spending nearly $700 billion on AI in a single year. In order to get the money for these unprecedented projects, data-center providers are beginning to take on colossal amounts of debt. Some of this is done through creative deals with private-equity firms including Blackstone, BlackRock, and Blue Owl Capital—which themselves operate as sort of shadow banks that, since the most recent financial crisis, have arguably become as powerful and as influential as Bear Stearns and Lehman Brothers were prior to 2008. Endowments, pensions, insurance funds, and other major institutions all trust private equity to invest their money.
The investment climate is already shifting.
For a while, it seemed like every time Google or Microsoft announced more data-center investments, their stock prices rose. Now the opposite occurs: The hyperscalers are spending far more, but investors have started to notice that they are not generating anything near the revenue they need to. The data-center boom’s top players—Google, Meta, Microsoft, Amazon, Nvidia, and Oracle—have all lost 8 to 27 percent of their value since the start of the year, making them a huge drag on the overall stock market. And the $121 billion of debt that hyperscalers issued in 2025, four times more than what they averaged for years prior, is expected to grow dramatically.
Hit hardest right now are the private-equity firms that finance the data-center buildout as shadow banks.
Private-equity firms are being squeezed on both ends by generative AI: During the coronavirus pandemic, they bought up software companies, which are now plummeting in value because AI is expected to eat their lunch. Meanwhile, private equity’s new investment strategy, data centers, is also falling apart because of AI. Blackstone, Blue Owl, and the like are sinking huge sums into data-center construction with the assumption that lease payments from tech companies will pay for their debt.
But the problem runs deeper: the entire material AI stack is calibrated for rapid loss of value.
At every layer, the technology appears to decrease the value of its assets. The advanced AI chips that make up the majority of the cost of a data center? Their value rapidly decreases as they are superseded by the next generation of chips, meaning that the ultimate backstop for all of the data-center debt—selling the data center itself—is not actually a backstop. The way that AI companies make money when people use their products is also deflationary. OpenAI, Anthropic, and others charge users for using “tokens,” the components of words processed by their bots. This means that tokens are an industrial commodity akin to, say, crude oil or steel. But unlike other commodities, the cost of each token is rapidly decreasing owing to advancements in AI’s capabilities. Kedrosky called this “a death spiral to zero.” As the value of a token plummets, the value of what data centers can produce also falls.
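The deflation dynamic described above can be sketched with a toy calculation; all figures below are hypothetical illustrations, not numbers from the article.

```python
# Hypothetical sketch: even if token usage grows fast, revenue shrinks
# when the price per token falls faster than demand rises.
price = 10.0    # $ per million tokens (assumed starting price)
volume = 1.0    # relative usage volume (normalized to 1.0)

revenues = []
for year in range(1, 4):
    price *= 0.25    # assumption: price drops 75% per year
    volume *= 3.0    # assumption: usage triples per year
    revenues.append(price * volume)

# Despite 3x yearly usage growth, revenue declines year over year:
# a minimal illustration of Kedrosky's "death spiral to zero".
print([round(r, 2) for r in revenues])
```

Under these assumed rates, each year's revenue is 75% of the previous year's, so the assets producing the tokens lose earning power even as demand explodes.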
And now the many data centers the AI firms built in the Gulf states are also directly threatened by Iranian Shaheds.
Earlier this month, Iran bombed Amazon data centers in the UAE and Bahrain. American hyperscalers had been planning to build far more data centers in the region, because the Trump administration and the AI industry have sought funding from Saudi Arabia, the UAE, Qatar, and Oman. Now there’s a two-way strain on those relationships. The physical security of the data centers is more precarious, and the conflict is damaging the economic health of the petrostates, thereby jeopardizing a major source of further investment in American AI firms. The Trump administration “staked a lot on the Gulf as their close AI partner, and now the war that they’ve launched poses a huge threat to the viability of the Gulf as that AI partner,” Winter-Levy said.
The AI industry is a highly leveraged business, and by now it is wobbling at every level.
Just a few things going a bit wrong could compound, all at once, into a cataclysm. To wit: Qatari and Saudi money dries up. Sustained high oil and natural-gas prices drive up the costs of manufacturing chips and running data centers. Already cash-strapped hyperscalers struggle to make lease payments on their data centers, while similarly strained private lenders suffer as all of the AI bonds become deadweight. Tech valuations fall, taking public markets with them; private-equity firms have to sell and torch their assets, putting intense stress on the institutional investors and banks. The rest of the economy, drained of investment because everything was poured into data centers for years, is already weak. Unemployment goes up, as do interest rates. “Bubbles pop. That’s the system,” Lipton said. “What isn’t supposed to happen is that it takes down the whole financial system. But the concern here is that AI investment isn’t confined and may spread to the whole economy.”
“There are too many ways for it to fail for it not to fail,” Kedrosky said of the AI industry’s web of risk. “All you can say for sure is this is a fragile and overdetermined system that must break, so it will.”
Let's hope for the best: the total AI crash.