Welcome to Krasse Links No. 48. Navigate your emotional dependencies through Signal Gate; today we vibe-code while Peter Thiel slops up the mirror neurons into regime competition.
Of Palestinian victims we usually learn only the statistics, and that is surely one reason why Palestinian life does not count in Western reporting.
NPR has changed that for one family, whose fate was sealed by a deadly strike on a private apartment building.
An Israeli strike on a Gaza apartment building killed 132 members of one family in October 2024. It was one of the deadliest Israeli strikes of the Israel-Hamas war.
[…]
Out of more than 1,000 strikes the group has assessed in the Gaza war, it said, the strike on the building housing the Abu Naser family was among the three deadliest.
Even if you have surely caught wind of Signal Gate, the two pieces in The Atlantic are truly worth reading.
But once we have allowed ourselves a brief smirk, I'd like to recommend Timothy Snyder's take on it.
But in the Signalgate scandal, we encounter something more chilling: our government is openly compromising our national security, the better to violate our rights. Its position is that it is worth risking the lives of soldiers abroad in order to be able to persecute civilians at home.
Snyder reminds us that this Signal group can hardly be an isolated case.
The assumption that Jeffrey Goldberg is the only person who was inadvertently added to a national security group, just because he is the only case we know about, is unsustainable.
No one during the chat wrote anything like: "hey, why are we using Signal?" The reason that no one did so, most likely, is that they all do this every day.
So making decisions in Signal groups is a habit of this administration, despite the enormous risks. Convenience alone cannot be the motivation.
But here’s the point: the authorities knew of these risks to national security, and thought that they were worth taking, and for a reason. I suggest that this reason is that Signal chats provide American authorities with cover to plan the violation of human rights.
The greatest danger social media poses to society does not grow out of TikTok-addled teenagers, but out of the encrypted chat groups of powerful people.
In Foreign Policy, Nicholas Bequelin speaks straight from my heart by exposing the central blind spot in which the liberal elites operate: Putin, and now Trump as well, see themselves in an ideological regime competition with liberal democracies, multilateral blocs like the EU, and woke stuff like "human rights".
This is because both traditional geopolitics and the realist school of international relations overlook a crucial dimension of state behavior in the international system: regime competition. While realism focuses primarily on power balances, territorial interests, and security concerns, it often neglects how the ideological nature of regimes—whether democratic or autocratic—profoundly shapes their strategic choices and interactions. It is within the framework of regime competition that Trump’s seemingly self-defeating policies can be understood as part of a deeper struggle between competing political systems. As Trump autocratizes elements of the American political system, his foreign policy increasingly espouses the characteristics of autocratic regimes. Whether deliberately or unwittingly, Trump’s policies so far are mostly in service of countering democratic restraints at home and abroad, reflecting his self-avowed preference for strongman rule over democratic oversight.
This ideological regime competition does not grow out of a deeply felt political belief, but out of the material requirements of stabilizing one's own autocracy.
When a country turns into a democracy, it is accepted almost mechanically by a fairly homogeneous community of democratic states. This stems from the highly specific values that democratic regimes have in common, such as periodically elected governments, respect for the rule of law, the open character of their societies, and their proven tendency not to go to war with each other—a phenomenon known in political science as democratic peace theory. This cohesion is a distinct advantage in a competitive international system as well as a permanent pull for domestic political opposition in autocracies.
Autocracies, on the other hand, tend to be highly particularistic, each built on a fixed power structure that is highly contingent on historical, ethnic, social, or religious factors. Saudi Arabia, Venezuela, and Eritrea have almost nothing in common. Knowledge of China’s political system provides almost no insight into the functioning of the Russian state. Autocracies have fewer shared characteristics than democracies, face higher transaction costs in dealing with one another due to the closed nature of their societies, and, since “autocratic peace theory” has yet to be discovered, can never fully trust the others’ intentions. As a result, the coherence of the autocratic camp is always more tenuous than the more natural grouping of liberal democracies.
[…]
The logic of regime competition offers a potential explanation for why Trump is willing to impose enormous costs on long-standing U.S. geopolitical interests through unprecedented foreign and domestic initiatives. He is following the very strategy expected of an autocratic state by prioritizing the imperative to counter democratic forces both at home and abroad. It is autocratic regime consolidation—not the pursuit of national interest—that drives the U.S. administration’s seemingly chaotic and contradictory policies. Trump’s ambition to emulate strongman rule—evident in his frequent admiration for authoritarian leaders—puts him at odds with democratically elected leaders while earning him, at least rhetorically, the support of autocratic regimes. This backing, in turn, can be leveraged to further weaken democratic checks and balances at home.
As long as we fail to understand that the game being played is liberal democracies vs. oligarchic autocracies, we will keep running straight into Putin's and Trump's arms.
Once you understand this regime competition, you grasp how completely insane the attempt is to push Peter Thiel's Palantir into the police authorities across state lines via the Bundesrat.
Peter Thiel stands against pretty much every value we have always claimed to represent. For years he has been fighting equality and democracy, explicitly and with a lot of money.
And of all people, it is this man we want to pay a lot of money for highly controversial surveillance software, bind ourselves to contractually, make ourselves dependent on, and entrust with sensitive police data about us? None of this can be real.
While "Blackrot" is prepared, out of fear of the Russians, to convert our prosperity into tanks, it is selling off democracy to the tech oligarchs, carrying us across Putin's finish line on the last mile. What exactly would we still be defending then?
In the blog with the insanely good title "The Nerd Reich", Gil Duran tells how Moritz Döpfner is being built up into the German J.D. Vance. Duran quotes Manager Magazin, which reports that Moritz Döpfner, like J.D. Vance before him, first worked at Thiel Capital and is now setting up his own investment fund with Thiel's financial help.
Thiel will contribute the majority of this, several insiders tell Manager Magazin. Thiel will be the anchor investor of the Döpfner fund and will support its development with $50 million. Moritz Döpfner and Peter Thiel declined to comment when contacted.
All that's missing now is the political career, but daddy Matthias can surely help with that too.
Thiel’s investment in the Döpfner bloodline hardly seems coincidental. As journalist Eoin Higgins argues in Owned: How Tech Billionaires on the Right Bought the Loudest Voices on the Left, Thiel and his cohort are working to codify, capitalize, and normalize tech billionaire control of the media itself.
One of Moritz Döpfner's last official acts at Thiel Capital was evidently the investment in a German drone company called "Stark Defence".
Isn't it enough for Thiel to integrate our police forces into his cloud Gestapo? I don't want to be hunted by his drones on top of it.
If you get the chance, go see "Spiegelneuronen", the "documentary dance evening" by Stefan Kaegi with Sasha Waltz & Guests and Rimini Protokoll, currently at the Radialsystem.
I won't give anything away, except that it is a thoroughly dividual experience.
In Prospect magazine, they ponder the unimaginably gigantic generative AI bubble.
Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion of capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years.
[…]
Between VCs, Big Tech, and power utilities, the bill for generative AI comes out to close to $2 trillion in spending over the next five years alone.
All these expectations, and meanwhile revenue is looking rather anemic.
Yet OpenAI, the current market leader, expects to lose $5 billion this year, and its annual losses to swell to $11 billion by 2026. If the AI bubble bursts, it not only threatens to wipe out VC firms in the Valley but also blow a gaping hole in the public markets and cause an economy-wide meltdown.
[…]
The latest funding round is its third in the last two years, atypical for a startup, that also included a $4 billion revolving line of credit—a loan on tap, essentially—on top of the $6.6 billion of equity, revealing an insatiable need for investor cash to survive. Despite $3.7 billion in sales this year, OpenAI expects to lose $5 billion due to the stratospheric costs of building and running generative AI models, which includes $4 billion in cloud computing to run their AI models, $3 billion in computing to train the next generation of models, and $1.5 billion for its staff. According to its own numbers, OpenAI loses $2 for every $1 it makes, a red flag for the sustainability of any business. Worse, these costs are expected to increase as ChatGPT gains users and OpenAI seeks to upgrade its foundation model from GPT-4 to GPT-5 sometime in the next six months.
Financial documents reviewed by The Information confirm this trajectory as the startup predicts its annual losses will hit $14 billion by 2026. Further, OpenAI sees $100 billion in annual revenue—a number that would rival Nestlé and Target’s returns—as the point at which it will finally break even. For comparison, Google’s parent company, Alphabet, only cleared $100 billion in sales in 2021, 23 years after its founding, yet boasted a portfolio of money-making products, including Google Search, the Android operating system, Gmail, and cloud computing.
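The quoted figures can be sanity-checked with a few lines of arithmetic. This sketch uses only the numbers from the article, in billions of dollars; nothing here is independent data:

```python
# Sanity check of the figures quoted above (all amounts in $ billions).
revenue = 3.7                 # reported sales for the year
costs = 4.0 + 3.0 + 1.5       # inference cloud + training compute + staff

loss = costs - revenue
spend_per_dollar = costs / revenue

print(f"implied loss: ${loss:.1f}B")                    # implied loss: $4.8B
print(f"spend per $1 earned: ${spend_per_dollar:.2f}")  # spend per $1 earned: $2.30
```

The implied loss of $4.8B lands close to the reported $5B (the article's figure presumably includes cost items beyond the three listed), and spending roughly $2.30 per dollar of revenue matches the "loses $2 for every $1 it makes" claim.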
Silicon Valley's problem, however, runs even deeper than its AI bet. If AI falls, the tech lords will be left standing without their last shirt.
The basic problem facing Silicon Valley today is, ironically, one of growth. There are no more digital frontiers to conquer. The young, pioneering upstarts—Facebook, Google, Amazon—that struck out toward the digital wilderness are now the monopolists, constraining growth with onerous rentier fees they can charge because of their market-making size. The software industry’s spectacular returns from the launch of the internet in the ’90s to the end of the 2010s would never come back, but venture capitalists still chased the chance to invest in the next Facebook or Google. This has led to what AI critic Ed Zitron calls the “rot economy,” in which VCs overhype a series of digital technologies—the blockchain, then cryptocurrencies, then NFTs, and then the metaverse—promising the limitless growth of the early internet companies. According to Zitron, each of these innovations failed to either transform existing industries or become sustainable industries themselves, because the business case at the heart of these technologies was rotten, pushed forward by wasteful, bloated venture investments still selling an endless digital frontier of growth that no longer existed. Enter AGI, the proposed creation of an AI with an intelligence that dwarfs any single person’s and possibly the collective intelligence of humanity. Once AGI is built, we can easily solve many of the toughest challenges facing humanity: climate change, cancer, new net-zero energy sources.
It's hard to imagine, but today's Silicon Valley is a one-trick pony:
In the mid-2000s they fell into the pot of network effects, and ever since, their platforms have generated lock-in rents whose level they can comfortably dial in via enshittification. THAT is their only "innovation"; everything else is trinkets.
And because they have been telling themselves ever since that they are geniuses, they don't understand why none of their other brain farts works.
Elon Musk says he has bought X with his other company, xAI.
Elon Musk said on Friday that his startup xAI has merged with X, his social network, in an all-stock transaction that values the artificial intelligence company at $80 billion and the social media company at $33 billion.
Here is what really happened:
Musk took the investors' money from his AI startup (which he no longer needs anyway, because the AI bubble is bursting now anyhow, see above) and used it to pay off the debt from the purchase of X (which is already on the verge of bankruptcy anyway).
And he had to retire this debt because the loans were secured with Tesla shares and Tesla is close to collapse. If Tesla falls below a certain value, the banks issue a margin call, which could bring his entire nerd Reich crashing down.
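For readers unfamiliar with the mechanics: a stock-collateralized loan has a share-price threshold below which the bank demands more collateral. A minimal sketch with entirely hypothetical numbers; these are not Musk's actual loan terms:

```python
def margin_call_price(loan: float, shares_pledged: float,
                      maintenance_ratio: float = 1.5) -> float:
    """Share price below which the pledged shares no longer cover the
    required maintenance ratio of the loan, triggering a margin call."""
    return loan * maintenance_ratio / shares_pledged

# Hypothetical: a $10B loan backed by 100M pledged shares, with the bank
# requiring collateral worth at least 150% of the loan at all times.
trigger = margin_call_price(loan=10e9, shares_pledged=100e6)
print(f"margin call below ${trigger:.2f} per share")  # margin call below $150.00 per share
```

The point of the sketch: the threshold is fixed in the loan contract, so a falling share price walks mechanically toward it, and the borrower can do nothing about it except retire the debt, which is exactly what the merger accomplished.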
It was a rescue operation.
All he had to do after that was file X's contracts into the Leitz binder labeled "xAI", done!
Thank you for reading Krasse Links. A lot of work goes into it, and so far none of it is sustainably financed. Since today happens to be exactly the end of the month, I can report that this month I took in exactly €251.92 of an actually necessary €1,500. With a monthly standing order you can help secure the newsletter's future. Details here.
Michael Seemann
IBAN: DE58251900010171043500
BIC: VOHADE2H
Contrapoints has published a new video. That's it. That's the recommendation.
Tante gets to the point on OpenAI's boundary violation of fine-tuning its new image generation model on Studio Ghibli films.
It is a display of power: You as an artist, an animator, an illustrator, a writer, any creative person are powerless. We will take what we want and do what we want. Because we can.
Otherwise, all that remains is to say, in the words of Hayao Miyazaki:
“I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology into my work at all.”
[…]
“I feel like we are nearing the end of times. We humans are losing faith in ourselves.”
On CNN, Allison Morrow isn't buying the narrative that Apple has failed at generative AI. Everyone has failed at it.
Apple, like every other big player in tech, is scrambling to find ways to inject AI into its products. Why? Well, it’s the future! What problems is it solving? Well, so far that’s not clear! Are customers demanding it? LOL, no. In fact, last year the backlash against one of Apple’s early ads for its AI was so hostile the company had to pull the commercial.
The truth is: beyond the use cases that already exist, AI is not (or barely) product-ready.
Back in June, Apple floated a compelling scenario for its newfangled Siri. Imagine yourself, frazzled and running late for work, simply saying into your phone: Hey Siri, what time does my mom’s flight land? And is it at JFK or LaGuardia? In theory, Siri could scan your email and texts with your mom and give you an answer. That saves you several annoying steps of opening your email to find the flight number, copying it, then pasting it into Google to find the flight’s status.
If it’s 100% accurate, it’s a fantastic time saver. If it is anything less than 100% accurate, it’s useless. Because even if there’s a 2% chance it’s wrong, there’s a 2% chance you’re stranding mom at the airport, and mom will be, rightly, very disappointed. Our moms deserve better!
My take: the way the transformer works makes it impossible to overcome a certain fuzziness, because it is exactly this fuzziness that makes the illusion of intelligence possible in the first place.
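A toy sketch of what I mean, with made-up logits and a three-word vocabulary rather than a real model: the softmax keeps every alternative at nonzero probability, and sampling from that distribution is exactly the fuzziness that produces both fluent variation and irreducible error.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "Lyon", "Berlin"]   # toy next-token candidates
logits = [4.0, 2.0, 1.0]              # toy scores, not from a real model

probs = softmax(logits)               # ≈ [0.84, 0.11, 0.04]

# Sampling keeps the losers alive: roughly 1 in 6 continuations picks a
# less likely token, no matter how confident the model looks.
random.seed(0)
samples = [random.choices(vocab, weights=probs)[0] for _ in range(1000)]
print({w: samples.count(w) for w in vocab})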
A study examined citations in AI search engines:
Citation error rates varied notably among the tested platforms. Perplexity provided incorrect information in 37 percent of the queries tested, whereas ChatGPT Search incorrectly identified 67 percent (134 out of 200) of articles queried. Grok 3 demonstrated the highest error rate, at 94 percent. In total, researchers ran 1,600 queries across the eight different generative search tools.
That makes sense: the more precisely a piece of information has to be reproduced, the more heavily the built-in fuzziness weighs in.
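A quick illustration of that scaling, with illustrative numbers that are not the study's methodology: if each token of a citation is reproduced correctly with probability p, an exact match of all n tokens only succeeds with probability p**n, so per-token fuzziness compounds brutally.

```python
def exact_match_probability(p_per_token: float, n_tokens: int) -> float:
    """Chance that an n-token string is reproduced with zero errors,
    assuming independent per-token accuracy p."""
    return p_per_token ** n_tokens

# 99% per-token accuracy sounds excellent, but over a 40-token citation
# (title, authors, journal, year) it already fails a third of the time:
print(f"{exact_match_probability(0.99, 40):.2f}")  # 0.67
```

A loose summary or paraphrase tolerates many wrong tokens, while a citation tolerates none, which is why exactly this task surfaces the fuzziness so visibly.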
Even "vibe coding" (programming via LLM, whose everyday usefulness I certainly understand) turns out on closer inspection to be … difficult.
The models took on tasks cumulatively worth hundreds of thousands of dollars on Upwork, but they were only able to fix surface-level software issues, while remaining unable to actually find bugs in larger projects or find their root causes. These shoddy and half-baked „solutions“ are likely familiar to anyone who’s worked with AI — which is great at spitting out confident-sounding information that often falls apart on closer inspection.
Though all three LLMs were often able to operate „far faster than a human would,“ the paper notes, they also failed to grasp how widespread bugs were or to understand their context, „leading to solutions that are incorrect or insufficiently comprehensive.“
Read my lips: there will be no agents.
Jason Koebler sums up AI slop as brute-force hacking of the newsfeed algorithm.
The best way to think of the slop and spam that generative AI enables is as a brute force attack on the algorithms that control the internet and which govern how a large segment of the public interprets the nature of reality. It is not just that people making AI slop are spamming the internet, it’s that the intended “audience” of AI slop is social media and search algorithms, not human beings.
What this means, and what I have already seen on my own timelines, is that human-created content is getting almost entirely drowned out by AI-generated content because of the sheer amount of it. On top of the quantity of AI slop, because AI-generated content can be easily tailored to whatever is performing on a platform at any given moment, there is a near total collapse of the information ecosystem and thus of „reality“ online. I no longer see almost anything real on my Instagram Reels anymore, and, as I have often reported, many users seem to have completely lost the ability to tell what is real and what is fake, or simply do not care anymore.
What follows is that two algorithms are being short-circuited: generative AIs try to hack the newsfeed algorithm and siphon off attention as their reward. The first victims will be influencers.
Both Mustafa and Bitton tell users that it makes no sense trying to become the next Mr. Beast, who they see as a singular figure. “All these ‘premium’ channels with perfect production? They’re slowly dying. Why? Because they need a $5,000 camera, studio lighting, professional editing, days to produce… And for what?,” Bitton writes. “To compete with Mr Beast and barely get 1000 views? Heck even if they get 100k views, it would still not be worth it. Because it doesn’t even compare with what creators who pump out consistent Shorts make.”
All of this is very disgusting, indeed, but is it a sustainable business model?
Even though many of the AI images and reels I see have millions of views, likes, and comments, it is not clear to me that people actually want this, and many of the comments I’ve seen are from people who are disgusted or annoyed. The strategy with these types of posts is to make a human linger on them long enough to say to themselves “what the fuck,” or to be so horrified as to comment “what the fuck,” or send it to a friend saying “what the fuck,” all of which are signals to the algorithm that it should boost this type of content but are decidedly not signals that the average person actually wants to see this type of thing. It’s brute forcing a weakness in the Instagram algorithm that takes any engagement at all as positive signals, and the people creating this type of content know this.
The fusion of newsfeed algorithm and generative AI is the final semantic closure: the limbic system swaps out the neocortex for slop machines.
A study on chatbot use, conducted in part by OpenAI itself, brings a creepy finding to light.
Though the vast majority of people surveyed didn’t engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a „friend.“ The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model’s behavior, too.
[…]
Perhaps the biggest takeaway, however, was that prolonged usage seemed to exacerbate problematic use across the board. Whether you’re using ChatGPT text or voice, asking it personal questions, or just brainstorming for work, it seems that the longer you use the chatbot, the more likely you are to become emotionally dependent upon it.
I have already described here the enormous potential of automated "abusive relationships" as a business model, and if the AI industry ends up saving anything, it will presumably be the ruthless exploitation of emotional dependency.