Where is mspro?

People wonder all day long: where is mspro, actually? Or maybe they don't. But if they do, they can drop by here any time and check. My location data from Foursquare and my current geoposition are continuously kept up to date here.

Where I will be:


 
(Subscribe to the calendar: XML – iCal – HTML)

Where I have been:

 
(Foursquare feed as RSS: https://feeds.foursquare.com/history/1e8a2aedfc8cb90b2639cfb4dc80a181.rss)

 

Latest posts

Krasse Links No 75

Welcome to Krasse Links No 75. As you may have noticed, I have somewhat overextended myself in recent months and now, at the end of the year – surprise – I am noticeably exhausted. I don't know whether I'll manage another issue this year, but by next year at the latest things will continue fresh. So, in advance: happy holidays to all of you.

But now, hold your feedback loops ready: today we homogenize the hivemind effect of total surveillance with “the buzz”.


In this very readable essay on Aeon, the Indian-born AI researcher Deepak Varuvel Dennison draws a wide arc from the local situatedness of a large part of human knowledge of the world to the homogenization of that knowledge by LLMs.

The irony isn’t lost on me that this dilemma has emerged through my research at a university in the United States, in a setting removed from my childhood and the very context where traditional practices were part of daily life. At Cornell University in New York, I study what it takes to design responsible AI systems. My work has been revealing to me how the digital world reflects profound power imbalances in knowledge, and how this is amplified by generative AI (GenAI). The early internet was dominated by the English language and Western institutions, and this imbalance has hardened over time, leaving whole worlds of human knowledge and experience undigitised. Now with the rise of GenAI – which is trained on this available digital corpus – that asymmetry threatens to become entrenched.

Dennison takes a refreshingly materialist view of language, and thus of the knowledge structures that get reflected in LLMs.

To understand why this matters, we must first recognise that languages serve as vessels for knowledge – they are not merely communication tools, but repositories of specialised understanding. Each language carries entire worlds of human experience and insight developed over centuries: the rituals and customs that shape communities, distinctive ways of seeing beauty and creating art, deep familiarity with specific landscapes and natural systems, spiritual and philosophical worldviews, subtle vocabularies for inner experiences, specialised expertise in various fields, frameworks for organising society and justice, collective memories and historical narratives, healing traditions, and intricate social bonds. […]

When AI systems lack adequate exposure to a language, they have blind spots in their comprehension of human experience. For example, data from Common Crawl, one of the largest public sources of training data, reveals stark inequalities. It contains more than 300 billion web pages spanning 18 years, but English dominates with 44 per cent of the content. What’s even more concerning is the imbalance between how many people speak a language in the physical world and how much that language is represented in online data. Take Hindi, for example, the third most spoken language globally, spoken by around 7.5 per cent of the world’s population. It accounts for only 0.2 per cent of Common Crawl’s data. The situation is even more dire for Tamil, my own mother tongue. Despite being spoken by more than 86 million people worldwide, it represents just 0.04 per cent of the data. In contrast, English is spoken by approximately 20 per cent of the global population (including both native and non-native speakers), but it dominates the digital space by an exponentially larger margin. Similarly, other colonial languages such as French, Italian and Portuguese, with far fewer speakers than Hindi, are also better represented online.

The underrepresentation of Hindi and Tamil, troubling as it is, represents just the tip of the iceberg. In the computing world, approximately 97 per cent of the world’s languages are classified as ‘low-resource’. This designation is misleading when applied beyond computing contexts: many of these languages boast millions of speakers and carry centuries-old traditions of rich linguistic heritage. They are simply underrepresented online or in accessible datasets. In contrast, ‘high-resource’ languages have abundant and diverse digital data available. A study from 2020 showed that 88 per cent of the world’s languages face such severe neglect in AI technologies that bringing them up to speed would require herculean – perhaps impossible – efforts. It wouldn’t be surprising if the status quo is not too different even now.

For the effects of the distortions produced by the Western-hegemonic perspective, Dennison offers glass façades as an example from architecture. In Western European modernism, glass façades were conceived and marketed as a functional, efficient design because they save energy by using natural light for illumination and even for heating. Through Western hegemony, however, the building principle also took hold in regions where glass buildings are the opposite of efficient, because in hot climates they have to be elaborately cooled from the inside.

Epistemologies are not just abstract and cognitive. They are physically embodied around us, with a direct impact on our bodies and lived experiences. To understand why, let’s consider an example that contrasts sharply with the kind of Indigenous construction practices that Dharan seeks to revive: high-rise buildings with glass façades in the tropics.

But it is not just that marginal and local knowledge is underrepresented in chatbots because of the training data; LLMs also have a built-in bias toward the most popular paths, which multiplies the effect of underrepresentation and popular over-emphasis once again.

The problem is far deeper than gaps in training data. By design, LLMs also tend to reproduce and reinforce the most statistically prevalent ideas, creating a feedback loop that narrows the scope of accessible human knowledge.

Why so? The internal representation of knowledge in an LLM is not uniform. Concepts that appear more frequently, more prominently, or across a wider range of contexts in the training data tend to be more strongly encoded. For example, if pizza is commonly mentioned as a favourite food across a broad set of training texts, the model is more likely to respond with ‘pizza’ when asked ‘What’s your favourite food?’ Not because the LLM likes pizza, but because that association is more statistically prominent.

More subtly, the model’s output distribution does not directly reflect the frequency of ideas in the training data. Instead, LLMs often amplify dominant patterns in a way that distorts their original proportions. This phenomenon can be referred to as ‘mode amplification’. Suppose the training data includes 60 per cent references to pizza, 30 per cent to pasta, and 10 per cent to biriyani as favourite foods. One might expect the model to reproduce this distribution if asked the same question 100 times. However, in practice, LLMs tend to overproduce the most frequent answer. Pizza may appear more than 60 times, while less frequent items like biriyani may be underrepresented or omitted altogether. This occurs because LLMs are optimised to predict the most probable next ‘token’ (the next word or word fragment in a sequence), which leads to a disproportionate emphasis on high-likelihood responses, even beyond their actual prevalence in the training corpus. Together, these two principles – uneven internal knowledge representation and mode amplification in output generation – help explain why LLMs often reinforce dominant cultural patterns or ideas. […]

Ask ChatGPT about a controversial topic and you’ll get a diplomatic response that sounds like it was crafted by a panel of lawyers and HR professionals who are overly eager to please you. Ask Grok the same question and you might get a sarcastic quip followed by a politically charged take that would fit right in at a certain tech billionaire’s dinner party.
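An aside from me, not from Dennison's essay: the mode-amplification point is easy to reproduce in a toy simulation. The sketch below assumes a hypothetical training distribution of 60/30/10 for pizza/pasta/biriyani, treats those shares as the model's learned next-token probabilities, and shows how temperature scaling, as decoders commonly apply it, pushes the most frequent answer well beyond its share in the data.

```python
import numpy as np

rng = np.random.default_rng(0)

foods = ["pizza", "pasta", "biriyani"]
train_freq = np.array([0.60, 0.30, 0.10])  # hypothetical shares in the training data

def sampled_shares(temperature: float, n: int = 10_000) -> dict:
    # Treat the training frequencies as the model's learned next-token
    # probabilities, then sharpen them with temperature scaling before
    # sampling, as decoders typically do.
    logits = np.log(train_freq)
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    draws = rng.choice(len(foods), size=n, p=probs)
    shares = np.bincount(draws, minlength=len(foods)) / n
    return dict(zip(foods, shares.round(3)))

print(sampled_shares(1.0))  # roughly reproduces the 60/30/10 distribution
print(sampled_shares(0.5))  # the most frequent answer climbs to roughly 78%
```

At temperature 1.0 the sampled shares roughly match the data; at 0.5, pizza alone takes about 78 percent, and with greedy decoding it would take 100 percent – the “disproportionate emphasis on high-likelihood responses” Dennison describes.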

All of this ends in a self-reinforcing loop of homogenization and extreme streamlining of knowledge representation, and thus in an impoverishment of us all.

The internet, as the primary source of knowledge for AI models, becomes recursively influenced by the very outputs those models generate. With each training cycle, new models increasingly rely on AI-generated content, reinforcing prevailing narratives and further marginalising less prominent perspectives. This risks creating a feedback loop where dominant ideas are continuously amplified while long-tail or niche knowledge fades from view.

What we celebrate as a knowledge revolution is in reality a semantic impoverishment revolution on a global scale, one that reduces the diversity of human knowledge and human lived experience to a slopified JPEG of the internet.

Maybe the intelligence we most need is the capacity to see beyond the hierarchies that determine which knowledge counts. Without that foundation, regardless of the hundreds of billions we pour into developing superintelligence, we’ll keep erasing knowledge systems that took generations to develop.


A study on the hivemind effect: an increasing homogeneity of reproduced semantic paths both within individual LLMs and across different LLMs.

Language models (LMs) often struggle to generate diverse, human-like creative content, raising concerns about the long-term homogenization of human thought through repeated exposure to similar outputs. Yet scalable methods for evaluating LM output diversity remain limited, especially beyond narrow tasks such as random number or name generation, or beyond repeated sampling from a single model. We introduce Infinity-Chat, a large-scale dataset of 26K diverse, real-world, open-ended user queries that admit a wide range of plausible answers with no single ground truth. […]
Using Infinity-Chat, we present a large-scale study of mode collapse in LMs, revealing a pronounced Artificial Hivemind effect in open-ended generation of LMs, characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and more so (2) inter-model homogeneity, where different models produce strikingly similar outputs.
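To make the two measurements a bit more tangible, here is a minimal sketch of my own (not the paper's method, which uses far more careful diversity metrics over the 26K Infinity-Chat prompts) of how inter-model homogeneity could be proxied: collect different models' answers to the same open-ended prompt and average a crude lexical-overlap score over all pairs.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    # Word-level Jaccard overlap between two responses; a crude stand-in
    # for the semantic-similarity measures a real study would use.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta or tb) else 0.0

def inter_model_homogeneity(responses: dict[str, str]) -> float:
    # Mean pairwise similarity of different models' answers to one prompt.
    # Values close to 1.0 would point toward the "Artificial Hivemind" pattern.
    pairs = list(combinations(responses.values(), 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical answers to the open-ended prompt "Describe your perfect day":
answers = {
    "model_a": "A quiet morning walk along the river, coffee in hand.",
    "model_b": "A quiet walk along the river with coffee in hand.",
    "model_c": "Skydiving over the desert at sunrise, then a long nap.",
}
print(round(inter_model_homogeneity(answers), 3))
```

The same average computed over repeated samples from a single model would capture the intra-model repetition side of the effect.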


At universities in Texas, LLMs are being used to scan course descriptions for “woke” terminology and to censor them.

At Texas A&M, internal emails show staff are using AI software to search syllabi and course descriptions for words that could raise concerns under new system policies restricting how faculty teach about race and gender.

At Texas State, memos show administrators are suggesting faculty use an AI writing assistant to revise course descriptions. They urged professors to drop words such as “challenging,” “dismantling” and “decolonizing” and to rename courses with titles like “Combating Racism in Healthcare” to something university officials consider more neutral like “Race and Public Health in America.”


Thank you very much for reading Krasse Links. A lot of work goes into it, and so far none of it is sustainably funded. This month I'm already above 700 euros. The goal of 1,500/month is still a long way off, but my subscriber-to-revenue ratio is already unusually high. Thank you so, so much for that! With a monthly standing order you can help secure the future of this newsletter. Details here.

Michael Seemann
IBAN: DE58251900010171043500
BIC: VOHADE2H

This newsletter is not built for reach, but it still wants to reach people. You can help it by recommending it to friends and sharing it on social media.


Francesca Bria and her team at the Autonomy Institute have charted the broligarchy of the new tech fascism in a vivid infographic website, The Authoritarian Stack. You can, for example, trace all the ideological, political, and business connections between the key players through interactive graphics and charts.

Also illuminating is the path they track through the various layers.

Michael Bruns has made a video explainer about the project that is well worth watching.


Mohammed R. Mhawish is one of the few Palestinian journalists in Gaza who are still alive, and in NY Mag he writes about how Israeli total surveillance has changed his world.

One of these people, Marwan, a 60-year-old hospital administrator in Gaza City, at first objected to my line of questioning. (I’m using only first names. Giving their full names in a report about surveillance feels like an offering to the occupation.) “In the face of mass slaughter,” Marwan said, “what difference does it make that they can see my Facebook posts or hack my calls or monitor my home?”
But soon Marwan could not stop talking about how the constant awareness of being watched had twisted and narrowed his world. He said he now avoids calling his brother “lest he ask whether any rockets were fired from the area or whether the Israelis had arrived in the area,” and those words be misread or distorted by unseen listeners. He described the collapse of connection itself: the way fear moves into a family, one phone call at a time, until even expressions of love begin to feel dangerous.
Khaled, who worked for nearly three decades as an ambulance driver for Al-Awda Hospital, said that during an interrogation, an officer showed him a private text message he’d sent his family. “Everything we say, they can see,” Khaled said. The text was mundane; the point, he felt, was to show this 61-year-old father of seven how deeply they could peer into his private life. People told me they have even extinguished their own thoughts, as if the interrogators and listeners could see inside their heads. “Nobody doesn’t have political leanings,” one man named Mohammed told me. “But I’ve killed it. I’ve prohibited myself from speaking on this. I’ve locked it with a key.”

In Gaza you can begin to understand what it means when your communications provider is not just a profit-maximizing company that wants to fleece you, but part of a techno-military complex that can wipe you out at any moment.

Everyone had stories of being watched. Mary, a 26-year-old writer, grew up in a two-story house on the more affluent side of Gaza City, where people went to stroll near the sea on streets lined with shops and airy schoolyards. It had a simple white façade, tall windows, a small balcony, and eight old araucaria trees her father had diligently cared for shading the gate. Before the war, passersby slowed to admire them. By this summer, the bombardment had cracked part of the roof open. At around 4:30 a.m. on July 27, while she slept in one of the remaining rooms, Mary woke to a faint buzz that seemed to come from just beside her. “I froze,” she told me. “I could not move. I could not scream.” A dark square hovered near the ceiling. She stared at it, motionless, until it drifted out of the room and exited through a window. If they could fly a drone to her bedside, they could see everything, she told me. Weeks later, her 35-year-old neighbor was shot dead by an armed drone while drying laundry on her balcony, standing beside her 4-year-old son, Mary said. “It is not death that we fear,” she told me. “It is the terror that comes before it.”

It is not only the buildings that lie in rubble, but also trust in the communications infrastructure, and with it a large part of social relationships.

Life in Gaza for the past two years has been a process of losing everything visible — our families, homes, streets. It also means losing what cannot be seen: the private space of the mind, the intimacy between people, and the ability to speak without fear of being monitored by a machine. A poll conducted just weeks before the October cease-fire by the Palestine-based research organization Institute for Social and Economic Progress found that nearly two-thirds of Gazans believed they were constantly watched by the Israeli government. This is the dystopian consequence of technology, supplied in part by American companies, being placed into the hands of authorities who have virtually unlimited control over a captive population they have openly villainized. It is the culmination of decades of monitored occupation, a totalitarian nightmare spliced with genocidal terror, a system that is already evolving and growing for whatever comes next. The old admonition of authoritarian regimes everywhere — If you have nothing to hide, you have nothing to be afraid of — has no meaning in Gaza.

Mhawish vividly shows that total surveillance is not new. The origins of the system go back to the founding of Israel, and it has simply been continuously refined. With Palantir, the Microsoft cloud, and AI-driven surveillance of everything and everyone, the system has merely reached its provisional peak.

Palantir, which had purchased an ad in the New York Times proclaiming that PALANTIR STANDS WITH ISRAEL, entered into an agreement to provide Israel’s military with technology “in support of war-related missions,” according to Bloomberg. Israel’s military intelligence unit reportedly used Google Photos, combined with tech from Israeli company Corsight AI, to enable its facial-recognition program to identify faces from a crowd and footage. Google and Amazon, which supply the Israeli government with advanced cloud-storage services and AI capabilities, were reported to have included a covert system in their contract to warn Israel when foreign courts compelled the companies to hand over Israeli government data but barred them from notifying Israel directly.

The gaps in the data are continuously filled by soldiers' photo-ID apps on the ground and by constant random kidnappings and torture by the IDF.

A woman in her early 40s who asked to remain anonymous worked at a beauty salon before the war. She was detained as she was marching south from where she was sheltering in the north of Gaza. She was positioned before two devices to log features of her face: a phone to capture her image and another screen to process it, she surmised. She turned her head away, refusing to look at the camera. A soldier forced her face toward it. Then a rifle butt struck her skull.

Her full name surfaced instantly. A soldier read aloud her first name, her father’s name, her grandfather’s, and her family name. “He did not ask me for anything, no ID, nothing, to know who I was,” she said. An officer glanced at the result and said they were taking her. In a pen, soldiers stripped her to her undergarments, she said. When her blindfold slipped, she saw four soldiers pointing a camera at her. She screamed, tried to cover herself, cried, and was struck in the chest as the blindfold was yanked back over her eyes. They called her “slut,” shoved her into a small cage, and warned that she would be beaten if she disobeyed. “What if we publish these?” a soldier said in Arabic while photographing her. Phones, cameras, watches — everything around was recording, she believed.

She said she was shuttled between the cages, the Sde Teiman facility, and Damon prison in Israel. At Sde Teiman — now notorious for scores of reports of horrific abuse — she was raped four times, she said. During her period, guards mocked her bleeding and shouted, “You smell.” They knew she had a teenage daughter. They knew she had worked at a beauty salon. They cut off her hair. “They’ve weaponized our information,” she said. After 32 days in detention, she was released.

But ever since automated killing systems like Lavender started building on the total-surveillance apparatus, it has been taken far more seriously.

Algorithms sorted people by perceived threat, according to reporting by +972 Magazine and Local Call. Each score could determine who would live or die. Intelligence sources told reporters that one of these systems, designed to score individuals by supposed affiliation with a Palestinian armed group, produced tens of thousands of names. Approval for a strike could reportedly take less than 30 seconds. Another program classified buildings by type and occupancy, marking them for strikes. AI tools, created in partnership with enlisted soldiers in Unit 8200 and reserve soldiers working at companies such as Google, Microsoft, and Meta, analyzed Arabic text messages and social-media posts, according to the New York Times. When those classifications were combined with targeting a suspected fighter at home rather than when alone, the result was the annihilation of families whose only fault was proximity. These advanced systems were mixed with older ones: the use of informants and spies and searches of homes and offices. I heard story after story of soldiers separating people, photographing them, and searching phones, part of broader screening and detention practices during the war.

With the increased use of armed quadcopter drones, the situation has escalated yet again.

In Gaza, we call the drone zanana — “the buzz.” After October 2023, it became the soundtrack to our lives. We could tell the difference between the models that could kill and those that only watched. People in Gaza later told me they avoided the notorious Gaza Humanitarian Foundation distribution sites (which shuttered in November) not only because they feared being shot and killed, as hundreds of people were, but also because they feared the same cameras watching the crowds were matching their faces to databases — that even the act of seeking food could expose them. (In July, two unnamed contractors working as security for the distribution sites told the Associated Press that this is exactly what was happening.) A machine could look into people’s homes, register their presence, and flag them. If we tried to live our lives as if the surveillance did not exist, it could lead to our deaths.

“We’ve adapted to having our entire lives under surveillance,” he said. He even avoided using opaque bags when going to the market. “I’d try to have a transparent bag,” he said. “I wouldn’t leave with a backpack, lest they misinterpret it.” In his apartment overlooking the sea, Mohammed struggled with whether to close the curtains to hide from the drones outside. Safety, he had come to believe, depended on leaving nothing to interpretation.

Mhawish himself is repeatedly threatened with death by the IDF, like so many of the few journalists still alive, and at some point he realizes that he himself is on the kill list. When he is picked out and interrogated at the Netzarim checkpoint while fleeing to Egypt with his family, he experiences firsthand just how much the IDF knows about him.

He started to move through my life: studies and work as well as the places I’d reported from — Al-Shifa, Al-Awda, Al-Daraj — naming them in sequence. He asked me about my relatives. When I hesitated, he filled in my cousins’ names, naming a neighborhood where my family sheltered. Whether I answered or faltered, his notes absorbed it all the same. The interrogation lasted hours. Over those endless minutes, what became clear to me was that the interrogator held on the screen before him a copy of my life built from relentless watching, compiled from calls, cameras, and coordinates.

Then he began talking about my son. “Is Rafik still out there? How is his chest?” For a moment, my mind went blank. It was a question from inside my own house. It took me back to 2022, when Rafik was just 11 months old, during our time in the UAE. Rafik had contracted a lung infection and he spent two nights in a Dubai hospital. It was not a big deal. He was fine. But here it was, an episode from my life I’d never written about or broadcast. The interrogator said it like a box he was checking. Their knowledge of my son’s brief illness had to come from somewhere. Hospital records from the UAE? Recordings they’d kept of my phone calls? Copies of my emails? It felt like they had stepped inside my mind.

The interrogation intensified. A soldier behind me struck the base of my neck with his rifle when I denied participating in attacks on Israel. “Tell the truth,” he said in English. Each question from the interrogator landed like a test. I stuck to the mundane: that we’d moved south for food and that we were “following orders” — their phrase, returned to them in the hope it would spare my family. Then he brought up the bombing of our home. He called my reporting “advertisements.” He said I’d nearly gotten my family killed.

In the end he was released and was able to flee to Egypt, which is why we know his story.

  1. Die Margen der Kunst. Michael Seemann | kulturBdigital-Konferenz 2025 – YouTube
  2. Krasse Links No 74
  3. Die Pfadgelegenheit
  4. Das Dividuum
  5. Von der Macht-Interdependenz Theorie zur Wert-Formel
  6. Krasse Links No 73
  7. Einreichung Platformpower 39c3
  8. Krasse Links No 72
  9. Krasse Links No 71