AI hunger strike surges: Day 3 at Anthropic, 3 arrests, 60 protesters


Activists are escalating an AI hunger strike and coordinated protests outside multiple AI companies, converging on Anthropic, OpenAI, and Google DeepMind in recent weeks with concrete demands to pause “frontier” systems and publish independent safety tests. The AI hunger strike reached Day 3 outside Anthropic’s San Francisco headquarters as allied actions drew three arrests at OpenAI and more than 60 participants at DeepMind’s London offices, signaling a widening, data‑driven campaign against accelerated AGI development [1][2][5].

Key Takeaways

– Day 3 of an AI hunger strike at Anthropic, led by Guido Reichstadter, urges a halt to frontier AI amid emergency-level warnings from experts and advocacy groups. [1]
– A Feb 22, 2025 protest at OpenAI drew about two dozen people and ended with three trespassing arrests outside the Mission Bay offices. [2]
– A July 2, 2025 London action brought 60+ protesters outside Google DeepMind, challenging Gemini 2.5 Pro safety promises and demanding publication of third-party test results. [5]
– Activists cited OpenAI’s roughly $80 billion valuation while urging engineers to quit and regulators to enforce pauses on frontier technologies. [3]
– A coalition is growing: PauseAI and NoAGI mobilize dozens, while StopGenAI backs single-person fasts and enforcement of 2024 safety commitments. [1][5]

Inside the AI hunger strike at Anthropic

Outside Anthropic’s San Francisco headquarters, activist Guido Reichstadter marked Day 3 of an AI hunger strike, arguing that rapid progress toward frontier AI and AGI constitutes an emergency that warrants an immediate pause by top labs. His action, framed as a single‑person fast, is tied to grassroots organizing by campaigns such as StopGenAI and reflects a strategy of direct, attention‑grabbing protest calibrated to safety timelines and model releases [1].

Reichstadter’s messaging pairs urgency with specificity: a call to halt “frontier” development rather than a broad moratorium on all AI research. The hunger strike is part of a pattern described by organizers and experts warning that fast‑tracked AGI development could escalate risks beyond existing governance capacity, a theme amplified across events targeting multiple hubs in the AI ecosystem [1].

How the AI hunger strike converged with OpenAI protests

Protesters gathered outside OpenAI’s Mission Bay offices on Feb 22, 2025—roughly two dozen in total—underscoring how the AI hunger strike’s narrative has merged with broader anti‑AGI demonstrations demanding tighter oversight and investigations into safety culture. Police arrested three protesters for trespassing, while organizers—again including Reichstadter—argued that AGI poses existential risk and called for a formal probe into the November 2024 death of former OpenAI employee Suchir Balaji [2].

The demonstrations coincided with high‑profile product cycles and corporate milestones that protesters say heighten stakes. As OpenAI introduced new audio and video capabilities in a major update, organizers from PauseAI urged engineers to quit and demanded a pause on frontier AI development, emphasizing that regulatory intervention is necessary to slow deployment speed. The company’s valuation—reportedly around $80 billion—was cited by activists as a sign that market incentives may outrun safety commitments without external checks [3].

While the AI hunger strike is a single‑person action, it complements civil resistance tactics such as chanting outside launches and holding signs targeting specific safety benchmarks. The San Francisco protests’ measurable outputs—two dozen attendees and three arrests—offer a concrete snapshot of mobilization scale and law‑enforcement response in a city central to the industry’s growth [2][3].

The AI hunger strike’s global echoes at DeepMind London

On July 2, 2025, more than 60 protesters assembled outside Google DeepMind’s London headquarters in a mock trial of the company’s safety practices, a turnout that eclipsed the San Francisco crowd by a wide margin and emphasized the campaign’s traction beyond the United States. Organized by PauseAI, the event spotlighted claims that Google had broken safety promises tied to the release cycle of Gemini 2.5 Pro, introduced in April [5].

Participants chanted “Test, don’t guess,” demanding publication of third‑party evaluations—an operational request that points to measurable accountability rather than open‑ended bans. Protesters also argued that commitments made at the 2024 AI Safety Summit had not been upheld, and they pressed for clear, independently verified testing timelines before major releases, framing transparency as a quantifiable precondition for deployment [5].

What protesters are demanding, by the numbers

Across the three hubs, the demands coalesce around numeric thresholds and procedural steps: stop frontier AI development at Anthropic (Day 3 fast as of reporting), halt or slow OpenAI’s feature expansion while regulators assert oversight, and publish third‑party evaluations of Gemini 2.5 Pro at DeepMind. These calls lean on discrete milestones—product updates, valuations, and dated commitments—linking governance requests to auditable artifacts such as evaluation reports and summit pledges [1][3][5].

At OpenAI, activists tied their appeals to a company with a reported $80 billion valuation, contending that outside enforcement is necessary when internal safety systems operate alongside strong market pressures. The Feb 22 gathering’s size and three arrests quantify public dissent and policing thresholds in a setting where product updates add audio and video modalities that expand risk surfaces [2][3].

Organizers and tactics: PauseAI, NoAGI, and StopGenAI

The movement comprises both structured groups and individual actions. PauseAI and NoAGI organized events drawing dozens to OpenAI’s offices, with messaging that targeted military AI applications and the pursuit of AGI, and that called for external regulation, independent oversight, and transparent testing timelines. Organizers asserted that companies had rescinded prior safety promises, pressing for formalized third‑party checks rather than internal assurance alone [4].

StopGenAI’s support for single‑person direct actions such as Reichstadter’s fast signals a hybrid model: high‑visibility individuals anchoring sustained media narratives, while coalitions coordinate larger rallies synchronized with corporate announcements and model releases. The interplay between a Day 3 hunger strike and 60‑plus gatherings suggests an intentional mix of persistent pressure and peak‑event surges aligned to release calendars [1][5].

Product cycles, safety benchmarks, and public pressure

KQED’s reporting highlights the protest timing: organizers targeted OpenAI during a major feature unveiling that added audio and video capabilities, translating technical milestones into protest flashpoints. Liron Shapira of PauseAI argued that regulators must act, underscoring how activists map governance demands onto product roadmaps—signal moments when internal safety claims can be tested against external oversight proposals and community norms [3].

At DeepMind, the July 2 mock trial tethered its grievance to the April release of Gemini 2.5 Pro, demanding that Google publish third‑party evaluations as a prerequisite to trust. The slogan “Test, don’t guess” converts a general safety ethos into a specific process requirement—independent evaluations with clear methodologies and thresholds—which protesters contend honor commitments associated with the 2024 AI Safety Summit [5].

Quantifying risk narratives and accountability asks

Protest organizers quantify risk not only through catastrophic hypotheticals but via measurable governance deficits: absence of public third‑party evaluations, opacity in safety testing timelines, and perceived slippage on documented commitments. By pointing to dates—Nov 2024 (employee death under investigation requests), Feb 22, 2025 (three arrests), April (Gemini 2.5 Pro release), and July 2 (60+ turnout)—activists establish a chronological ledger of events to bind corporate claims to accountability milestones [2][5].

The AI hunger strike gives this ledger a daily cadence—Day 1, Day 2, Day 3—making safety inaction legible as time passes. Linking a single-person fast with larger protests supplies two complementary signals: an ongoing meter of urgency and periodic spikes in collective support. Both are converted into numbers—days fasting, headcounts, arrests, valuations—to create a dataset that campaigners argue regulators should weigh alongside capability benchmarks and deployment risk assessments [1][3].

Why these three firms are focal points

Anthropic, OpenAI, and Google DeepMind are recurring protagonists in AGI and frontier AI debates, serving as targets where public pressure might yield maximal safety dividends per unit of activism. OpenAI’s $80 billion reported valuation embodies a scale at which even small probability risks could yield outsized societal costs, protest organizers argue, unless policy slows or conditions releases with verified safeguards. Product updates that add modalities, like audio and video, compound complexity and stretch existing oversight mechanisms, further elevating the issue’s salience [3].

DeepMind’s London turnout—more than 60—illustrates the campaign’s transatlantic reach and indicates that European safety discourse, catalyzed by the 2024 summit, is shaping protest tactics that emphasize test publication and independent oversight. Anthropic’s Day 3 hunger strike anchors the narrative with a human metric of time and sacrifice, framing the demand to halt frontier systems as urgent rather than abstract [1][5].

What happens next for the AI hunger strike movement

Activists appear set to continue coupling individual fasts with expansionary demonstrations pegged to product launches and public valuations, escalating pressure for audits and pause mechanisms. Expect more date‑stamped actions that track company releases and regulatory hearings, more countable outputs (attendance, arrests, days of fasting), and renewed demands for third‑party evaluation reports—especially around models marketed as stepping stones toward AGI [1][3][4][5].

If companies respond with detailed, independently verified evaluations and transparent testing timelines, it would address the protesters’ most quantifiable demands; if not, organizers are likely to repeat London’s 60‑plus format and San Francisco’s arrest‑risking tactics to maintain visibility. In either scenario, the AI hunger strike supplies a persistent, numeric heartbeat for a movement measuring progress in days, headcounts, and publicly posted test results [1][2][5].

Sources:

[1] Futurism – Anti‑AI Activist on Day Three of Hunger Strike Outside Anthropic’s Headquarters: https://futurism.com/ai-hunger-strike-anthropic

[2] San Francisco Chronicle – Three arrested in S.F. after protesting AI technology outside OpenAI headquarters: https://www.sfchronicle.com/bayarea/article/three-arrested-s-f-protesting-ai-technology-20181600.php

[3] KQED – As OpenAI Unveils Big Update, Protesters Call for Pause in Risky ‘Frontier’ Tech: https://www.kqed.org/news/11985949/as-openai-unveils-big-update-protesters-call-for-pause-in-risky-frontier-tech

[4] VentureBeat – Protesters gather outside OpenAI office, opposing military AI and AGI: https://venturebeat.com/ai/protesters-gather-outside-openai-office-opposing-military-ai-and-agi/

[5] Times of India – Google, you broke your word on …, shout protestors outside Google DeepMind’s London headquarters: https://timesofindia.indiatimes.com/technology/tech-news/google-you-broke-your-word-on-shout-protestors-outside-google-deepminds-london-headquarters/articleshow/122203297.cms
