Altman flags alarming dead internet risk as bots hit ~50%


Sam Altman, the CEO of OpenAI, publicly entertained the “dead internet theory” on 4 September 2025, posting that there now seem to be “a lot of LLM-run twitter accounts.” [1] The remark arrives amid longstanding evidence that bots account for roughly half of all web activity, with analyses in 2016 and again in 2023 pointing to about 50% automated traffic. [3] At the same time, industry reporting in July 2025 warns that AI-powered search answers—flagged in Google statements on March 5, 2024—are reducing referral traffic to publishers. [2]

Key Takeaways

– Shows Sam Altman’s 4 September 2025 X post flagging ‘LLM‑run’ accounts, a sharp signal that dead internet concerns are no longer fringe.
– Reveals bots comprised about 50% of all web traffic in 2016 and again in 2023, per Imperva studies cited by experts.
– Demonstrates AI search shifts since March 5, 2024 statements, with July 2025 analyses warning of reduced referrals and rising zero‑click answers.
– Indicates LLM content has surged since ChatGPT’s late‑2022 launch, fueling authenticity debates and platform moderation gaps spotlighted by backlash.
– Suggests proof‑of‑personhood pushes, like Worldcoin, aim to verify humans at scale as dead internet anxieties intensify across 2025.

Altman’s 4 September 2025 comment and why it matters

Altman’s post—“i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now”—quickly ricocheted across X and tech media. The Independent framed it as a rare nod from a major AI executive toward a once-marginal theory, linking the observation to the rapid rise of machine-authored content since late 2022. [1] The timing matters: it coincides with growing public frustration over inauthentic accounts and the difficulty of distinguishing people from bots on large social platforms. [4]

Coverage in India and elsewhere emphasized the backlash to Altman’s comment, particularly amid wider criticism of platform moderation gaps that allow automated accounts to persist. The episode connects to a broader authenticity debate in which users face algorithmically amplified content whose human origin can be unclear or contested. [4] That debate is not hypothetical; it affects political discourse, consumer reviews, and even the visibility of news, where credibility cues compete with automation at scale. [1]

Measuring the dead internet: what the data shows

The “dead internet” label is colorful, but the measurable core is the volume of automated traffic. Imperva’s long-running bot traffic studies, cited by The Guardian, found approximately 50% of web traffic was automated as far back as 2016 and again in 2023. That level—roughly half—suggests users routinely interact with systems rather than people in many online venues. [3] Wikipedia’s overview of the theory similarly notes the ~50% figure and places Altman’s 2025 remark within that statistical context. [5]

Experts interviewed by The Guardian, including Toby Walsh, point to how algorithmic systems can create feedback loops where bots respond to bots, and ranking signals are gradually “inverted” by synthetic content. In that environment, measurable authenticity erodes: a steady 50% automation share means any given click, impression, or reply may be non-human. [3] While the exact composition of “good” versus “bad” bots varies by study and platform, the persistent share underscores a structural challenge rather than a passing anomaly. [3]

The statistical persistence from 2016 to 2023 matters because it suggests countermeasures and detection have not fundamentally changed the macro trend. Two datapoints seven years apart, separated by major platform shifts and AI breakthroughs, arrive at roughly the same fraction of automated traffic. That strengthens the argument that the web’s baseline now includes a large automated substrate that surfaces across search, social, and content networks. [5] As Altman’s comment implies, large language models may make that substrate more visible in public discourse. [1]

How AI search reshapes discovery and revenue

A July 2025 analysis argues that AI-driven “answers”—from products like Perplexity and Google’s evolving AI mode—are changing how users discover information, with publishers seeing measurable declines in referrals. This follows Google’s March 5, 2024 statements about the role of AI-generated summaries within search, a model that inherently reduces click-through opportunities by satisfying queries in-line. [2] For publishers, the shift compounds the dead internet problem: fewer human visits and more AI-mediated sessions mean weaker signals of genuine engagement. [2]

The Week quotes industry voices who worry that human-made content may be displaced by synthesized responses, even as AI systems are trained on publisher material. In purely quantitative terms, if a growing share of searchers receive answers without clicking, the visible portion of human-to-human interactions shrinks relative to machine intermediation. That redistributes traffic away from open-web pages and toward platform-contained experiences. [2] Reduced referrals also cloud audience measurement, complicating attribution and weakening the feedback loops that sustain high-quality reporting and niche communities. [2]

Dead internet signals across platforms and policy

Users have long reported waves of suspicious activity—sudden gluts of lookalike posts, accounts with improbable behavior patterns, and recycled text that reads like machine output. The Times of India’s coverage of Altman’s post highlights public anger at perceived moderation failures: industrial-scale inauthentic activity should, in principle, be detectable and its removal measurable at the account and network levels. That perception gap sharpens the intuitive appeal of the dead internet frame. [4] In policy terms, it pushes platforms to document detection efficacy and publish verifiable bot-removal statistics. [4]

Wikipedia’s evolving entry, last updated in September 2025, traces the theory’s arc from forums and 2021 magazine coverage to mainstream debate, now anchored to executives’ public statements. The entry synthesizes research, including Imperva’s ~50% automated share, and catalogs expert concerns about generative content displacing human material across feeds and search results. By situating these milestones chronologically—2016, 2021, 2023, 2025—it turns a vibe into a timeline. [5] That timeline helps distinguish anecdotal frustrations from a pattern visible in multiple datasets across years. [5]

Can proof-of-personhood fix the dead internet problem?

The Independent explicitly links Altman’s comment to parallel ventures in identity and authenticity, including Worldcoin’s proof-of-personhood efforts backed by eye-scanning hardware. The logic is straightforward: if bots are indistinguishable from people in text, then robust, voluntary verification could re-weight the web toward authenticated humans in at least some contexts. It’s an attempt to attach cryptographic scarcity—one person, one credential—to digital presence. [1] Yet identity schemes raise their own governance, privacy, and inclusion questions that any large-scale rollout must quantify and address. [5]

Even if verification solves some problems, adoption rates would determine impact. A world where 20%, 40%, or 60% of active accounts are verified would yield very different effects on discourse quality and recommendation systems. The debate is now about thresholds: what share of authenticated users is sufficient to bend the platform equilibrium back toward human-centric conversation? Those are the metrics authenticity pilots must report transparently to move beyond slogans. [1] Without such data, proof-of-personhood risks becoming another untested theory rather than a measurable fix. [5]

What a data-driven response should look like

The strongest path out of speculation is measurement. Platforms and search engines can publish standardized quarterly indicators: the share of automated traffic, the proportion of AI-origin content in feeds, the percentage of queries answered without clicks, and the ratio of verified-to-unverified active accounts. Those KPIs, tracked across 2016, 2023, 2024, and 2025 baselines, would make the dead internet conversation empirical. [3] Publishers, for their part, can report referral trends, time-on-page changes, and the variance between AI-mediated and direct traffic cohorts to clarify demand shifts. [2]
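To make the proposal concrete, the indicators above could be computed from event-level traffic logs. The sketch below is purely illustrative: the field names (`is_bot`, `type`, `clicked`) and the sample records are assumptions for demonstration, not any real platform’s schema.

```python
# Illustrative sketch of the transparency KPIs discussed above.
# All field names and figures are invented for demonstration.

def traffic_kpis(events):
    """Summarize automated-traffic share and zero-click search rate
    from a list of hypothetical event records."""
    total = len(events)
    searches = [e for e in events if e["type"] == "search"]
    automated = sum(1 for e in events if e["is_bot"])
    zero_click = sum(1 for e in searches if not e["clicked"])
    return {
        "automated_share": automated / total if total else 0.0,
        "zero_click_rate": zero_click / len(searches) if searches else 0.0,
    }

sample = [
    {"is_bot": True,  "type": "search", "clicked": False},
    {"is_bot": False, "type": "search", "clicked": True},
    {"is_bot": True,  "type": "social", "clicked": False},
    {"is_bot": False, "type": "search", "clicked": False},
]
# In this toy sample, half the events are automated and two of the
# three searches end without a click.
print(traffic_kpis(sample))
```

Published quarterly against fixed definitions, numbers like these would let outside researchers compare the 2016 and 2023 baselines with whatever comes next.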

Regulators and standards bodies could also encourage auditability. For example, requiring platforms to disclose how many suspected LLM-run accounts were removed each month, and how many evaded detection, would turn anecdote into rates and confidence intervals. While such proposals are nascent, they align with concerns raised by journalists and researchers who have tracked the steady ~50% automation share over the past decade. [3] If the problem is half-automated ecosystems, then the solution space must be judged by its ability to move that denominator. [5]
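Turning disclosed counts into rates with confidence intervals is routine statistics. As a minimal sketch, the standard Wilson score interval gives a defensible uncertainty band around a monthly bot-removal rate; the counts below are invented for illustration.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion, e.g. the
    share of suspected LLM-run accounts removed out of those flagged."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin, center + margin)

# Hypothetical monthly disclosure: 9,200 of 10,000 flagged accounts removed.
low, high = wilson_interval(9200, 10000)
print(f"removal rate 92.0%, 95% CI [{low:.4f}, {high:.4f}]")
```

With disclosures in this form, a month-over-month change in the removal rate could be judged against the interval width rather than argued anecdotally.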

Contextualizing Altman’s remark in 2025

Altman’s post is best read as a signal that even AI’s champions acknowledge the scale of machine output in public spaces. The comment follows nearly three years of mainstream generative AI, beginning with ChatGPT’s late-2022 public debut, during which synthetic text and images have saturated timelines and search snippets. That saturation has been documented by journalists since 2024, with growing expert concern about “algorithmic inversion” in content ranking. [3] In that sense, his 4 September 2025 remark is less a provocation than a confirmation. [1]

What remains uncertain is the speed of change across 2025–2026. Will the automated share rise above the ~50% plateau documented in 2016 and 2023, or will it stabilize as detection improves? Will AI answers continue to cannibalize clicks, or will publishers adapt through structured content and licensing? Those trajectories will determine whether the dead internet label becomes a durable diagnosis or a passing metaphor for a transitional era. [2] Either way, the next phase will be measured, not asserted. [5]

Reporting responsibly on the dead internet

Journalists covering this topic face two pitfalls: overstating causality and undercounting baselines. The best practice is to anchor stories in published figures—like the persistent “about 50%” automated share—and time-stamped statements, such as Google’s March 5, 2024 remarks and Altman’s 4 September 2025 post. That keeps analysis tied to verifiable points rather than vibes. [2] Equally important is separating platform-specific failures from systemic shifts wrought by generative AI and automation, both of which are moving in tandem. [4]

For audiences, the key is literacy about signals of authenticity and the limits of those signals. Verified identities can reduce uncertainty but cannot eliminate manipulation; AI answers can be useful but carry sourcing and attribution costs that affect the broader information ecosystem. Recognizing those trade-offs, and insisting on transparent metrics, is how the dead internet conversation can evolve from worry to action. [1] The debate only matures when numbers—not slogans—lead. [3]

Sources:
[1] The Independent – ChatGPT boss suggests the ‘dead internet theory’ might be correct: https://www.the-independent.com/tech/chatgpt-openai-dead-internet-theory-sam-altman-llm-b2820375.html
[2] The Week – Is AI killing the internet?: https://theweek.com/tech/is-ai-killing-the-internet
[3] The Guardian – TechScape: On the internet, where does the line between person end and bot begin?: https://www.theguardian.com/technology/2024/apr/30/techscape-artificial-intelligence-bots-dead-internet-theory
[4] Times of India – Man asks ChatGPT about ‘dead internet theory’, gets OpenAI CEO Sam Altman’s post making fun of Elon Musk’s Twitter as reply: https://timesofindia.indiatimes.com/technology/social/man-asks-chatgpt-about-dead-internet-theory-gets-openai-ceo-sam-altmans-post-making-fun-of-elon-musks-twitter-as-reply/articleshow/123695599.cms
[5] Wikipedia – Dead Internet theory: https://en.wikipedia.org/wiki/Dead_Internet_theory


