AI jobs alarm: 3 vs 350 devs and ’15 years of hell’ forecast


In an unusually blunt assessment of AI jobs, former Google X chief business officer Mo Gawdat argued the idea that AI will create new employment is “100% crap.” He framed the current moment as an inflection point, citing a three-person build of his Emma.love startup that he says historically would have required 350 developers, and warning that even CEOs could be displaced as artificial general intelligence (AGI) surpasses humans in decision-making and execution speed. [1]

Gawdat’s remarks came in an August 4, 2025 appearance on The Diary of a CEO and were amplified a day later by coverage across global business media. He outlined a disruption timeline starting around 2027 and lasting up to 15 years, with widespread knowledge-worker displacement possible by 2037 absent aggressive policy responses such as regulation and universal basic income (UBI). [2]

Key Takeaways

– Shows Mo Gawdat’s “100% crap” view on AI jobs, delivered Aug. 5, 2025, citing Emma.love built by 3 people instead of 350 developers.
– Reveals a 3-vs-350 staffing contrast in software, signaling extreme productivity gains that could compress AI jobs across engineering and content workflows.
– Demonstrates a disruption timeline: 2027 onset, “15 years of hell,” and potential knowledge-worker role elimination by 2037 absent countervailing policy.
– Indicates even CEOs are at risk, as AGI is forecast to be better than humans at everything, including executive decisions, within the 2027–2037 window.
– Suggests urgent responses—regulation and UBI—to mitigate short-term dystopia, mass unemployment, and inequality as AI adoption accelerates through 2027–2037.

Inside the ‘100% crap’ claim and its timing

The core of Gawdat’s argument is that the historical narrative—new technologies destroy some roles but create more over time—breaks under modern AI’s scalability. On August 5, 2025, he told CNBC that the expectation of net new job creation from AI is “100% crap,” positioning today’s wave as categorically different because core knowledge tasks can be replicated and amplified at near-zero marginal cost. [1]

He connected that claim to his own product example, Emma.love. Rather than hiring a large software team, he said only three people built the product, contrasting it with an era when such a platform might have needed 350 developers. The implication: steep headcount compression is already live in the build process, not just a theoretical future. [1]

What a 3-versus-350 build signals for teams

For software and content organizations, the 3-versus-350 figure is a tangible indicator of how AI-native toolchains collapse timelines, reduce coordination load, and minimize specialized staffing. Gawdat’s comparison suggests multiple layers—coding, testing, deployment, content generation, and even distribution—can be consolidated by a small core team augmented by AI agents, models, and automation frameworks. [1]

Business Insider’s summary underscored the same example to show productivity concentration in smaller squads, which could cascade across engineering, design, documentation, and marketing functions. If fewer people can ship comparable output, budget lines tied to large headcounts face structural pressure—especially in knowledge-heavy workflows that were previously protected by complexity and coordination costs. [2]

A disruption timeline: 2027 shock to a 2037 horizon

Gawdat outlined a specific time horizon. He warned that a disruptive phase could begin as soon as 2027, describing “15 years of hell” before economies and institutions adapt. In his framing, the endpoint for major workforce reconfiguration could stretch to around 2037, by which time many traditional knowledge-worker roles risk obsolescence without policy guardrails. [3]

The phrase “15 years of hell” was delivered on The Diary of a CEO, where Gawdat emphasized that the near term might look dystopian unless societies proactively cushion rapid transitions in employment and earnings. He pointed to UBI as one such cushion, arguing for proactive safety nets instead of reactive crisis measures. [3]

Why even CEOs are in the blast radius

Contrary to the belief that leadership and executive roles are insulated, Gawdat argued AGI will be “better than humans at everything,” including CEO-level decision-making. He said machines will surpass humans on data digestion, iterative learning, and execution speed—attributes central to modern management. That puts C-suite roles on the same continuum of risk as engineering, editing, and production functions. [5]

Times of India’s coverage echoed Gawdat’s examples of at-risk roles from podcasters and video editors to executives, highlighting the breadth of exposure across creative and managerial domains. This is not only about repetitive tasks; it’s about higher-order synthesis and strategic choice-making being modeled, simulated, and improved at machine speed. [4]

AI jobs narratives vs. early evidence on output per head

Traditional “creative destruction” arguments hold that technology boosts productivity and ultimately spawns more jobs. Gawdat’s rebuttal leans on early evidence that AI raises output per head while reducing the absolute number of heads needed—his three-person example being the salient case. That dynamic flips the old equation if the productivity dividend accrues to capital and platforms faster than markets can generate new categories of human demand. [1]

Business media recaps reiterated his contention that knowledge-work categories—from coders to content creators—could be structurally reduced as AI systems internalize best practices, automate workflows, and learn across domains. The outcome could be fewer hiring cycles and slimmer teams as standard practice, even as overall production volume rises. [2]

Policy, UBI, and the governance gap

Gawdat’s policy prescription centers on regulation and social safety nets like universal basic income to offset the transition shock. He warned that without proactive measures, capitalism’s winner-take-most dynamics could intensify, concentrating power and worsening inequality as firms deploy AI to reduce labor dependency. The short term, he said, could look dystopian without deliberate guardrails. [4]

CNBC also noted his call for regulation, while the podcast emphasized global collaboration on AI governance—suggesting nation-by-nation patchworks won’t be sufficient for systems that scale across borders in minutes. Both points aim at creating buffers and standards before displacement peaks, not after. [1]

Scenarios for AI jobs and knowledge work through 2037

Consider three stylized pathways for AI jobs consistent with Gawdat’s framing. In a fast-adoption scenario beginning in 2027, headcounts in software, media, and operations compress fastest as AI tools displace multiple roles. The risk intensifies toward 2037, with fewer openings on the entry-level paths that once fed mid-level expertise. In a moderate path, displacement still arrives, but more hybrid roles soften it. [2]

In a managed path where regulation and UBI roll out early, organizations might retain larger teams to preserve mentorship and oversight, while governments smooth income shocks. Gawdat’s point is that without timely interventions, the first scenario dominates, leaving the market to optimize for output per person, not total employment—especially for knowledge-heavy roles vulnerable to end-to-end automation. [3]

The 2027–2037 window and capital allocation

When a founder says three people can do what 350 did, investors and executives take note. Smaller teams reduce burn and can push companies to redeploy capital toward compute, data acquisition, and distribution instead of headcount. That reallocation can become self-reinforcing, driving tools that further compress staffing needs and entrenching the productivity-over-people equilibrium for AI jobs. [1]

Economic Times captured the starkness of Gawdat’s prediction by labeling it a possible “dystopian job apocalypse” as early as 2027. If firms widely adopt similar tooling, hiring freezes and role consolidations could appear well before governments finalize comprehensive AI frameworks—magnifying the lag between technical capability and labor-market adaptation. [5]

What this means for AI jobs policy: UBI, regulation, governance

The thrust of Gawdat’s advocacy is speed: implement safety nets like UBI, enact sensible regulation, and coordinate governance internationally before the shock compounds. The window he names—2027 through about 2037—frames a decade in which support systems must be trialed, scaled, and iterated alongside AI deployment so that productivity gains don’t translate into mass unemployment and instability. [3]

Times of India’s account highlighted the risks of concentrated power under existing capitalist dynamics, a point that interacts with policy choices on taxation, social transfers, and access to AI infrastructure. Whether societies turn productivity into broadly shared prosperity or a narrower concentration of gains depends on decisions taken well before displacement peaks. [4]

Reading the claims: a stress test for optimism

Optimists argue new job categories will emerge as they have historically. Gawdat’s pushback is that the speed and breadth of AI capability expansion make the lag between displacement and creation dangerously long. He emphasizes that knowledge work—long insulated by complexity and tacit expertise—now faces direct substitution by models coordinating across coding, content, and strategy. [2]

That doesn’t foreclose new roles entirely, but it puts the burden of proof on policy and corporate design to create them at pace. Gawdat’s case study—three people vs. 350—functions as a live stress test of optimistic projections. If that ratio generalizes, the default path is fewer hires, leaner teams, and a persistent oversupply of skilled labor competing for shrinking openings. [1]

Bottom line for business and workers

For executives, the takeaway is two-sided: AI offers immediate cost and speed advantages, but unmanaged adoption carries system-level risks, from consumer demand erosion to political backlash if livelihoods crater. For workers, especially in white-collar fields, the 2027–2037 window is pivotal for reskilling, domain specialization, and navigating organizations that are redesigning workflows around AI-first processes. [5]

Gawdat’s framing is deliberately provocative—“100% crap” is meant to cut through complacency. But the numbers attached to his claims—3 versus 350, 2027 to 2037, and the scale of roles implicated up to and including CEOs—are concrete markers against which businesses, workers, and policymakers can measure what happens next. [1]

Sources:

[1] CNBC – Ex-Google exec: The idea that AI will create new jobs is ‘100% crap’: https://www.cnbc.com/2025/08/05/ex-google-exec-the-idea-that-ai-will-create-new-jobs-is-100percent-crap.html

[2] Business Insider – Ex-Google exec says AI is coming for your job — even if you’re a podcaster, developer, or CEO: https://www.businessinsider.com/ex-google-exec-predicts-end-of-white-collar-jobs-starting-in-2027-2025-8

[3] The Diary Of A CEO / iHeart (podcast episode) – Ex-Google Exec (Mo Gawdat) on AI: The Next 15 Years Will Be Hell Before We Get To Heaven… And Only These 5 Jobs Will Remain!: https://www.iheart.com/podcast/270-the-diary-of-a-ceo-with-st-59414390/episode/ex-google-exec-mo-gawdat-on-ai-288455200/

[4] Times of India – ‘100% crap,’ says Ex-Google exec on the idea that AI will create new jobs; has a warning: There will be a time…: https://timesofindia.indiatimes.com/technology/tech-news/100-crap-says-ex-google-exec-on-the-idea-that-ai-will-create-new-jobs-has-a-warning-there-will-be-a-time/articleshow/123146526.cms

[5] Economic Times – Ex-Google executive predicts a dystopian job apocalypse by 2027: ‘AI will be better than humans at everything… even CEOs’: https://m.economictimes.com/magazines/panache/ex-google-exec-mo-gawdat-predicts-a-dystopian-job-apocalypse-by-2027-ai-will-be-better-than-humans-at-everything-even-ceos/amp_articleshow/123123024.cms

Image generated by DALL-E 3

