Joel P. Barmettler

AI Architect & Researcher

2024 · Commentary

AI dystopia 2035 (part 1 of 2)

Knowledge workers currently make up the majority of the Swiss workforce. By 2035, under pessimistic but technically plausible assumptions, most of their tasks could be automated. This episode constructs a dystopian scenario, not as prediction but as stress test, to examine where current AI trajectories lead if the worst-case dynamics play out unchecked.

The hollowing out of knowledge work

The first casualty in this scenario is the substance of white-collar employment. AI does more than eliminate jobs; it drains meaning from the jobs that survive. The roles that remain are supervisory: humans reviewing AI-generated reports, approving AI-drafted contracts, spot-checking AI-produced analyses. The creative and intellectually demanding components of work, the parts that provide professional identity and satisfaction, are handled by the machine. What is left is the cognitive equivalent of assembly-line oversight. The psychological toll of this shift is substantial: people trained for complex problem-solving find themselves reduced to approving outputs they can barely evaluate.

Personalized information silos

In the private sphere, AI-driven content generation creates a world where no two people see the same information. Every article, video, and social media post is generated in real time, tailored to the individual's psychological profile, preferences, and behavioral history. The shared epistemic commons that underpins democratic discourse disappears. Without common reference points, even basic political conversations become impossible because participants inhabit entirely different informational realities. People grow accustomed to the frictionless optimization of AI-mediated interaction; real human conversation, with its awkward pauses and misunderstandings, feels unsatisfying by comparison.

Power concentration at the AGI bottleneck

The most structurally dangerous element is the concentration of economic power. If a single company achieves a decisive AGI breakthrough, it becomes the utility layer of the entire economy. Every business, from logistics to healthcare to finance, depends on its API. That company becomes the "lifeblood" of global commerce, a position more powerful than any government or historical monopoly. The dependency is total: cutting off access to the AGI provider is not a competitive inconvenience but an existential threat to any business that relies on it.

Technocratic governance by default

Economic dependency translates into political power. The leadership of the dominant AI company, perhaps fifteen people on a safety or ethics board, makes decisions that affect billions without democratic mandate. Their choices about what the AI will and will not do, which use cases to allow, which values to encode, become de facto policy. Elected governments find themselves negotiating with, rather than regulating, a private entity whose economic leverage exceeds their own. AI systems can subtly shape public opinion at scale, making traditional democratic accountability mechanisms insufficient.

The stagnation of human-driven research

When AI systems operate orders of magnitude faster than human researchers, the motivation for independent inquiry erodes. A calculation that takes a human team months is completed in milliseconds. Scientific fields become dependent on AI for computation, hypothesis generation, experimental design, and interpretation. Humanity shifts from producer to consumer of knowledge. The long-term risk is a civilization that has lost the capacity to understand, let alone improve, its own foundational technologies.

What this scenario gets right and wrong

The technological premises of this dystopia are grounded in current trajectories. Individual elements, such as job displacement in knowledge work and power concentration among AI providers, are already observable in early form. Taken as a composite, the scenario is deliberately extreme, but its value lies in identifying which dynamics require active intervention. This thought experiment excludes science-fiction tropes like AI developing consciousness or pursuing independent goals. The threat it describes is entirely human: the concentration of transformative technology in too few hands, governed by too few people, with too little accountability.

Frequently asked questions

What labor market changes could AI cause by 2035?

By 2035, massive job losses could occur, particularly among knowledge workers, who currently make up the majority of the Swiss workforce. Remaining jobs might be reduced to monitoring AI outputs, stripping away the creative and fulfilling aspects of work while leaving humans as quality controllers for automated systems.

How could AI change interpersonal communication by 2035?

AI could drive extreme individualization through personalized content bubbles, where every piece of media is generated in real time for a single user. Interpersonal communication could atrophy as people grow accustomed to optimized AI interaction and find real conversations shallow and unsatisfying by comparison.

What power concentration risks does AI development pose?

A company that achieves the decisive breakthrough toward artificial general intelligence could become the "lifeblood" of the entire economy, with every other business dependent on its AI services. This would concentrate economic and political power in the hands of a very small group of people.

How could AI affect democratic processes?

A technocratic shift could occur where the leadership of AI companies, perhaps fifteen people on an ethics committee, becomes de facto more influential than democratically elected governments. AI systems could subtly shape public opinion and influence political decisions at scale.

What impact could AI have on innovation and research?

Research and innovation could stagnate as AI systems work orders of magnitude more efficiently than humans. The motivation for independent research may decline when AI can perform complex calculations in milliseconds, potentially turning humanity into a passive consumer of its own technological creations.

How realistic is the dystopian AI scenario for 2035?

The technological foundations for this scenario already exist or are within reach. While individual aspects could plausibly occur, the full scenario is deliberately extreme. The real danger lies not in AI becoming autonomous, but in power concentration among the humans who control the technology.

