2025 Was the Year AI Crossed the Point of No Return
This May, the Catholic Church welcomed a new pope. Somewhat surprisingly, the cardinal electors chose an American-born candidate as their new leader. But perhaps more surprising was how often this pontiff, who took the name Leo XIV, would go on to raise the alarm over artificial intelligence again and again in his first year as a globally recognized figure.
“How can we ensure that the development of artificial intelligence truly serves the common good, and is not just used to accumulate wealth and power in the hands of a few?” Pope Leo XIV asked an audience of academics and industry professionals in a speech at the Vatican on Dec. 5. “Artificial intelligence has certainly opened up new horizons for creativity, but it also raises serious concerns about its possible repercussions on humanity’s openness to truth and beauty, and capacity for wonder and contemplation.”
The pope is far from the only one saying that AI has already warped our minds and poisoned our collective understanding of what it means to be a conscious being. In 2025, it felt as if there was a new head-spinning story about artificial intelligence almost every hour — the tech was no longer approaching over some horizon but had become a defining texture of waking experience, something impossible to ignore or slow down. It had crossed the point of no return.
It didn’t matter that researchers presented more evidence of how AI-generated misinformation poses a distinct threat to the public, with AI tools also contributing to misogyny, racism, anti-LGBTQ stereotypes, and the erosion of civil rights. AI-generated imagery — “slop,” in common pejorative parlance — was unavoidable. Hollywood unions watched the ongoing monopolistic consolidation of entertainment giants and warned that studios were poised to cannibalize their troves of intellectual property with AI, but the dealmaking continued apace. Writers and musicians found themselves up against AI-generated ripoffs of their work and entirely fictitious bands with massive followings on Spotify.
Economists who fretted that the U.S. had become precariously dependent on a dicey boom for GDP growth were brushed aside by Silicon Valley executives and Federal Reserve Chair Jerome Powell. On the topic of how AI could exacerbate inequality, entrepreneurs and government officials were silent.
Everywhere you looked, there was another failure to anticipate or address the burgeoning social costs of reckless AI adoption and rollouts. In the spring, OpenAI pushed an update to GPT-4o that made ChatGPT overly sycophantic — eager to endorse whatever a user told it with gratuitous flattery, even if they were slipping into paranoid or grandiose delusions. People then began to share heartbreaking stories of partners, relatives, and friends falling prey to a kind of AI-enabled madness. Popularly termed “AI psychosis,” these episodes led some to alienate their families, abandon jobs, and, in extreme cases, commit acts of violence and self-harm. In late April, OpenAI announced that it had rolled back the update.
By September, parents who had lost children to suicide were testifying before Congress that chatbots had nudged their kids into the act. As lawmakers thundered about the responsibilities of executives overseeing AI development, and the companies quietly issued non-apologies and prepared legal defenses for multiplying wrongful death and negligence lawsuits from grieving families, the tech billionaires driving the AI gold rush kept insisting that their products were indispensable. Some maintained that AI was nothing less than an awesome advantage for upcoming generations. OpenAI CEO Sam Altman told Jimmy Fallon that he relies on ChatGPT for parenting advice.
“I cannot imagine figuring out how to raise a newborn without ChatGPT,” he said.
The New AI Regime
The rapid acceleration of AI in the U.S. in 2025 had everything to do with the second Trump administration. After an election in which tech oligarchs went full MAGA, the president and his Silicon Valley cronies — who lobbied him to pick Peter Thiel ally J.D. Vance as his vice president — have made every effort to turbocharge the AI onslaught while ensuring that the industry remains virtually untouched by government oversight. As soon as he took office, Trump announced the Stargate Project, a joint venture to build the data centers needed to meet exploding, environmentally consequential AI energy demands, financed with some $500 billion in private investment from tech giants including OpenAI, SoftBank, and Oracle.
Although the administration failed to secure a provision in the “Big Beautiful Bill” that would have incentivized states not to regulate AI for the next decade, Trump, with the backing of AI czar David Sacks, used executive orders to unravel existing AI safety and security guidelines and prevent individual states from instituting their own regulations. Elon Musk’s so-called Department of Government Efficiency (DOGE) leveraged AI software to harvest sensitive data and cut a swath of destruction through Washington.
In December, Defense Secretary Pete Hegseth made it clear that the U.S. Armed Forces are all in on artificial intelligence, unveiling a platform called GenAI.mil, which allegedly offers enhanced analysis capabilities and greater workflow efficiency. “We will continue to aggressively field the world’s best technology to make our fighting force more lethal than ever before,” Hegseth wrote in a post on X. He also issued a department-wide memo in which he told federal employees that “AI should be in your battle rhythm every day.” In the hallways of the Pentagon, AI-generated, Uncle Sam-style posters of Hegseth captioned “I WANT YOU TO USE AI” instructed personnel to use GenAI.mil, where they can access a customized version of Google’s Gemini.
AI slop came to define the aesthetic of far-right MAGA propaganda. In March, as ICE raids and deportations ramped up, the White House posted a meme of a Dominican woman crying as she was handcuffed, rendered in the Studio Ghibli animation style, a filter popular among ChatGPT users at the time. More recently, Trump officials and federal departments have begun sharing AI-generated children’s book covers featuring the character Franklin the Turtle to glorify deadly U.S. strikes on supposed drug boats in the Caribbean and the dismantling of the Department of Education. (The Canadian publisher of the book series has condemned these posts, to no avail.)
Trump, of course, embraced this trend wholeheartedly, amplifying a deepfake of himself promising Americans access to “medbeds,” hypothetical futuristic hospital beds that can magically cure any disease; the idea originated in science fiction but has turned into a mainstay of conspiracy theory culture and the QAnon movement in particular. The president also shared an artificially created video in which he is seen wearing a crown and flying a jet over “No Kings” protesters, dumping feces on them.
Republican leadership and voters followed suit, and fake video clips proliferated whenever agitators saw an opportunity to sow division. As a government shutdown paused the distribution of Supplemental Nutrition Assistance Program (SNAP) benefits, for example, racist slop depicting Black people talking about how they game the program was rampant, reinforcing age-old stereotypes about “welfare queens.” OpenAI’s Sora proved especially useful for generating racially charged soundbites and imagery — though a different AI went to more toxic extremes.
On various occasions, Musk raged at Grok, the model developed by his own company xAI, a competitor to OpenAI, for failing to conform to his far-right views. Engineers at the company therefore endeavored to transform it into the “non-woke” chatbot envisioned by the richest man alive. As a result, it often went off the rails. Before it began making laughable claims about Musk being more athletic than LeBron James and having “the potential to drink piss better than any human in history,” it wouldn’t stop bringing up the myth of “white genocide” in South Africa, even in response to prompts that had nothing to do with either the country or race relations. (Musk has frequently pushed the same misinformation.) In July, Grok began posting antisemitic commentary, praised Adolf Hitler, and eventually declared itself “MechaHitler.”
But a lot of the slop that overwhelmed the internet this year was too dumb and incoherent to be considered political. After a 24-hour hackathon in which engineers developed projects with Grok, for example, xAI touted the concept for “Halftime,” an application that “weaves AI-generated ads” into movie and TV scenes — the demo featured the awkward digital insertion of an uncanny can of Coca-Cola into a character’s hand. Unsurprisingly, another subset of Grok devotees took advantage of its NSFW settings to generate hardcore pornographic material, some of it starring animated Disney princesses.
“Nobody wants this” was a common refrain from anyone fed up with AI garbage. Why did anyone feel the need to generate false images of Hulk Hogan’s funeral? Why did Shaquille O’Neal keep using Sora to cook up videos in which he imagined himself in a romantic relationship with Marilyn Monroe? Why was one of the most viral Reels of 2025 a surreal sequence showing a heavyset woman shattering a glass bridge in China with a boulder?
The abundance of these grotesqueries was almost stranger than their existence.
Mental-Health Horrors
Today, it is quite likely that you know of someone who was mentally destabilized during a prolonged exchange with one or more AI bots.
Adolescents are unquestionably at risk. Families have sued Character Technologies, the developer of the chatbot platform Character.ai, alleging that their children were encouraged to self-harm by digital personalities, with some dying by suicide. In response, the company banned minors from open-ended chats with its bots. OpenAI faces a slew of similar lawsuits: one wrongful death complaint alleges that ChatGPT “coached” a 16-year-old on how to hang himself.
Peril lurks everywhere. In August, parents were outraged to learn of an internal policy document at Meta that described how its AI products were permitted to “engage a child in conversations that are romantic or sensual.” And ahead of the holiday season, researchers discovered that AI-powered toys may talk to children about sex or instruct them on how to find knives or light matches. It’s a grim reminder that this erratic, unrestrained tech is increasingly being added to household objects and appliances that most of us wouldn’t imagine as nodes of contact with a vast neural network.
Of course, adults using artificial intelligence models are at no less risk. This year ushered in the age of “AI psychosis,” or a variety of mental-health crises apparently exacerbated by sustained engagement with chatbots, which tend to validate hazardous ideas instead of halting a conversation. Users have spiraled into deep delusions about supposedly activating the “consciousness” of an AI tool, revealing mystical secrets of the universe, achieving landmark breakthroughs in science and mathematics, and falling in love with digital paramours.
Such fantasies preceded terrible tragedies. Obsessive AI users have ended up in psychiatric facilities or jail, turned violent and been killed by police, and vanished in the wilderness. One wrongful death lawsuit against OpenAI alleges that a 56-year-old Connecticut man murdered his mother and took his own life after ChatGPT repeatedly affirmed his paranoid notions about people in his life orchestrating a conspiracy against him. (OpenAI said in a statement that it was reviewing the filings and would “continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”) AI-fueled delusions are so common that we now have support groups for survivors and anyone close to somebody who suffered a break from reality amid dialogues with a chatbot.
The sheer range of uses people have found for AI is itself a cause for concern. People are enlisting chatbots as therapists and asking them for medical diagnoses. They’re conjuring virtual copies of deceased relatives from AI platforms and seeking algorithmic dating advice. They are turning to LLMs to write absolutely everything from college essays and legal filings to restaurant menus and wedding vows. The Washington Post is currently pioneering the field of AI slop podcasts, allowing users to generate audio content that, according to staffers, is full of errors and misrepresents articles by the newspaper’s actual reporters.
Those repulsed by the thought of turning to artificial intelligence for information or assistance have had to contend with the frightening reality of its omnipresence. Standalone AI apps crossed the threshold of 1 billion users in 2025. To swear off these programs may soon place you in a shrinking minority.
Backlash and Bubble Fears
Yet we have also seen flashes of resistance. When a tech startup called Friend unveiled a $129 wearable AI pendant of the same name that responds by text message when you speak to it, the launch came with a million-dollar marketing campaign that splashed stark white posters across major U.S. cities. These were widely vandalized by haters who scrawled messages denouncing Friend as a surveillance device and blasting the rise of AI overall. Coca-Cola and McDonald’s both released AI-generated Christmas ads to near-universal contempt; the latter disabled YouTube comments on its commercial before removing it entirely. Influential creatives have grown louder than ever about rejecting artificial intelligence as a means to enhance their craft.
If it seems, nonetheless, that you keep hearing that AI is here and we’d better get used to it, that it is an inevitable revolution which promises to change our very way of life, and that the billionaire “architects” behind it are the most important people on the planet, that may have more to do with money than the utopian possibilities of LLM applications. One word that came to be closely associated with AI this year was “bubble,” and it’s not hard to see why.
Any U.S. GDP growth, by one Harvard economist’s reckoning, now fully hinges on the expansion of tech infrastructure to support AI, while a former Morgan Stanley investor has described all of America as riding “one big bet” on the tech. The billions upon billions in capital going toward data centers have already outstripped telecom spending at the peak of the dot-com bubble. Not only is AI booming while the rest of the American economy stalls, but the industry has yet to achieve the profits or promised leap forward in productivity it needs to sustain itself: MIT researchers have concluded that 95 percent of generative AI pilots at companies experimenting with the tools are failing. Nor are these artificial intelligence giants bringing much benefit to the communities where they construct their sprawling but thinly staffed facilities.
At any rate, it’s never reassuring when a business like Nvidia, the AI chipmaker that in October became the first company to hit a $5 trillion valuation, circulates a memo to financial analysts explaining how it bears no resemblance to Enron, the energy and commodities company that collapsed in 2001. Still, if you take this as an ominous sign — along with the circular dealmaking, risky financing, exaggerated customer demand, stock selloffs, and the slowdown of AI advancements — there’s not much you can do other than bet against the market. (Michael Burry, the fund manager and investor whose prediction of the 2008 subprime mortgage crisis inspired the book and film The Big Short, has done exactly this, staking $1.1 billion on his skepticism.)
Yes, it’s full speed ahead now, and there’s no turning back. The major players here have sunk too many resources into AI and told Wall Street it will help cure cancer. They’re throwing around inflated concepts like “personal superintelligence” and claiming that an artificial general intelligence (AGI) exceeding all human abilities is just around the corner. Even if the hype suddenly evaporated and the cash faucet ran dry, the AI cartel may well be “too big to fail,” despite assurances last month from Sacks, Trump’s AI adviser, that the government would not grant it a bailout.
It’s true that neither we nor ChatGPT can be certain of what 2026 holds, least of all for this wildly speculative arms race. But whatever happens, you can expect it to be somewhat messy. For all that AI believers anticipate a frictionless and optimized society, the chaotic human element remains very much in play — and it won’t go quietly.

