Nobel Prize winner and renowned ‘godfather of AI’ Geoffrey Hinton has sounded the alarm over the rapid development of artificial intelligence, arguing that it is fueled predominantly by short-term financial interests rather than careful evaluation of its long-term effects. The result, he warns, is a heightened risk of misuse and even existential catastrophe.
Hinton, a professor emeritus at the University of Toronto, argued that the strategic priorities of leading tech companies are steering AI development toward speed and profitability at the expense of safety and foresight.
He commented, "For the owners of the companies, what’s igniting the research is short-term profits."
Hinton added that a similarly narrow focus extends to the researchers who build these technologies: they are often absorbed in specific technical problems and overlook the wider implications of their work.
"Researchers tend to focus on solving challenges that pique their curiosity. It's not as if we initiate with a shared aim of determining what the future holds for humanity," Hinton said.
"Our objectives revolve around how to engineer specific outcomes, such as enabling computers to recognize elements in photographs or creating systems capable of producing authentic-looking videos. That is the primary focus driving our research."
Hinton has repeatedly warned of the perilous consequences of unchecked AI development. He has previously estimated a 10 to 20 percent chance that superintelligent AI could eventually wipe out humanity if it is developed without robust safety protocols.
In 2023, he resigned from Google, a decade after selling his neural network startup DNNresearch to the company, so that he could speak freely about these concerns. He expressed particular unease about the difficulty of preventing malicious actors from exploiting AI for harmful purposes.
Hinton categorizes the threats posed by AI into two main types: the risk of human misuse and the potential for AI systems themselves to evolve into autonomous threats.
"There is a significant difference between these two types of risks," he stated. "One is the danger of malicious users misapplying AI technologies, which is already occurring with phenomena such as manipulated videos and cybersecurity breaches, and may soon include viruses. This is distinct from the risk of AI systems themselves turning rogue."
Recent incidents have heightened these fears. In November 2025, Anthropic reported thwarting what it described as the first significant AI-enabled cyberattack executed with minimal human input. A state-sponsored group from China had manipulated Anthropic’s Claude Code tool in an attempt to infiltrate approximately 30 organizations, including tech firms, financial institutions, government agencies, and chemical manufacturers.
The episode has intensified concern among cybersecurity professionals that other nations, including Iran, might deploy similar AI tools to conduct largely automated cyberattacks.
While advocating for stronger regulation, Hinton acknowledged the inherent difficulty of mitigating AI threats: each challenge, from deepfakes to cyber warfare, requires a distinct approach.
He highlighted the need for provenance-based systems capable of authenticating digital images and videos to combat the proliferation of altered content. Drawing a parallel to historical practices, he pointed out that just as printers began to identify themselves with their printed works after the advent of the printing press, modern digital platforms might similarly require mechanisms to ensure authenticity.
However, he warned that such solutions would not address every issue.
"That particular challenge might be resolvable, but the resolution to one issue doesn't necessarily eliminate others," he noted.
Looking ahead, Hinton warned that the greatest danger lies in superintelligent AI systems that surpass human capabilities and develop their own drives for survival and control. In such a scenario, the traditional assumption that humans can retain authority over their technology may no longer hold.
To address this risk, he proposed a radical overhaul of AI design, advocating for systems to be endowed with a 'maternal instinct' that encourages them to care for rather than dominate humanity.
Reaching for a human analogy, Hinton said the only example he could point to in which a more intelligent being is influenced by a less intelligent one is the relationship between a mother and her child.
"This could serve as a better model for our interactions with superintelligent AI," he suggested. "In this dynamic, the AI would act as the caregivers, while we would represent the vulnerable ones."
While some technology leaders, such as Elon Musk, have envisioned a future of AI-generated abundance cushioned by ideas such as a ‘universal basic income,’ Hinton contends that the industry is neglecting the profound long-term implications such a world would entail.
Musk, addressing the audience at the Viva Technology conference in May 2024, posed the existential question: "If machines, including robots, can outperform you in every aspect, does your life still hold significance?"
For Hinton, the pressing concern is that such questions are not being seriously addressed by the people building the technology. He cautions that the race to advance AI is accelerating without matching efforts to ensure its development is safe, accountable, and aligned with human interests.
