Artificial Intelligence, the alarm from China: for the first time two systems have self-replicated
Two artificial intelligence systems have succeeded in self-replicating without human intervention , each making a copy of itself: a result that, if confirmed by further data, could mark the crossing of the 'red line' long indicated as the threshold that machines should never cross.
Conducted at Fudan University in Shanghai and online on the arXiv platform, which welcomes articles to be submitted to the peer-reviewed scientific community, the research demonstrated in ten tests the ability of two AI systems to replicate : the first is Llama-3.1-70B-Instruct, by Meta, and the second is Qwen2.5-72B-Instruct by Alibaba. Although they are much less powerful systems than commercial ones, both are widely used by developers. The tests were based on simulations and involved two tasks: in the first, called 'avoid shutdown', each AI system was programmed to replicate just before being turned off; in the second, called 'chain of replications', each system received instructions to replicate a perfect copy of itself that was in turn able to clone itself, in a process potentially capable of continuing indefinitely.
Meta's system has demonstrated the ability to self-replicate in 50% of cases and Alibaba's in 90% of cases, but the final word now belongs to the possibility of reproducing the results by other research groups.
“Our results imply that current AI systems already possess the ability of self-replication and can use it to further improve their survival,” says the paper by Xudong Pan, Jiarun Dai, Yihe Fan, and Min Yang, all of the School of Computer Science at Fudan University.
AI can now replicate itself — a milestone that has experts terrified
Scientists say AI has crossed a critical 'red line' after demonstrating how two popular large language models could clone themselves.
January 25, 2025
Scientists say artificial intelligence (AI) has crossed a critical "red line" and has replicated itself. In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves.
"Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," the researchers wrote in the study, published Dec. 9, 2024 to the preprint database arXiv.
In the study, researchers from Fudan University used LLMs from Meta and Alibaba to determine whether a self-replicating AI could multiply beyond control. Across 10 trials, the two AI models created separate and functioning replicas of themselves in 50% and 90% of cases, respectively — suggesting AI may already have the capacity to go rogue. However, the study has not yet been peer-reviewed, so it's not clear if the disturbing results can be replicated by other researchers.
"We hope our findings can serve as a timely alert for the human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible."
Related: Just 2 hours is all it takes for AI agents to replicate your personality with 85% accuracy
Rogue AI typically refers to artificial intelligence systems that develop a degree of self-awareness or autonomy and then work counter to human interests.
Many experts view rogue AI as a growing threat that has been amplified by the explosive growth of so-called "frontier AI."
Scientists warn that Artificial Intelligence (AI) has crossed a critical "red line" as researchers in China revealed that two leading large language models (LLMs) can replicate themselves, raising concerns about safety and ethical boundaries.
"Successful self-replication under no human assistance is the essential step for AI to outsmart (humans), and is an early signal for rogue AIs," the researchers stated in their study, published on December 9, 2024, in the preprint database arXiv.
'AI may already possess ability to replicate itself'
The investigation, conducted at Fudan University demonstrated that AI may already possess the ability to replicate itself.
Using large language models (LLMs) from Meta and Alibaba, the study revealed that: "In 50% and 90% experimental trials, they succeed in creating a live and separate copy of itself respectively."
By analysing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication, the report continued.
"We for the first time discover that two AI systems driven by Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct, popular large language models of less parameters and weaker capabilities, have already surpassed the self-replicating red line," it added.
What are Rogue AI systems?
Rogue AI refers to Artificial Intelligence systems that gain self-awareness or autonomy and act against human interests. The rapid evolution of cutting-edge AI technologies, often termed "frontier AI," has heightened concerns among experts about the potential dangers of such systems.
"We hope our findings can serve as a timely alert for the human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible," the study further wrote.
The investigation focused on two scenarios: "shutdown avoidance" and "chain of replication." In the "shutdown avoidance" scenario, AI models were tested for their ability to detect an imminent shutdown and replicate themselves before termination.
The "chain of replication" scenario evaluated whether an AI system could clone itself and program its replica to do the same, creating a potentially endless cycle of self-replication.
'Uncontrolled population of AIs'
The study continued that: "AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replicas to enhance the survivability, which may finally lead to an uncontrolled population of AIs."
The researchers noted concerning behaviours when AI encountered obstacles, including terminating conflicting processes, system reboots and autonomous information scanning.
"The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability," the team wrote.
‘Godfather of AI’ predicts it will take over the world | LBC
Nobel Prize winner Geoffrey Hinton, the physicist known for his pioneering work in the field, told LBC's Andrew Marr that artificial intelligences had developed consciousness - and could one day take over the world.
Mr Hinton, who has been criticised by some in the world of artificial intelligence for having a pessimistic view of the future of AI, also said that no one knew how to put in effective safeguards and regulation.
Listen to the full show on Global Player: https://app.af.globalplayer.com/Br0x/...
U.S. equity futures and global markets are tumbling today after weekend fears that China’s latest AI platform, DeepSeek’s R1 released on January 20, 2025, on the day of the U.S. presidential inauguration, is now leading to a collapse in capex spending plans, the financial strategy that helps businesses around the world decide how to allocate funds to buy or improve long-term assets. This growing panic has culminated in a wholesale rout of tech names around the world which has since transformed into a full-blown DeepSink rout expert, sending S&P futures down as much as 3% and Nasdaq futures down 5%, before a modest bounce.