Are artificial intelligence companies keeping humanity safe from AI’s potential harms? Don’t bet on it, a new report card says.
As AI plays an increasingly large role in the way humans interact with technology, the potential harms are becoming more widespread: people using AI-powered chatbots for counseling and then dying by suicide, or using AI for cyberattacks. There are also future risks, such as AI being used to make weapons or overthrow governments.
Yet there are not enough incentives for AI firms to prioritize keeping humanity safe, and that’s reflected in an AI Safety Index published Wednesday by the Future of Life Institute, a Silicon Valley-based nonprofit that aims to steer AI in a safer direction and limit the existential risks to humanity.
“They are the only industry in the U.S. making powerful technology that’s wholly unregulated, so that puts them in a race to the bottom against each other where they just don’t have the incentives to prioritize safety,” the institute’s president, MIT professor Max Tegmark, said in an interview.
The highest overall grades given were only a C+, awarded to two San Francisco AI companies: OpenAI, which produces ChatGPT, and Anthropic, known for its AI chatbot model Claude. Google’s AI division, Google DeepMind, was given a C.
Ranking just below were Facebook’s Menlo Park-based parent company, Meta, and Elon Musk’s Palo Alto-based company, xAI, which were each given a D. Chinese firms Z.ai and DeepSeek also earned a D. The lowest grade went to Alibaba Cloud, which got a D-.
The companies’ overall grades were based on 35 indicators in six categories, including existential safety, risk assessment and information sharing. The index collected evidence from publicly available materials and from responses the companies provided through a survey. The scoring was done by eight artificial intelligence experts, a group that included academics and heads of AI-related organizations.
All the companies in the index ranked below average in the category of existential safety, which factors in internal monitoring and control interventions as well as existential safety strategy.
“While companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control,” according to the institute’s AI Safety Index report, using the acronym for artificial general intelligence.
Both Google DeepMind and OpenAI said they are invested in safety efforts.
“Safety is core to how we build and deploy AI,” OpenAI said in a statement. “We invest heavily in frontier safety research, build strong safeguards into our systems, and rigorously test our models, both internally and with independent experts. We share our safety frameworks, evaluations, and research to help advance industry standards, and we continuously strengthen our protections to prepare for future capabilities.”
Google DeepMind said in a statement that it takes “a rigorous, science-led approach to AI safety.”
“Our Frontier Safety Framework outlines specific protocols for identifying and mitigating severe risks from powerful frontier AI models before they manifest,” Google DeepMind said. “As our models become more advanced, we continue to innovate on safety and governance at pace with capabilities.”
The Future of Life Institute’s report said that xAI and Meta “lack any commitments on monitoring and control despite having risk-management frameworks, and have not presented evidence that they invest more than minimally in safety research.” Other companies, such as DeepSeek, Z.ai and Alibaba Cloud, lack publicly available documents about existential safety strategy, the institute said.
Meta, Z.ai, DeepSeek, Alibaba and Anthropic did not return requests for comment.
“Legacy Media Lies,” xAI said in a response. An attorney representing Musk did not immediately return a request for further comment.
Musk is also an adviser to the Future of Life Institute and has provided funding to the nonprofit in the past, but was not involved in the AI Safety Index, Tegmark said.
Tegmark said he’s concerned that if there is not enough regulation of the AI industry, it could lead to AI helping terrorists make bioweapons, manipulating people more effectively than it does today or even compromising the stability of governments in some cases.
“Yes, we have big problems and things are going in a bad direction, but I want to stress how easy this is to fix,” Tegmark said. “We just have to have binding safety standards for the AI companies.”
There have been legislative efforts to establish more oversight of AI companies, but some bills have received pushback from tech lobbying groups that argue more regulation could slow down innovation and cause companies to move elsewhere.
But there has been some legislation that aims to better monitor safety standards at AI companies, including SB 53, which was signed by Gov. Gavin Newsom in September. It requires businesses to share their safety and security protocols and report incidents such as cyberattacks to the state. Tegmark called the new law a step in the right direction, but said much more is needed.
Rob Enderle, principal analyst at advisory services firm Enderle Group, said he thought the AI Safety Index was an interesting way to approach the underlying problem of AI not being well regulated in the U.S. But there are challenges.
“It’s not clear to me that the U.S. and the current administration are capable of having well-thought-through regulations at the moment, which means the regulations could end up doing more harm than good,” Enderle said. “It’s also not clear that anybody has figured out how to put the teeth in the regulations to ensure compliance.”
