Monday, December 4, 2023

Contrary to reports, OpenAI probably isn't building humanity-threatening AI


Has OpenAI invented an AI technology with the potential to “threaten humanity”? From some of the recent headlines, you might be inclined to think so.

Reuters and The Information first reported last week that several OpenAI staff members had, in a letter to the AI startup’s board of directors, flagged the “prowess” and “potential danger” of an internal research project known as “Q*.” This AI project, according to the reporting, could solve certain math problems (albeit only at grade-school level) but had, in the researchers’ opinion, a chance of building toward an elusive technical breakthrough.

There’s now debate as to whether OpenAI’s board ever received such a letter; The Verge cites a source suggesting that it didn’t. But the framing of Q* aside, Q* in fact may not be as monumental, or as threatening, as it sounds. It may not even be new.

AI researchers on X (formerly Twitter), including Meta’s chief AI scientist Yann LeCun, were immediately skeptical that Q* was anything more than an extension of existing work at OpenAI and at other AI research labs besides. In a post on X, Rick Lamers, who writes the Substack newsletter Coding with Intelligence, pointed to an MIT guest lecture OpenAI co-founder John Schulman gave seven years ago during which he described a mathematical function called “Q*.”

Several researchers believe the “Q” in the name “Q*” refers to “Q-learning,” an AI technique that helps a model learn and improve at a particular task by taking, and being rewarded for, specific “correct” actions. Researchers say the asterisk, meanwhile, could be a reference to A*, an algorithm for searching the nodes that make up a graph and exploring the routes between those nodes.

Both have been around a while.
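For readers unfamiliar with the technique, the core Q-learning update can be sketched in a few lines of Python. The toy “corridor” environment below is purely illustrative (the states, rewards, and hyperparameters are invented for this example) and has no connection to whatever Q* actually is:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Toy environment: states 0..4 along a corridor; reward 1 for reaching state 4.
# Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: Q[state][action]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):                        # training episodes
    s = random.randrange(N_STATES - 1)      # start from a random non-goal state
    for _ in range(100):                    # cap episode length
        # Epsilon-greedy action choice: mostly exploit, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2, r, done = step(s, a)
        # The Q-learning update: nudge Q[s][a] toward the observed reward
        # plus the discounted best value available from the next state.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# After training, the greedy policy moves right (1) from every non-goal state.
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(N_STATES - 1)]
```

The single update line is the whole algorithm; everything else is scaffolding for the toy environment.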

Google DeepMind applied Q-learning to build an AI algorithm that could play Atari 2600 games at human level… in 2014. A* has its origins in an academic paper published in 1968. And researchers at UC Irvine a few years ago explored improving A* with Q-learning, which might be exactly what OpenAI is now pursuing.
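A* itself is similarly compact. The sketch below is a minimal, generic implementation on a small grid, again purely illustrative and unrelated to any OpenAI work; the grid and coordinates are made up for the example:

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2-D grid of 0 (free) / 1 (wall) cells, 4-connected moves.

    The heuristic is Manhattan distance, which never overestimates the true
    remaining cost on this grid, so A* is guaranteed to return a shortest path.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    # Priority queue ordered by f = g + h (cost so far + heuristic estimate).
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            nxt = (r, c)
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = a_star(grid, (0, 0), (2, 0))  # route around the wall via (1, 2)
```

The UC Irvine line of work mentioned above, roughly, uses learning to improve the heuristic `h`; in this sketch it is simply hard-coded.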

Nathan Lambert, a research scientist at the Allen Institute for AI, told TechCrunch he believes that Q* is linked to approaches in AI “mostly [for] studying high school math problems,” not destroying humanity.

“OpenAI even shared work earlier this year improving the mathematical reasoning of language models with a technique called process reward models,” Lambert said, “but what remains to be seen is how better math abilities do anything other than make [OpenAI’s AI-powered chatbot] ChatGPT a better code assistant.”

Mark Riedl, a computer science professor at Georgia Tech, was equally critical of Reuters’ and The Information’s reporting on Q*, and of the broader media narrative around OpenAI and its quest toward artificial general intelligence (i.e. AI that can perform any task as well as a human can). Reuters, citing a source, implied that Q* could be a step toward artificial general intelligence (AGI). But researchers, including Riedl, dispute this.

“There’s no evidence that suggests that large language models [like ChatGPT] or any other technology under development at OpenAI are on a path to AGI or any of the doom scenarios,” Riedl told TechCrunch. “OpenAI itself has at best been a ‘fast follower,’ having taken existing ideas … and found ways to scale them up. While OpenAI hires top-rate researchers, much of what they’ve accomplished could be done by researchers at other organizations. It could also be done if OpenAI researchers were at a different organization.”

Riedl, like Lambert, didn’t guess at whether Q* might entail Q-learning or A*. But if it involved either, or a combination of the two, it’d be consistent with the current trends in AI research, he said.

“These are all ideas being actively pursued by other researchers across academia and industry, with dozens of papers on these topics in the last six months or more,” Riedl added. “It’s unlikely that researchers at OpenAI have had ideas that haven’t also been had by the substantial number of researchers also pursuing advances in AI.”

That’s not to suggest that Q*, which reportedly had the involvement of Ilya Sutskever, OpenAI’s chief scientist, might not move the needle forward.

Lamers asserts that, if Q* uses some of the techniques described in a paper published by OpenAI researchers in May, it could “significantly” boost the capabilities of language models. Based on the paper, OpenAI might’ve discovered a way to control the “reasoning chains” of language models, Lamers says, enabling them to guide models to follow more desirable and logically sound “paths” to reach outcomes.

“This would make it less likely that models follow ‘foreign to human thinking’ and spurious patterns to reach malicious or fallacious conclusions,” Lamers said. “I think this is actually a win for OpenAI in terms of alignment … Most AI researchers agree we need better ways to train these large models, such that they can more efficiently consume information.”

But whatever emerges of Q*, it, and the relatively basic math equations it solves, won’t spell doom for humanity.


