Artificial General Intelligence (AGI) vs. Artificial Superintelligence (ASI)
Comparative Definitions from Various Sources
The table below compiles up-to-date definitions of AGI and ASI from a range of sources, including AI research labs, academic and government institutions, and independent experts. Taken together, they show that AGI is generally understood as human-level general intelligence, whereas ASI denotes intelligence far beyond human capabilities.
Source | Definition of AGI | Definition of ASI |
---|---|---|
OpenAI (Charter, 2018) | “Highly autonomous systems that outperform humans at most economically valuable work.” (arxiv.org) | Not explicitly defined by OpenAI; implied to be any future AI far beyond human-level intelligence. |
DeepMind (Google), 2023 | “Artificial general intelligence is commonly defined as AI that can perform any intellectual task a human can” (deeplearning.ai), i.e. human-level breadth of capability. | Level 5 “superhuman” AI (artificial superintelligence): a system that “outperforms 100% of skilled humans” (deeplearning.ai). |
IBM (Think Blog), 2023 | “An AGI is a next-generation AI system that can understand the world and learn and apply problem-solving intelligence as broadly and flexibly as a human can.” (ibm.com) | “ASI is a hypothetical AI system with an intellectual scope beyond human intelligence, having cognitive skills more advanced than any human.” (ibm.com) |
UK House of Lords Library, 2023 (citing Stanford research) | “AGI… is an AI system that can undertake any intellectual task or problem that a human can. It can reason, analyze, and achieve a level of understanding on par with humans – something that has yet to be achieved by AI.” (lordslibrary.parliament.uk) | No existing AI qualifies as “superintelligent”; in general, ASI refers to a hypothetical AI that greatly surpasses human cognitive abilities (beyond the AGI level). |
Nick Bostrom (Oxford), 1998/2014 (independent expert) | Defines “human-level AI” similarly to AGI: an AI that can carry out most human professions at least as well as a typical human (emerj.com). | “By ‘superintelligence’ we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.” (nickbostrom.com) |
Predictions on the Timeline from AGI to ASI
Experts and leaders differ widely on how quickly AGI might transition to ASI. The table below compares various predictions – from surveys of AI researchers to statements by futurists and tech leaders – about the timeline for an AGI to evolve into an ASI. It lists the predicted time frame, the source, and the basis or reasoning (e.g. an expert survey or trend extrapolation) behind each prediction:
Source (Role, Year) | Predicted Timeline: AGI → ASI | Basis for Prediction |
---|---|---|
Müller & Bostrom (Oxford survey), 2014 | Once AGI is achieved, superintelligence could follow in as little as 2 years (10% probability) and would very likely arrive within 30 years (≈75% probability) (emerj.com). | Survey of AI researchers (emerj.com) – aggregated expert opinions on how quickly an AI at human level might surpass human intelligence. |
Sam Altman (OpenAI CEO), 2024 | AGI may arrive soon, and “it is possible that we will have superintelligence in a few thousand days” (roughly by 2034) (venturebeat.com). | CEO of OpenAI – based on rapid progress in AI capabilities and internal forecasts (from Altman’s essay “The Intelligence Age”). |
Elon Musk (Tech CEO), 2023 | AI could do “anything any human can do” by ~2025, and “do what all humans combined can do” (a true ASI) by 2028–2029 (venturebeat.com). | Tech leader known for urgency about AI – extrapolates from current AI advances and expects a fast “intelligence explosion” (venturebeat.com). |
Ray Kurzweil (Futurist), 2005 & 2023 | Predicted AGI by 2029 and a technological “Singularity” (ASI) by 2045 (venturebeat.com; govtech.com). He continues to stand by these timelines. | Extrapolation of exponential growth in computing (Moore’s Law) and technology (govtech.com) – a long-held futurist forecast of human-level AI followed by rapidly self-improving AI. |
Dr. Geoffrey Hinton (AI Pioneer), 2023 | Recently revised his view: AI could be “smarter than us” within ~20 years (≈50% chance), possibly in as little as 5–20 years, with considerable uncertainty (analyticsindiamag.com). | Shifted earlier estimates to faster timelines – cites the unexpected speed of recent AI progress (Hinton moved from 30–50 years down to under 20) (analyticsindiamag.com). |
Yann LeCun (Meta AI Chief), 2023 | Believes superintelligent AI is still far off – “years, if not decades” away, with no clear timeline (analyticsindiamag.com). | Skeptical perspective – argues current AI lacks fundamental components of general intelligence, so no AGI/ASI is imminent (analyticsindiamag.com). Prefers the term “human-level AI” and doubts a sudden jump in capability. |