
I plan on refining/adding content before sharing it more widely.
TL;DR: Segmenting arguments about ASI outcomes into isolated claims lets us identify sub-areas of broad consensus and provides Lego-like building blocks for constructing the range of most plausible outcomes.
Artificial Super Intelligence (ASI) is arguably the most transformative advancement humanity will ever make. Researchers in the field predict outcomes ranging from visions of utopia to utter destruction. There are too many unknowns to reason our way to a consensus conclusion; still, when talking with AI researchers, I find many areas of broad agreement.
This suggests an approach: identify and isolate areas of relative consensus, then combine subsets of them to envision plausible futures. Doing so lets us agree on a collection of plausible futures and highlights the specific sub-claims that separate the divergent thinking on this pressing existential question.
Below is a starter hierarchy of about a dozen plausible claims. The aim is to frame as many “Obvious” claims as possible to serve as the building blocks for more contentious conclusions. Ultimately, I would like an open wiki-style framework where researchers can post additional claims, with some mechanism to record consensus and dissent for each posited claim. For now, the focus is simply on capturing a handful of claims that tie together to form interesting conclusions. Feedback is most welcome at public@oblinger.us.
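To make that framework concrete, here is a minimal, hypothetical sketch (in Python) of one way a claim hierarchy and its consensus/dissent records could be represented. The class, field names, and researcher IDs are illustrative assumptions, not an existing system or API.

```python
# Hypothetical sketch of the claim hierarchy described above, with a simple
# mechanism for recording consensus and dissent per claim. All names and
# fields are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Claim:
    abbreviation: str                                      # short handle, e.g. "Can't Stop"
    statement: str                                          # the full "Claim: ..." text
    depends_on: list[str] = field(default_factory=list)    # abbreviations of supporting claims
    agree: set[str] = field(default_factory=set)            # researcher IDs endorsing the claim
    dissent: dict[str, str] = field(default_factory=dict)   # researcher ID -> stated objection

    def consensus_ratio(self) -> float:
        """Fraction of recorded respondents who endorse the claim."""
        total = len(self.agree) + len(self.dissent)
        return len(self.agree) / total if total else 0.0


# Example: a culminating claim built on two narrower sub-claims.
cant_stop = Claim(
    abbreviation="Can't Stop",
    statement="Humanity cannot stop its creation of AGI.",
    depends_on=["Natural Selection", "Corp AI"],
)
cant_stop.agree.update({"researcher_a", "researcher_b"})
cant_stop.dissent["researcher_c"] = "Coordinated moratoria remain possible."
print(f"{cant_stop.abbreviation}: {cant_stop.consensus_ratio():.0%} consensus")
```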
| Abbreviation | Claim |
| --- | --- |
| CULMINATING CLAIMS | |
| Can’t Stop | Claim: Humanity cannot stop its creation of AGI. |
| Lose Control | Claim: Humanity cannot maintain indefinite control over the ASI systems it creates (even if alignment is solved). |
| Our Best Hope | Claim: Our best hope is to shape our progressive loss of control in ways least likely to result in catastrophic harm. |
| CONSEQUENCES OF COMPETITION | |
| Competition Implies | Claim: The nature of competitive systems causes agents embedded within them to have many predictable properties. |
| Natural Selection | Claim: People, organizations, or AI agents embedded within a process of natural selection cannot individually halt or escape that process. |
| Corp AI | Claim: Nations, corporations, and individuals are, to a first approximation, profit/power-maximization mechanisms. |
| Civilizing Tendency | Claim: Competing agents will treat “civilizing” others as an instrumental goal, one they may adopt as a core part of their agenda. |
| NARROW SUB-CLAIMS | |
| Federated Minds | Claim: Physical and logical constraints will cause ASIs to have federated minds. |
| AGI Implies ASI | Claim: Achieving AGI will likely afford a specific kind of ASI at nearly the same time. |
| Loop Closing | Claim: Once the loop is closed over an entire job function, cost and productivity on that job function will jump discontinuously. |
| Not If But When | Claim: Execution lapses will eventually occur. Thus, safety guarantees are only meaningful if robust to those expected lapses. |
| Hiding Beats Finding | Claim: Finding grows dramatically more difficult than hiding as complexity increases. |
| Agentic Decompositionality | Claim: While not required of all AGI systems, knowledge, ability, and agency can be encoded as separable concerns. |
| Machine-Centric Comms | Claim: The fraction of machine-interpretable communication will grow to eclipse human-interpretable communication, both between and within intelligent agents. |
| [[Productive Capacity Framework]] | PRODUCTIVE CAPACITY FRAMEWORK |
| HALF BAKED CLAIMS | |
| Alignment Won’t Work | Claim: Because of a layered combination of reasons, humanity will fail to align the AIs we build. |
| Apex Predator | Claim: We are in the final stages of a planet-wide transition from one apex predator to another. |
| AGI Will Come | |
| Nature of AI Progress | Claim: There are specific aspects of successful AI systems that we can use to make informed predictions about the nature of future systems and our probable progress in developing them. |
| No Fast Takeoff | Claim: A unilateral fast-takeoff scenario may be less likely, and the risks from a slow takeoff greater, than is often assumed. |