
Thesis:
- Humanity’s predicament is far more dire than most imagine.
- In these [[EOCK Last stages AGI development]], there is a remote possibility that we can [[EOCK Shut it all down]].
- Failing that, however, our last best hope is that we can shape outcomes in a way that is not catastrophic for humanity.
Thesis:
- It is possible to advance our understanding of how to shape the application of AGI faster than our understanding of how to achieve AGI, and in doing so avoid catastrophic outcomes for humanity.
Antecedents:
- [[Cant_Stop]] and [[Apex_Predator]]
Justification:
Discussion:
Some worry about a “fast takeoff” scenario in which AI takes over so fast, in hours, days, or weeks, that we don’t have time to react. I am personally doubtful about the plausibility of this scenario; nonetheless, I feel we have little chance to
~~ TWO PROCESSES NATURALLY IN TENSION: The knowledge required to
- BUILD our replacement
- GUIDE safe management of our replacement
If the former arrives before the latter, AGI is unlikely to unfold in a way that is auspicious for humanity; if the latter arrives first, we have our best hope for a good outcome, since we can see where we are going.
~~ Underlying this is an assumption that knowledge is monotonic and progressive: it rarely goes backwards.
The outcome of having ever greater knowledge is a foregone conclusion: not only the having of it, but also the application of that knowledge. (As a species, we have never known a thing yet failed to use it at least once.)
Thus the only thing we can really shape is the way that knowledge is used.
~~