In Chapters 1 and 2 of Superintelligence, Nick Bostrom outlines the history of Artificial Intelligence (AI) research and efforts to predict its progress over time. He focuses on AI as the most convincing path to superintelligence. While explaining the limitations of expert prognostications, Bostrom summarizes current expert opinion as follows:
It may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century, and that it has a non-trivial chance of being developed considerably sooner or much later; that it may perhaps fairly soon thereafter result in superintelligence; and that a wide range of outcomes may have a significant chance of occurring, including extremely good outcomes and outcomes that are as bad as human extinction. (21)
Three examples illustrate Bostrom's claims: modern chickens, transhumanist fables, and Fermi's estimates.