Superintelligence: Paths, Dangers, Strategies - Preface

Preface

The book has been recommended by the likes of Elon Musk and Bill Gates, became a New York Times best seller, and offers remarkable insight into the possibility that machines will one day exceed the general intelligence of humans. At that point, humans would no longer be the most intelligent beings on the planet, and a new superintelligence would take their place as the dominant form of intelligence.

If we fail to control this emergence of superintelligence, humanity could face catastrophe. Notably, shortly after the book's release, Elon Musk stated that artificial intelligence is potentially a greater threat to humanity than nuclear weapons.

However, as the author, Nick Bostrom, points out, we have the advantage of building these machines ourselves, an advantage that other animals did not have when humans became the dominant species on Earth. Bostrom argues that we must design superintelligent machines to protect human values, yet it appears extraordinarily difficult to control what a superintelligence would be capable of doing. Once an unfriendly superintelligence exists, it could prevent us from replacing or destroying it, and “our fate would be sealed”.

Accordingly, the book sets out to address how humanity could best respond to superintelligence and the power it would wield.

“This is quite possibly the most important and most daunting challenge humanity has ever faced. And—whether we succeed or fail—it is probably the last challenge we will ever face.” 

Interestingly, Bostrom acknowledges that his claims may turn out to be wrong, yet, in his view, it would be a serious mistake to dismiss the possibility of superintelligence on those grounds.
