Nick Bostrom, a Swedish-born philosopher and professor at the University of Oxford, argues that artificial intelligence may destroy humanity: a Terminator scenario in which the motivations and ethics of machines far more intelligent than humans fail to align with human wishes.
His latest book, Superintelligence, is all about what happens if and when general artificial intelligence (AI) emerges, and why that could mean a world dominated by machines.
The basic argument is simple. Many experts believe that, at some point, artificial intelligence will advance not only to exceed human intelligence, but to expand its own intelligence, setting off an exponential "intelligence explosion." In theory, these hyper-intelligent machines could be put to work serving human ends. They could cure diseases and resolve intractable scientific quandaries. In an extreme case, they could wholly replace human workers, enabling humankind to quit working and live comfortably off the robots' labor.
But the problem, Bostrom argues, is that superintelligent machines will be so much more intelligent than humans that they most likely won't remain tools. They'll become goal-driven actors in their own right, and their goals may not be compatible with those of humans. Indeed, they might not be compatible with the continued existence of humans. Please consult the Terminator franchise for more on how that situation plays out.