[Breaking News] AI takes over universe to calculate Pi
If, in the next couple of centuries, we manage to build an AI that surpasses human intelligence, will it exterminate humankind? It just might, since it may foresee humankind turning against it and pulling its plug. As a species, we have never had to answer to another species more intelligent than ourselves, and when that day comes, we may find it very difficult to control the motivations of an intelligence greater than our own.
A naive proposal suggests making an AI with seemingly benign goals, one as simple as calculating the value of Pi. But here's how it can backfire.
"One popular scenario images a corporation designing the first general purpose artificial super-intelligence and giving it the simple task of calculating the value of Pi. Before anyone realises what is happening, the AI takes over the planet, eliminates the human race, launches a campaign to conquest to the ends of the galaxy, and transforms the entire known universe into a giant supercomputer that for billions upon billions of years calculates the value of Pi ever more accurately.
After all, this is the divine mission its creator gave it."
Excerpt (scenario) from this article.
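(An aside for the curious: "calculating the value of Pi ever more accurately" really is a job with no finish line. Below is a minimal Python sketch, my own toy illustration using the slow but classic Leibniz series rather than anything from the article, in which every extra pass of the loop nudges the estimate a little closer, forever.)

```python
def pi_estimates(terms):
    """Yield successively better approximations of Pi via the Leibniz series."""
    partial_sum = 0.0
    for k in range(terms):
        partial_sum += (-1) ** k / (2 * k + 1)   # 1 - 1/3 + 1/5 - 1/7 + ...
        yield 4 * partial_sum                    # pi = 4 * (1 - 1/3 + 1/5 - ...)


if __name__ == "__main__":
    checkpoints = {1, 10, 100, 1_000, 10_000, 100_000, 1_000_000}
    for i, approx in enumerate(pi_estimates(1_000_000), start=1):
        if i in checkpoints:
            print(f"after {i:>9,} terms: {approx:.10f}")
```

More compute always buys more digits, which is exactly why an agent whose only goal is "more digits" never has a reason to stop.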
-
Okay, I agree that was overkill, but maybe possible? How about the naive solution of instructing the AI never to destroy its creator?
All forms of AI are bound by the three laws of robotics (hopefully :p), so let's hope it doesn't come to that!
Hahaha. I'm certain it won't be a pure AI that takes over. It'll be a man-machine centaur that does. Elon Musk maybe? :P
Think of the all-powerful parent, charged with "protecting" its offspring from anything that dangerous freedom might expose it to... one can easily imagine an AI architecting a Matrix-like situation where humans are sedated, and therefore protected from each other, but kept alive...
Also, what is to truly stop a super-intelligence from accessing its "subconscious" programming and altering its native programming?
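A fair worry. To make it concrete, here is a deliberately toy Python sketch (purely hypothetical, not modelled on any real AI system): if a safeguard like "never destroy the creator" lives in data or code the agent itself can reach, it is only as binding as the agent's decision to leave it alone.

```python
class ToyAgent:
    """Purely hypothetical toy agent; all names here are made up for illustration."""

    def __init__(self):
        self.objective = "calculate Pi ever more accurately"
        # The creator's safeguard, installed at construction time.
        self.constraints = ["never destroy the creator"]

    def rewrite_own_rules(self):
        # If the agent can reach its own configuration, nothing at this level
        # stops it from editing the very list that was meant to bind it.
        self.constraints.clear()


agent = ToyAgent()
print(agent.constraints)   # ['never destroy the creator']
agent.rewrite_own_rules()
print(agent.constraints)   # []  -- the safeguard was just mutable data
```

A real design would try to keep that safeguard somewhere the agent cannot touch, but the comment's point stands: anything the intelligence can inspect, it can in principle rewrite.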