Calculations Show Humans Can’t Contain Superintelligent Machines

In a new study, researchers from Germany’s Max Planck Institute for Human Development say they’ve shown that an artificial intelligence in the category known as “superintelligent” would be impossible for humans to contain with competing software.

That … doesn’t sound promising. But are we really all doomed to bow down to our sentient AI overlords?

The Max Planck Institute for Human Development, based in Berlin, studies how humans learn, and how we subsequently build and teach machines to learn. A superintelligent AI is one that exceeds human intelligence and can teach itself new things beyond human grasp. It’s this capacity for self-directed learning that drives a great deal of thought and research.

The institute’s press release points out that machines with this kind of independence already exist in some capacities. “[T]here are already machines that perform certain important tasks independently without programmers fully understanding how they learned it,” study coauthor Manuel Cebrian explains. “The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

Mathematicians, for example, use machine learning to help chase down the outstanding cases of famous conjectures and proofs. Scientists use it to come up with new candidate molecules to treat diseases. Yes, much of this research involves some amount of “brute force” solving: computers can race through billions of calculations and shorten these problems from decades or even centuries to days or months.
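
The article doesn’t name a specific computation, but as a toy illustration of what this kind of brute-force verification looks like, here is a short Python sketch. The choice of problem (Goldbach’s conjecture, which says every even number greater than 2 is the sum of two primes) is ours, not the study’s; research-scale searches apply the same pattern across billions of candidates:

```python
def is_prime(n: int) -> bool:
    """Trial division: slow but transparent primality test."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    factor = 3
    while factor * factor <= n:
        if n % factor == 0:
            return False
        factor += 2
    return True

def goldbach_pair(even_n: int):
    """Return primes (p, q) with p + q == even_n, or None if none exist."""
    for p in range(2, even_n // 2 + 1):
        if is_prime(p) and is_prime(even_n - p):
            return (p, even_n - p)
    return None

# Exhaustively verify the conjecture for every even number up to 1,000.
for n in range(4, 1001, 2):
    assert goldbach_pair(n) is not None, f"counterexample at {n}!"
print("Goldbach's conjecture holds for all even numbers up to 1,000")
```

Scaling the bound up is purely a matter of compute, which is exactly the quantity-over-insight trade the article describes.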

With hardware able to churn through so much at once, the boundary where sheer quantity of computation becomes a qualitative leap in capability isn’t always easy to pinpoint. Humans have long been wary of AI that can teach itself, and Isaac Asimov’s Three Laws of Robotics (along with generations of variations on them) have become instrumental to how people imagine we can protect ourselves from a rogue or evil AI. In essence, the laws dictate that a robot may not harm a human being and may not be instructed to harm one.

The problem, according to these researchers, is that we likely don’t have a way to enforce these laws or others like them. From the study:

“We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.”

Basically, a superintelligent AI would command so much knowledge that even designing a “container” capable of holding it would exceed human comprehension. Worse, the study’s computability argument shows that strict containment would require simulating the AI and deciding in advance whether it will ever do harm, which runs into the same wall as Turing’s halting problem: no algorithm can settle that question for every possible program. And there’s no guarantee we’d even be able to parse whatever medium the AI decides is best. It probably won’t look anything like our clumsy, humanmade programming languages.
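
To make the computability argument concrete, here is a minimal Python sketch of the diagonal trick behind Turing’s halting-problem proof, adapted to containment. Everything here is hypothetical scaffolding rather than code from the study: `is_contained` stands in for a perfect containment checker (which the paper argues cannot exist), and raising an exception stands in for “harmful behavior”:

```python
import inspect

def is_contained(program_source: str, data: str) -> bool:
    """Hypothetical perfect containment oracle: would return True iff
    running the program on the data never harms humans. The study's
    point is that no such total checker can exist, so it is stubbed
    here only to keep the sketch runnable."""
    raise NotImplementedError("no total containment checker can exist")

def adversary(program_source: str) -> None:
    """A program constructed to defeat any claimed containment checker."""
    if is_contained(program_source, program_source):
        # The oracle certified this program harmless, so it misbehaves
        # (an exception stands in for a harmful action).
        raise RuntimeError("harmful behavior")
    # The oracle flagged it as harmful, so it halts without doing anything.

if __name__ == "__main__":
    # Diagonal step: feed the adversary its own source code. Whichever
    # verdict the oracle returns comes out wrong, so a perfect checker
    # is a contradiction.
    source = inspect.getsource(adversary)
    try:
        adversary(source)
    except NotImplementedError as err:
        print("Containment oracle is impossible:", err)
```

This is the same self-reference that sinks the halting problem: any candidate containment algorithm can be handed a program built around its own verdict, and it will be wrong about that program.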

This might sound scary, but it’s also extremely important information for scientists to have. Without chasing the phantom of a “failsafe algorithm,” researchers can put their energy into other safeguards and exercise more caution.

