Wednesday, April 8, 2015

Web Secret #357: Understanding Artificial Intelligence

This is my explanation of part 2 of the mind-bending post from the "Wait But Why" blog. In this section, author Tim Urban explains AI (artificial intelligence):

There are three major AI categories:

1) Artificial Narrow Intelligence (ANI): AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does.

2) Artificial General Intelligence (AGI): Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’ve yet to do it.

3) Artificial Superintelligence (ASI): “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

As of now, humans have conquered the lowest caliber of AI: ANI.

The hard parts of trying to build AGI are probably not what you think they are. Build a computer that can multiply two ten-digit numbers in a split second—incredibly easy. Build one that can look at a dog and answer whether it’s a dog or a cat — spectacularly difficult.
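To make the contrast concrete, here is a minimal sketch in Python (my illustration, not from the original post): the "easy" task is one line of built-in arithmetic, while the "hard" task is left as a stub, because solving it takes a trained vision model rather than any built-in operation.

    # The "easy" task: multiply two ten-digit numbers in a split second.
    a = 9_876_543_210
    b = 1_234_567_890
    print(a * b)  # instant and exact: 12193263111263526900

    # The "hard" task: look at a picture and say dog or cat.
    # No built-in operation exists for this; it requires a trained
    # image classifier, so this function is only a placeholder.
    def dog_or_cat(image_pixels):
        raise NotImplementedError("needs a trained vision model")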

So how do we get there?

Here are the three most common strategies:

1) Reverse engineer the brain to figure out how evolution made such a rad thing — optimistic estimates say we can do this by 2030.

2) Try to make evolution do what it did before but for us this time (a genetic-algorithm approach; see the sketch after this list). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers.

3) Make this whole thing the computer’s problem, not ours. The idea is that we’d build a computer whose two major skills would be doing research on AI and coding changes into itself—allowing it to not only learn but to improve its own architecture.
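Strategy 2 describes what computer scientists call a genetic algorithm. Here is a toy sketch in Python of the breed-and-eliminate loop that paragraph describes; the bit-string "programs," the fitness function, and all the parameters are illustrative choices of mine, not anything from the original post.

    import random

    # Toy "natural selection" over programs modeled as 32-bit strings.
    TARGET = [1] * 32                        # the task a program should solve
    POP_SIZE, GENERATIONS, MUTATION = 50, 200, 0.01

    def fitness(program):
        # How successfully a "computer" does its task: bits matching the target.
        return sum(p == t for p, t in zip(program, TARGET))

    def breed(mom, dad):
        # Merge half of each parent's "programming" into a new computer.
        cut = len(mom) // 2
        child = mom[:cut] + dad[cut:]
        # Rare random mutations keep the search exploring new variants.
        return [1 - bit if random.random() < MUTATION else bit for bit in child]

    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        # Keep the most successful half; eliminate the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # Breed survivors to refill the population for the next iteration.
        population = survivors + [
            breed(random.choice(survivors), random.choice(survivors))
            for _ in range(POP_SIZE - len(survivors))
        ]

    print("Best fitness after", GENERATIONS, "generations:",
          fitness(max(population, key=fitness)))

Over many iterations, the best programs in the population climb toward the target, which is the whole point of the strategy: selection plus merging plus repetition does the designing for us.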

Sooner or later, one of these three methods will work. Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly.

Given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

It’ll suddenly be smarter than Einstein and we won’t know what hit us.

And it could happen by 2030.

I encourage everyone to read the entire post - parts 1 and 2. It's such a well-executed explanation of some very difficult topics. This is how the post concludes:

"It reminds me of Game of Thrones, where the characters occasionally note, “We’re so busy fighting each other but the real thing we should all be focusing on is what’s coming from north of the wall.”

That’s why people who understand superintelligent AI call it the last invention we’ll ever make — the last challenge we’ll ever face.

So let’s talk about it."
