PURETICS...

Interesting Findings And World Unfolding Through My Eyes.

Monday, July 9, 2007

What If Artificial Intelligence Took Over All Of Us?

“Machine intelligence will pass the so-called Turing test by 2029.” So predicts inventor Ray Kurzweil in a public wager against Lotus founder Mitchell Kapor, who has bet that it will not. The Turing test, a challenge to see whether a computer can fool a human judge into thinking it is human, is a traditional benchmark for the point at which true Artificial Intelligence can be said to have been achieved - a historic moment, by any measure.
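
The protocol itself is simple enough to sketch in a few lines of code. The Python harness below is purely illustrative - the respondents, the judge, and the question are hypothetical stand-ins for the live participants a real test would require:

    import random

    def machine_reply(prompt):
        # Hypothetical toy contender; a real entrant would be a full dialogue system.
        return "That's an interesting question. What do you think?"

    def human_reply(prompt):
        # Stand-in for a live human participant.
        return "Honestly, I'd have to mull that one over."

    def run_turing_test(judge, questions):
        """Return True if the judge mistakes the machine for the human."""
        pair = [machine_reply, human_reply]
        random.shuffle(pair)  # hide which respondent is which
        labels = {"A": pair[0], "B": pair[1]}
        transcript = [(q, labels["A"](q), labels["B"](q)) for q in questions]
        guess = judge(transcript)  # the judge names the label it believes is human
        machine_label = "A" if labels["A"] is machine_reply else "B"
        return guess == machine_label  # True means the machine passed

    # A judge guessing at random is fooled half the time; the wager concerns
    # whether a machine can fool attentive human judges in open conversation.
    naive_judge = lambda transcript: random.choice(["A", "B"])
    print(run_turing_test(naive_judge, ["What is it like to taste coffee?"]))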

But with recent discussion of AI taking place in the context of a wager, debates have tended to focus on the difficulty of the problem rather than the implications - as though the arrival of true Artificial Intelligence would only mean the difference between a robot making your coffee and brewing it yourself.

What are the stakes, really? Why should this wager matter to you personally? And what, exactly, are the odds?

First Scenario: Kapor Wins. (No true AI by 2029)

Between now and 2029, the steady march of progress will continue; worker productivity will climb as technological innovation improves efficiency in most industries. Genetic engineering will make new headway in combating disease and improving food supplies. Nanotechnology - the engineering of materials and devices at the molecular level - will steadily mature, accelerating economic development.

As a consequence of these conditions, your standard of living will improve, your life expectancy will increase, and you will enjoy new leisure activities made possible by faster computers and richer interfaces (e.g. virtual reality). But during this time you will also endure the usual misfortunes of illness and injury, and one or more persons close to you will suffer a disease, accident, or age-related death. There is also a good chance that somewhere in the world, an intentional or accidental use of genetically engineered bio-weapons or self-replicating nanotechnology will cause casualties numbering in the millions. And there is a small but non-zero chance that such a disaster will bloom out of control and wipe out the human race.

Second Scenario: Kurzweil Wins. (True AI before 2029)

Between now and 2029, scientists will work out a functional design for true AI that possesses a core desire to understand and assist humanity (a characteristic called Friendliness by some researchers). While unimpressive at first, the new AI will learn quickly and receive extra computing capacity to increase its capabilities. Once mature, it will assist its programmers in the design of a next-generation AI. This process will be repeated a number of times with considerable improvements in both intelligence and Friendliness, and before too long will produce one or more minds that can only be called superintelligent. Applying phenomenal brilliance to the betterment of the human condition, Friendly superintelligence will ensure that nanotechnology and genetic engineering are quickly mastered to an extent that human scientists alone could never have reached. Technological progress will be so rapid as to fundamentally change our perception of civilization itself.
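
To make the shape of that feedback loop concrete, here is a deliberately crude back-of-the-envelope model. The starting capability, per-generation improvement, and "superintelligence" threshold below are invented numbers for illustration only, not figures from the scenario:

    def generations_to_superintelligence(start=0.1, human=1.0,
                                         improvement=1.5, threshold=100.0):
        """Count redesign generations until capability exceeds
        `threshold` multiples of human level."""
        capability, generation = start, 0
        while capability < threshold * human:
            capability *= improvement  # each generation helps design a better successor
            generation += 1
        return generation

    # Even a modest 50% gain per generation takes a system from a tenth
    # of human level past 100x human level in just 18 generations.
    print(generations_to_superintelligence())

The point of the toy model is simply that compounding improvement, however unimpressive at first, overtakes any fixed benchmark far sooner than linear intuition suggests.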

As a consequence of these conditions, you (and everyone else) will enjoy unconditional material prosperity and indefinite life expectancy - with the resulting time and means for pursuits that may include increasing your own intelligence and exploring the galaxy. You will be free to forgo most of the usual misfortunes of illness and injury, and no person close to you will suffer death from disease or old age unless they choose to. The same intelligence that allows for the mastery of genetic engineering and nanotechnology will also work to prevent the possibility of cataclysmic disasters stemming from these technologies. And other potential threats to our planet, such as asteroid strikes and climate change, will be averted or remedied with surprising ease.

You may feel that this second scenario sounds too good to be true; indeed, this is one reason why many people bet against it. It does, admittedly, depend on a number of things going right. But the chief requirement for a positive outcome is reasonably straightforward: namely, that the first AI to begin the spiraling cycle of increasing intelligence be engineered to share human compassion and values, and to preserve them through successive redesigns. Given success in this area, futurists generally accept that superintelligence could contribute hugely and positively to the human condition; in fact, they even have a name for the point at which greater-than-human intelligence starts changing the world: the Singularity.

It must be said, then, that the stakes in the Kurzweil/Kapor wager are, in fact, awesome. But what are the actual odds that AI will be developed anytime soon? Gambling metaphors fail, for predicting the Singularity is not like forecasting the weather or winning the lottery. The answer to the question of when true AI will be born depends entirely on the actions of real people, like you, who are free to participate in this discussion and support the causes they care about.

Will AI be possible in the near future? Yes. The human brain is extremely complicated and not yet fully understood, but AI engineers do not need to simulate the entire brain in silicon - only the patterns and features that give rise to general intelligence. And if all else fails, the brain can eventually be modeled in close detail. Though mysterious, the brain is tangible proof that intelligence can come in small packages.

AI naysayers would have us believe that the disappointing failure of AI projects over the last fifty years means that we cannot hope to achieve true Artificial Intelligence in the next fifty. However, as investment advertisements must always warn, past performance is no guarantee of future results - an axiom that applies to failure as well as success. Forward-looking individuals realize that, barring our own extinction, AI will eventually be created. But when and how AI comes into being will not depend on a roll of the dice or a spin of the wheel, but on how aggressively and responsibly we set about solving the problem. Think back to the above scenarios for a moment. Kapor and Kurzweil have each bet $10,000. But given the enormous qualitative difference between life before and after the Singularity, how much would it be worth to you to see Friendly AI happen sooner - whether by a few decades, a few years, or even just one day?

We are all participants in this wager, with the chips already down and the stakes astronomically high. But what are the odds?

The odds are whatever we choose to make them.

Posted by Ajay :: 9:49 AM :: 0 comments
