
Saturday 27 July 2013

90. The Future of Intelligence



'Technology, hailed as the means of bringing nature under the control of our intelligence, is enabling nature to exercise intelligence over us' (Dyson 1997).

We are orders of magnitude more intelligent than any other species around. But can we become even more intelligent as a species? Stephen Hawking made an interesting suggestion. To the extent that our typical intelligence correlates with our typical brain size, there is scope for manipulation and improvement. Our present average brain size is limited by the size of the birth canal through which a newborn baby has to pass. This limitation could be overcome by carrying out conception, and the entire growth of the foetus, outside the womb.


 But it is more likely that the future of our intelligence will be influenced by developments underway in robotics. Progress in machine intelligence will have absolutely mind-boggling effects on our intelligence. Why?

At present we make a distinction between two kinds of intelligence: biological or organic intelligence, and machine or inorganic intelligence. A composite (organic-inorganic, or man-machine) intelligence will evolve in the near future (cf. Part 89). It appears inevitable that, aided by human beings, an empire of inorganic life (intelligent robots) will evolve, just as biological or organic life has evolved. We are about to enter a post-biological world, in which machine intelligence, once it has crossed a certain threshold, will not only undergo Darwinian and Lamarckian evolution on its own, but will do so millions of times faster than the biological evolution we are familiar with so far. The result will be intelligent structures with a composite, i.e. organic-inorganic or man-machine, intelligence.


Moravec's (1999) book, 'Robot: Mere Machine to Transcendent Mind', sets out a possible scenario. He expects robots to model themselves on successful biological forms. One such form - used by trees, the human circulatory system, and basket starfish - is a network of ever-finer branches.

The 'bush robot', built on this branching plan, is one likely development in Moravec's scheme of things: 'Twenty-five branchings would connect a meter-long stem to a trillion fingers, each a thousand atoms long and able to move about a million times per second.'
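As a rough check of the numbers in that quote (a sketch in Python; the quote does not state the branching factor, so a split into three at each branching is assumed here):

    # Rough check of the bush-robot numbers quoted above.
    # Assumption (not from the quote): each branch splits into 3 at every level.
    branching_factor = 3
    branchings = 25
    fingers = branching_factor ** branchings
    print(f"{fingers:.2e} fingers")   # ~8.5e11, i.e. of the order of a trillion

A branching factor of three thus reproduces the 'trillion fingers' order of magnitude.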


Medical applications are one among the many likely uses of the bush robot listed by Moravec: 'The most complicated procedures could be completed almost instantaneously by a trillion-fingered robot, able, if necessary, to simultaneously work on almost every cell of a human body.'

When will all this happen, and what will be its possible bearing on our intelligence? Estimates vary widely. The most optimistic ones are those of Kurzweil. This is what he wrote in 1999: 'Sometime early in the next century, the intelligence of machines will exceed that of humans. Within several decades, machines will exhibit the full range of human intellect, emotions and skills, ranging from musical and other creative aptitudes to physical movement. They will claim to have feelings and, unlike today’s virtual personalities, will be very convincing when they tell us so. By 2019 a $1,000 computer will at least match the processing power of the human brain. By 2029 the software for intelligence will have been largely mastered, and the average personal computer will be equivalent to 1,000 brains.'

There were reasons for the optimism exuded by Kurzweil: 'We are already putting computers – neural implants – directly into people’s brains to counteract Parkinson’s disease and tremors from multiple sclerosis. We have cochlear implants that restore hearing. A retinal implant is being developed in the U.S. that is intended to provide at least some visual perception for some blind individuals, basically by replacing certain visual-processing circuits of the brain. Recently scientists from Emory University implanted a chip in the brain of a paralyzed stroke victim that allows him to use his brainpower to move a cursor across a computer screen.'


In his book The Age of Spiritual Machines (1999), Kurzweil enunciated his Law of Accelerating Returns, which simply paraphrases the occurrence of positive feedback in the evolution of complex adaptive systems in general, and biological and artificial evolution in particular; the law embodies exponential growth of evolutionary complexity and sophistication: 'advances build on one another and progress erupts at an increasingly furious pace. . . . As order exponentially increases (which reflects the essence of evolution), the time between salient events grows shorter. Advancement speeds up. The returns – the valuable products of the process – accelerate at a nonlinear rate. The escalating growth in the price performance of computing is one important example of such accelerating returns. . . . The Law of Accelerating Returns shows that by 2019 a $1,000 personal computer will have the processing power of the human brain – 20 million billion calculations per second. . . . Neuroscientists came up with this figure by taking an estimation of the number of neurons in the brain, 100 billion, and multiplying it by 1,000 connections per neuron and 200 calculations per second per connection. By 2055, $1,000 worth of computing will equal the processing power of all human brains on Earth (of course, I may be off by a year or two).'
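The '20 million billion' figure follows directly from the numbers Kurzweil cites in the quote; a quick check in Python:

    # Kurzweil's brain-capacity estimate, using the numbers quoted above
    neurons = 100e9                          # 100 billion neurons
    connections_per_neuron = 1000
    calcs_per_second_per_connection = 200
    total = neurons * connections_per_neuron * calcs_per_second_per_connection
    print(f"{total:.0e} calculations per second")   # 2e+16, i.e. 20 million billion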

The law is similar to Moore's law, except that it is applicable to all 'human technological advancement, the billions of years of terrestrial evolution' and even 'the entire history of the universe'.


Kurzweil's latest book How to Create a Mind (2012) has been summarized in considerable detail at http://newbooksinbrief.com/2012/11/27/25-a-summary-of-how-to-create-a-mind-the-secret-of-human-thought-revealed-by-ray-kurzweil/. He puts forward a 'pattern recognition theory' of how the brain functions, similar to Jeff Hawkins' theory published in his famous book On Intelligence: How a New Understanding of the Brain will Lead to the Creation of Truly Intelligent Machines (2004). According to Kurzweil, our neocortex contains 300 million very general pattern-recognition circuits which are responsible for most aspects of human thought, and a computer version of this design could be used to create artificial intelligence more capable than the human brain. As computational power grows, machine intelligence will represent an ever-increasing percentage of the total intelligence on the planet. Ultimately this will lead (by 2045) to the 'Singularity', a merger between biology and technology. 'There will be no distinction, post-Singularity, between human and machine intelligence . . .'.
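To make the idea of a hierarchy of pattern recognisers concrete, here is a minimal toy sketch in Python (my own illustration, not Kurzweil's or Hawkins' actual model): leaf recognisers fire on single characters, and a higher-level recogniser fires when its expected sequence of child patterns has fired.

    # Toy hierarchy of pattern recognisers (illustrative only; not Kurzweil's
    # actual neocortex model). Leaf recognisers fire on single characters;
    # higher-level recognisers fire when their children fire in sequence.

    class Recognizer:
        def __init__(self, name, sequence):
            self.name = name          # e.g. the letter 'A' or the word 'APPLE'
            self.sequence = sequence  # child Recognizers, or raw characters for leaves
            self.position = 0         # how far along the expected sequence we are

        def feed(self, char):
            """Feed one character; return True if this recogniser fires."""
            expected = self.sequence[self.position]
            hit = expected.feed(char) if isinstance(expected, Recognizer) else (expected == char)
            if hit:
                self.position += 1
                if self.position == len(self.sequence):
                    self.position = 0
                    return True       # whole pattern recognised
            else:
                self.position = 0     # simple reset on mismatch
            return False

    # Build a two-level hierarchy: characters -> word
    letters = {c: Recognizer(c, [c]) for c in "APLE"}
    word = Recognizer("APPLE", [letters[c] for c in "APPLE"])

    for ch in "APPLE":
        if word.feed(ch):
            print("recognised:", word.name)

The real proposal, of course, involves hundreds of millions of such recognisers learning their own patterns; the sketch only shows the compositional principle.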

It is only a matter of time before we merge with the intelligent machines we are creating.

Stephen Hawking expressed the fear that humanity may destroy itself in a nuclear holocaust, and suggested the escape of at least a few individuals into outer space as a way of preserving the human race. But, for all our bravado, our bodies are delicate things that can survive only in a narrow range of temperatures and other environmental conditions. Our robots will not suffer from that handicap, and will be able to withstand high radiation fields, extreme temperatures, near-vacuum conditions, etc. Such robots (or even cyborgs) will be able to communicate with one another, with the inevitable possibility of developing distributed intelligence. And since each such robot will already be way ahead of us in intelligence, a distributed superintelligence will emerge, capable, of course, of further evolution. The ever-evolving superintelligence and knowledge will benefit each agent in the network, leading to a snowballing effect.

Further, the superintelligent agents may organize themselves into a hierarchy, rather like what occurs in the human neocortex. Such an assembly would be able to see incredibly complex patterns and analogies which escape our comprehension, leading to a dramatic increase in our knowledge and understanding of the universe. Moravec expressed the view that this superintelligence will advance to a level where it is more mind than matter, suffusing the entire universe. We humans will be left far behind, and may even disappear altogether from the cosmic scene.


An alternative though similar picture was painted by Kurzweil (2005), envisioning a coevolution of humans and machines via neural implants that will enable an uploading of the human carbon-based neural circuitry into the prevailing hardware of the intelligent machines. Humans will simply merge with the intelligent machines. The inevitable habitation of outer space and the further evolution of distributed intelligence will occur concomitantly. Widely separated intelligences will communicate with one another, leading to the emergence of an omnipresent superintelligence.

You want to call that 'God'?  Don't. That omnipresent superintelligence would be our creation; a triumph of our science and technology; a result of what we humans can achieve by adopting the scientific method of interpreting data and information.
'We are the brothers and sisters of our machines' (Dyson 1997).

Saturday 20 July 2013

89. Evolution of Machine Intelligence


'Our intelligence, as a tool, should allow us to follow the path to intelligence, as a goal, in bigger strides than those originally taken by the awesomely patient, but blind, processes of Darwinian evolution. By setting up experimental conditions analogous to those encountered by animals in the course of evolution, we hope to retrace the steps by which human intelligence evolved. That animals started with small nervous systems gives confidence that today’s small computers can emulate the first steps toward humanlike performance’ (Moravec 1999).

Moravec (2000) estimated that artificial smart structures in general, and robots in particular, will evolve millions of times faster than biological creatures, and will surpass humans in intelligence long before the present century is over (cf. Part 88). At present this evolution in machines is being assisted by us, and this makes two big differences, both of which can accelerate its speed: (i) unlike ‘natural’ Darwinian evolution (which is not goal-directed), artificial evolution has goals set by us; (ii) Lamarckian evolution is not at all taboo in artificial evolution (in biological evolution it is forbidden by the central dogma of molecular biology).
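The difference between the two modes can be illustrated with a toy hill-climbing example in Python (a sketch of my own, not from Moravec): each individual 'learns' during its lifetime by local search, and in the Lamarckian variant the learned improvement is written back into the genome before reproduction.

    # Toy comparison of Darwinian vs Lamarckian evolution on a simple
    # one-dimensional fitness landscape (illustrative sketch only).
    import random

    def fitness(x):
        return -(x - 10.0) ** 2          # goal (set by us): x = 10

    def learn(x, steps=5, step=0.5):
        """Lifetime learning: simple local hill climbing."""
        for _ in range(steps):
            candidate = x + random.uniform(-step, step)
            if fitness(candidate) > fitness(x):
                x = candidate
        return x

    def evolve(lamarckian, generations=50, pop_size=20):
        population = [random.uniform(0, 1) for _ in range(pop_size)]
        for _ in range(generations):
            learned = [learn(x) for x in population]
            # selection acts on the learned (phenotypic) fitness in both cases
            ranked = sorted(zip(learned, population), key=lambda p: fitness(p[0]), reverse=True)
            parents = ranked[:pop_size // 2]
            # Lamarckian: offspring inherit the learned value; Darwinian: the original genome
            genomes = [l if lamarckian else g for l, g in parents]
            population = [g + random.gauss(0, 0.1) for g in genomes for _ in range(2)]
        return max(fitness(learn(x)) for x in population)

    print("Darwinian :", evolve(lamarckian=False))
    print("Lamarckian:", evolve(lamarckian=True))

In such toy runs the Lamarckian population typically closes in on the goal noticeably faster, which is the point being made above.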

Sophisticated machine intelligence, successfully modelled on the human neocortex, should be only a few decades away. This optimism stems partly from the observed trends in computational science. As I mentioned in Part 88, the megabytes-to-MIPS ratio has remained remarkably constant (~1 second) during much of the history of universal, general-purpose computers. Extrapolation of this trend, together with projections about what lies in store after Moore’s law has run its course, tells us that the cost per unit of computing power in universal computers will continue to fall rapidly. This will have a direct bearing on the rate of progress in the field of computational intelligence.

As Moravec (1999) pointed out, machines already read text, recognize speech, and even translate languages. Robots drive cross-country, crawl across Mars, and trundle down office corridors. He also discussed how the classical creations of the music-composition program EMI have pleased audiences, who rate it above most human composers. The chess program Deep Blue, in a first for machinekind, won the first game of the 1996 match against Garry Kasparov.

It appears certain to many that we shall indeed be able to evolve machine intelligence comparable to human intelligence in sophistication. As argued by Moravec, a fact of life is that biological intelligence has evolved from, say, insects to humans. There is a strong parallel between the evolution of robot intelligence and biological intelligence that preceded it. The largest nervous systems doubled in size every fifteen million years or so, since the Cambrian explosion 550 million years ago. Robot controllers double in complexity (processing power) every year or two. They are now barely at the lower range of vertebrate complexity, but should (hopefully) catch up with us within half a century. This will happen so fast because artificial evolution is being assisted by the intelligence of humans, and not just determined by the blind processes of Darwinian evolution.
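A back-of-the-envelope version of that comparison (a sketch in Python; the 'gap' factor below is an assumption chosen only for illustration, while the doubling time comes from the paragraph above):

    # How quickly a doubling process closes a large gap (illustrative numbers).
    import math

    gap = 1e5                  # assumed factor by which robot controllers trail human-level complexity
    doubling_time_years = 2    # 'every year or two' -- the slower end of the range quoted above

    doublings_needed = math.log2(gap)
    print(f"{doublings_needed:.0f} doublings, i.e. about {doublings_needed * doubling_time_years:.0f} years")
    # ~17 doublings, i.e. roughly 30-35 years at one doubling every two years

Even with conservative assumptions, the crossover lands comfortably within half a century, which is the comparison Moravec draws.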

In due course, probably within this century, intelligent robots will have evolved to such an extent that they will take their further evolution into their own hands. The scenario beyond this crossover stage has been the subject of much debate. For example, the books by Moravec (1999) and Kurzweil (2005) continue to invite strong reactions. I mentioned motes in Part 87. Perceptive networks using motes will be increasingly used, and not just for spying. Such distributed supersensory systems will not only have swarm intelligence, they will also undergo evolution with the passage of time. As in the rapid evolution of the human brain, both the gene pool and the meme pool will be instrumental in this evolution of distributed intelligence. This ever-evolving superintelligence and knowledge-sharing will be available to each agent of the network, leading to a snowballing effect.


However, estimates about the speed with which machine intelligence will evolve continue to be uncertain. For example, this is what Moravec wrote in 2003: 'Before mid century, fourth-generation universal robots with humanlike mental power will be able to abstract and generalize. The first ever AI programs reasoned abstractly almost as well as people, albeit in very narrow domains, and many existing expert systems outperform us. But the symbols these programs manipulate are meaningless unless interpreted by humans. For instance, a medical diagnosis program needs a human practitioner to enter a patient's symptoms, and to implement a recommended therapy. Not so a third-generation robot, whose simulator provides a two-way conduit between symbolic descriptions and physical reality. Fourth-generation machines result from melding powerful reasoning programs to third-generation machines. They may reason about everyday actions with the help of their simulators (as did one of the first AI programs, the geometry theorem prover written in 1959 at IBM by Herbert Gelernter. This program avoided enormous wasted effort by testing analytic-geometry "diagram" examples before trying to prove general geometric statements. It managed to prove most of the theorems in Euclid’s “Elements,” and even improved on one). Properly educated, the resulting robots are likely to become intellectually formidable, besides being soccer stars'. But there are some who believe that things may not move that fast. There is a reason for this pessimism, as best expressed by Nolfi and Floreano (2000):

'The main reason why mobile robots are difficult to design is that their behaviour is an emergent property of their motor interaction with the environment. The robot and the environment can be described as a dynamical system because the sensory state of the robot at any given time is a function of both the environment and of the robot's previous actions. The fact that behaviour is an emergent property of the interaction between the robot and the environment has the nice consequence that simple robots can produce complex behaviour. However it also has the consequence that, as in all dynamical systems, the properties of the emergent behaviour cannot easily be predicted or inferred from a knowledge of the rules governing the interactions. The reverse is also true: it is difficult to predict which rules will produce a given behaviour, since behaviour is the emergent result of the dynamical interaction between the robot and the environment'.
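That robot-environment coupling is easy to see even in a toy simulation (a Python sketch of my own, not from Nolfi and Floreano): a two-sensor vehicle with fixed, trivially simple wiring, whose trajectory nevertheless emerges from the feedback loop between its motion and what its sensors then see.

    # Toy Braitenberg-style vehicle: two light sensors, two wheels, fixed wiring.
    # The homing-in behaviour is nowhere in the rules; it emerges from the loop
    # between movement and the subsequent sensor readings.
    import math

    light = (5.0, 5.0)                      # position of a light source
    x, y, heading = 0.0, 0.0, 0.0           # robot state

    def sensor_reading(sx, sy):
        """Light intensity falls off with squared distance."""
        d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
        return 1.0 / (1.0 + d2)

    for step in range(200):
        # two sensors mounted at +/- 0.5 rad from the heading
        left = sensor_reading(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
        right = sensor_reading(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
        # crossed excitatory wiring: left sensor drives right wheel and vice versa
        left_wheel, right_wheel = right, left
        heading += 10.0 * (right_wheel - left_wheel)   # differential drive turns the robot
        speed = 0.5 * (left_wheel + right_wheel)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        if step % 50 == 0:
            print(f"step {step:3d}: position ({x:.2f}, {y:.2f})")

Nothing in the wiring says 'seek the light'; the seeking behaviour is an emergent property of the robot-environment loop, exactly as the quote describes.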



But there are some inveterate enthusiasts as well, who are expecting an incredibly rapid narrowing of the distinction between humans and robots, and even a merging of the two identities. Here is what Kurzweil wrote (in 2012): 'My sense is we're making computers in our own image and we'll be merging -- we already have -- with that technology. We're going to use those tools to make ourselves more intelligent'. The title of his latest book, How to Create a Mind: The Secret of Human Thought Revealed, published in November 2012, says it all. The intelligence of machines and humans will now evolve together, in a symbiotic way.


Saturday 13 July 2013

88. Evolution of Computer Power per Unit Cost


Continuing our discussion of robots, let us now take a look at their control centres, namely the computers. There are three parameters to consider for the computational underpinnings of robotic action: the processing power (or speed) of the computer; its memory size; and the price for a given combination of processing power and memory size.

The processing power or computing power can be quantified in terms of ‘million instructions per second’ or MIPS, and the size of the memory is specified in megabytes. By and large, the MIPS and the megabytes for a computer cannot be chosen independently: barring some special applications, the two should be reasonably well matched (per unit cost) for optimal performance across a variety of applications.

An analysis by Hans Moravec (1999) revealed that, for general-purpose or 'universal computers', the ratio of memory (the megabytes) to speed (the MIPS) has remained remarkably constant during the entire history of computers. A ‘time constant’ can be defined here as roughly the time it takes for a computer to scan its own memory once. One megabyte per MIPS gives one second as the value of this time constant. This value has remained amazingly constant as progressively better universal computing machines have been developed over the decades, as depicted in Moravec's (1999) 'evolution slide':


Machines having too much memory for their speed are too slow (for their price), even though they can handle large programs. Conversely, lower-memory, higher-speed computers cannot handle large programs, in spite of being fast. Therefore, special jobs require special computers (rather than universal computers), entailing higher costs, as well as a departure from the above universal time constant. For example, IBM’s Deep Blue computer, developed for competing with the chess legend Garry Kasparov (in 1996-97), had more speed than memory (~3 million MIPS and ~1000 megabytes, instead of the universally optimal combination of, say, 1000 MIPS and 1000 megabytes). Similarly, the MIPS-to-megabytes ratio of the computers running certain aircraft is also skewed in favour of MIPS.
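Moravec's one-second time constant, and the way Deep Blue departs from it, can be read straight off the numbers above (a quick Python check):

    # Time constant = memory / speed, i.e. roughly how long the machine takes
    # to scan its own memory once (numbers taken from the text above).
    def time_constant(megabytes, mips):
        return megabytes / mips          # seconds, since 1 MIPS works through roughly 1 megabyte per second

    print(time_constant(1000, 1000))     # balanced universal machine: 1.0 second
    print(time_constant(1000, 3e6))      # Deep Blue-like machine: ~0.0003 s, heavily skewed towards speed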

Examples of the other kind, namely slow machines (less MIPS than megabytes), include time-lapse security cameras and automatic data libraries.

Moravec estimated in 1999 that the most advanced supercomputers available at that time were within a factor of 100 of having the power to mimic the human brain. But such supercomputers come at a prohibitive cost. Costs must fall if machine intelligence is to make much headway. Although this has indeed been happening for a whole century, what about the future? How long can this go on? The answer is: for quite a while, provided technological breakthroughs, or new ideas for exploiting technologies, keep coming.

An example of the latter is the use of multicore processors. Multicore chips, even for PCs, are already in the market. Nvidia introduced a chip, the GeForce 8800, capable of a million MIPS and yet low-cost enough for commonplace applications; it has 128 processors on a single chip, for specific functions including high-resolution video display. In a multicore processor, two or more processor cores on a chip process data in tandem. For example, one core may handle a calculation, a second may input data, while a third sends instructions to an operating system. Such load-sharing and parallel functioning improves speed and performance, and reduces energy consumption and heat generation.
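A minimal sketch of such load-sharing in Python (illustrative only; real robot controllers would distribute very different workloads across their cores):

    # Spread independent, compute-bound chunks of work across the available cores.
    from concurrent.futures import ProcessPoolExecutor

    def heavy_calculation(n):
        # stand-in for a compute-bound task, e.g. processing one block of sensor data
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        workloads = [1_000_000] * 8
        with ProcessPoolExecutor() as pool:          # one worker process per core by default
            results = list(pool.map(heavy_calculation, workloads))
        print(len(results), "chunks processed in parallel")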


Nanotechnology holds further promise for next-generation solutions for faster and cheaper computation. DNA computing is one alternative approach being investigated; this technique has the potential for massive parallelism.
 

Quantum computing is another exciting possibility.


A factor hindering rapid progress in robotics has been the high cost of sensors and actuators. Progress in nanotechnology (e.g. the development of MEMS, i.e. microelectromechanical systems) is resulting in continuously falling costs of sensors and actuators. It is now far less expensive to incorporate GPS (Global Positioning System) chips, video cameras, array microphones, etc., into robots.

Bill Gates announced the development by his company Microsoft of universally applicable software packages that would further facilitate the use of ordinary PCs for controlling and developing robots of ever-increasing sophistication. Many robots already have PC-based controllers. It is anticipated that a large-scale move of robotics towards a universally applicable PC-based architecture will cut costs and reduce the time needed for developing new configurations of autonomous robots. In the 1970s the development of Microsoft BASIC provided a common foundation that made it possible for software written for one set of hardware to run on another. Something similar has been happening in robotics.

One of the challenging problems faced in robotics was that of concurrency, namely how to process simultaneously the large amount of data coming in from a variety of sensors, and send suitable commands to the actuators of the robot. The approach adopted till recently was to write a ‘single-threaded’ program that first processes all the input data and then decides on the course of action, before starting the long loop all over again. This was not a happy situation, because action taken on the basis of one set of input data may come too late for safety and other requirements, even when subsequent input data indicate that a drastically different course of action is needed.

To solve this problem, one must write ‘multi-threaded’ programs that allow data to travel along many paths. This tough problem has been tackled by Microsoft by developing what is called CCR (‘concurrency and coordination runtime’). Although CCR was originally meant to exploit the advantages of multicore and multiprocessor systems, it may well be just the right thing for robots also. The CCR is basically a library of functions that can perform specific tasks. It helps in developing multithreaded applications quickly, for coordinating a number of simultaneous activities of a robot. Of course, competing approaches already exist, not only to CCR, but also to the so-called ‘decentralized software services’ (DSS) I mentioned in Part 87.
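To illustrate the multi-threaded pattern being described (a generic Python sketch, not Microsoft's CCR or DSS): one thread keeps reading sensors, another keeps issuing actuator commands, and a queue decouples the two so that neither has to wait for a long sequential loop to come round again.

    # Generic multi-threaded robot-control skeleton (illustrative; not CCR/DSS).
    # A sensor thread and an actuator thread run concurrently, connected by a queue.
    import queue
    import threading
    import time

    commands = queue.Queue()

    def sensor_loop():
        for reading in range(5):                 # pretend sensor readings arrive over time
            time.sleep(0.1)
            commands.put(f"steer for reading {reading}")   # decide a command from the reading
        commands.put(None)                       # sentinel: no more data

    def actuator_loop():
        while True:
            command = commands.get()             # blocks until a command is available
            if command is None:
                break
            print("actuator executing:", command)

    threads = [threading.Thread(target=sensor_loop), threading.Thread(target=actuator_loop)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()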

Although low-cost universal robots will be run by universal computers, their proliferation will have more profound consequences than those engendered by low-cost universal computers alone. Computers only manipulate symbols; they basically do ‘paperwork’, although the end results of such paperwork can indeed be used for, say, automation. A sophisticated universal robot goes far beyond mere paperwork: it engages in perception and action in real-life situations. The real world presents a far greater diversity of situations than paperwork does, and in far greater numbers. There would thus be many more universal robots in action than universal computers. This, of course, will happen only when the cost per unit capability falls to low levels.

It has been asked why Moravec never updated the 1999 'evolution slide' I showed at the beginning of this post: 'If the graph's projections were correct and if we had the software available today, we would be able to purchase a computer to simulate the human brain in 2010 for $1000'. But this has not happened.

Perhaps Moravec's estimate for the number of MIPS needed to simulate the human brain was off by a factor between 1000 and 10,000,000. Taking the factor of 1000 as a better estimate, the modified 'evolution slide' looks as follows:



When will robotic intelligence go past human intelligence? Perhaps by 2050. Perhaps a little later than that.
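A rough back-of-the-envelope check of that date, in Python (a sketch only: the brain figure uses Moravec's often-quoted estimate of about 100 million MIPS, which is not stated explicitly above, together with the factor-of-1000 correction from the previous paragraph; the 2013 PC figure and the doubling time are assumptions for illustration):

    # Very rough estimate of when $1000 buys human-brain-equivalent processing power.
    import math

    human_equivalent_mips = 1e8 * 1e3     # ~100 million MIPS (Moravec), times the factor-1000 correction
    mips_per_1000_dollars_in_2013 = 1e5   # assumed figure for a ~$1000 PC around the time of writing
    doubling_time_years = 2               # assumed Moore's-law-like doubling of MIPS per dollar

    doublings = math.log2(human_equivalent_mips / mips_per_1000_dollars_in_2013)
    print(f"crossover around {2013 + doublings * doubling_time_years:.0f}")   # roughly the 2050s

With these assumptions the crossover lands in the 2050s, consistent with the 'perhaps by 2050, perhaps a little later' guess above.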