
Saturday 28 April 2012

25. Laplace's Demon



Reductionism is the philosophy that all phenomena can be explained by reducing them to their simplest components. The basic premise is that everything can ultimately be explained in terms of the bottom-level laws of physics. The spirit behind this approach is that the universe is governed by natural laws which are fixed and comprehensible at the fundamental level.

Reductionism has excellent validity where it is applicable, namely for simple or simplifiable (rather than complex) systems. Reductionistic science has had remarkable successes in predicting, for example, the occurrence of solar eclipses with a very high degree of precision in both space and time.


An approach related to reductionism is constructionism, which says that we can start from the laws of physics and predict all that we see in the universe. However, both reductionism and constructionism assume the availability of data of infinite precision and accuracy, as well as unlimited time and computing power. Moreover, random events at critical junctures in the evolution of complex systems can make it impossible for us to always make meaningful predictions.


The philosophy of scientific determinism flourished before the advent of quantum physics, and was first articulated in print by Pierre-Simon Laplace in 1814. Building on the work of Gottfried Leibniz, he said:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Imagine such a superintelligent, superhuman creature (the 'Laplace demon'), who knows at one instant of time the position and momentum of every particle in the universe, as well as the forces acting on the particles and all the initial conditions. Assuming the availability of a good enough supercomputer, could the Laplace demon predict the future in every detail? The answer may be 'yes' (for some classical systems) if unlimited computational power and time are available. But in reality there are limits on the speed of computation, as well as on the extent of computation one can do. These limits are set by the laws of physics and by the finite resources available in the universe. Here are some of these limits:
  • The bit is the basic unit of information, and the bit-flip the basic operation of information processing. It costs energy to process information, and energy and time uncertainties are related through the Heisenberg principle of quantum mechanics. This puts a lower limit on the time needed to process information with a given amount of energy (see the sketch after this list).
  • The finite speed of light puts an upper limit on the speed at which information can be exchanged among the constituents of a processor.
  • A third limit is imposed by entropy which, being 'missing information' (cf. Part 22), is the opposite of available information: One cannot store more bits of information in a system than permitted by its entropy.
  • Most natural phenomena or interactions are nonlinear, rather than linear. This can make the dynamics of even deterministic systems unpredictable.
  • In the time-evolution of many systems, there are events which are necessarily random, and therefore cannot be predicted, except in probabilistic terms.
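
To get a feel for the first of these limits, here is a rough back-of-the-envelope sketch in Python. It combines the Heisenberg energy-time relation (Δt ≳ ħ/2ΔE) with Landauer's minimum energy of kT ln 2 for processing one bit at room temperature; the particular numbers are illustrative, not a rigorous bound.

# Rough lower bound on the time per bit operation, assuming the Heisenberg
# energy-time relation dt >= hbar/(2*dE) and Landauer's minimum energy
# k*T*ln(2) per bit at room temperature.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
k_B  = 1.380649e-23      # Boltzmann constant, J/K
T    = 300.0             # room temperature, K

E_bit = k_B * T * math.log(2)   # Landauer energy per bit, ~2.9e-21 J
t_min = hbar / (2.0 * E_bit)    # Heisenberg-limited time per bit, in seconds

print(f"Energy per bit       : {E_bit:.2e} J")
print(f"Minimum time per flip: {t_min:.2e} s")   # ~1.8e-14 s, i.e. tens of femtoseconds

However the numbers are juggled, a finite energy budget translates into a finite rate of information processing.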
Thus there are limits on the computational power available even in principle. Predictions that are based on the known laws of physics but that would require more computation than these limits allow are simply not possible. In any case, predictions, which are always based on data of finite precision, cannot themselves have unlimited precision, not even for otherwise deterministic situations, as the sketch below illustrates.
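
The following Python sketch iterates the logistic map x → 4x(1−x), a textbook example of a deterministic but chaotic rule, from two starting values that differ only in the tenth decimal place; the map and the size of the perturbation are chosen here purely for illustration.

# Two trajectories of the deterministic logistic map x -> 4*x*(1 - x),
# started from initial conditions that differ by 1e-10.
x, y = 0.3, 0.3 + 1e-10
for n in range(1, 61):
    x, y = 4*x*(1 - x), 4*y*(1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")
# By around step 40 the two trajectories bear no resemblance to each other,
# even though the rule generating them is perfectly deterministic.

A tenth-decimal-place uncertainty in the initial data, far better than any real measurement, is enough to wipe out predictability within a few dozen steps.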

The implication is that, beyond a certain level of 'computational complexity', new, unexpected, organizing principles may arise: If the known fundamental physical laws cannot completely determine the future states of a complex system, then higher-level laws of emergence may come into operation.


A striking example of this type of 'strong emergence' is the origin and evolution of life. Contrary to what David Chalmers says, it is a computationally intractable problem. Therefore new, higher-level laws, different from the bottom-level laws of physics and chemistry, may have played a role in giving genes and proteins the functionality they possess.

Complex systems usually have a hierarchical structure. The new principles and features observed at a given level of complexity may sometimes be deducible from those operating at the level just below it (local constructionism). Similarly, starting from a given observed level of complexity, one can sometimes work backwards and infer the level just below it (local reductionism). Such reductionism and constructionism have only limited, local ranges of applicability for complex systems. 'Chaotic systems' provide a particularly striking example of this. The Laplace demon cannot, for example, predict the long-term weather of a chosen region, nor can he start from the observed weather pattern at a given instant and work backwards to the positions and momenta of all the molecules involved.
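
The point about weather can be made concrete with the Lorenz system, a three-variable caricature of atmospheric convection and the original example of deterministic chaos. The sketch below uses a crude Euler integration and starting states chosen purely for illustration; it shows two almost identical 'atmospheres' drifting apart.

# Lorenz system: a three-variable toy model of convection, integrated with
# a simple Euler step from two starting points that differ by 1e-8 in x.
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt*dx, y + dt*dy, z + dt*dz)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)
for step in range(1, 40001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 10000 == 0:
        sep = sum((p - q)**2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:5.1f}: separation = {sep:.3e}")
# The separation grows roughly exponentially until the two 'forecasts' are as
# different as two randomly chosen states on the attractor.

Running the demon's calculation forwards is hopeless beyond a short horizon, and reconstructing the past molecular state from a coarse-grained weather pattern is no better.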

A common thread running through the behaviour of all complex systems is the breakdown of the principle of linear superposition: Because of the nonlinearities involved, a linear superposition of two solutions of an equation describing a complex system is not necessarily a solution. This fact lies at the heart of the failure of the reductionistic approach when it comes to understanding complex systems.
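
The breakdown of superposition can be seen in a one-line calculation. Take, purely as an illustration, the simple nonlinear equation $\dot{u} = u^2$. If $u_1$ and $u_2$ are both solutions, their sum $w = u_1 + u_2$ satisfies
\[
\dot{w} = \dot{u}_1 + \dot{u}_2 = u_1^2 + u_2^2,
\qquad \text{whereas} \qquad
w^2 = u_1^2 + 2u_1 u_2 + u_2^2 .
\]
The cross term $2u_1u_2$ spoils the equality, so $w$ is not in general a solution. For a linear equation such as $\dot{u} = au$ no such cross term arises, and superposition holds.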

THE WHOLE CAN BE MORE THAN (AND DIFFERENT FROM) THE SUM OF ITS PARTS.
