
Saturday 30 November 2013

108. 'Irreducible Complexity' Deconstructed, or Why There Can be Art without an Artist


Humans can be funny. Given two situations of comparable complexity, they have no difficulty in accepting one as having arisen spontaneously (with no designer or controller or creator involved), but they often insist on invoking a designer or creator ('God') for explaining the other. Consider the world economy. We all agree that it is extremely complex, and also that there is no central authority which designed what we see today. It has just evolved in complexity over time, and is self-regulating.

There are sovereign nations, each with an economy of its own. There is trade among the nations, and within every nation. There are independent stock markets, various currencies, and exchange rates which vary on a daily basis, with no central authority controlling the rates at will. Things look so well regulated that Adam Smith made his famous (but figurative) statement about there being an 'invisible hand' behind all the order and complexity of modern economy.



But come to the spontaneous evolution of biological complexity, and suddenly many people stop being rational and postulate a God who must have created the complex life forms. This teleological argument was advanced by, among others, the British theologian William Paley. Suppose you are walking on a beach or across an uncultivated field, and you come across a piece of rock. You find nothing striking about it (i.e., you do not think of a creator of the rock), and move on. Next you see a watch. You are quite clear in your head that the watch must have been made by a watchmaker: where there is a watch there must be a watchmaker, because the watch shows evidence of design. Paley argued that, similarly, all the biological complexity we see around us points to the existence of a creator and designer, namely God.



People who subscribe to this line of reasoning have hijacked the phrase 'irreducible complexity' (IC) to make the point that, for example, a DNA molecule has a complexity which cannot be explained by, or reduced to, evolution from simpler molecules. DNA controls the synthesis of proteins, and yet proteins must have pre-existed for the creation of DNA. Therefore, so the argument goes, God must be invoked to explain such IC.

[I say 'hijacked' because the scientific meaning of IC actually pertains to logical randomness (cf. Part 67): An irreducibly complex system is one for which no compression of information is possible, meaning that its apparent information content is not substantially more than that of the algorithm or theory or mathematical formula needed for expressing or explaining it; the apparent complexity cannot be reduced by discovering a rule or law that generated it, because none exists.]
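
To make this compression-based sense of 'irreducible complexity' concrete, here is a rough sketch of my own (not from Part 67), using a general-purpose compressor as a crude stand-in for the 'shortest algorithm'; the strings and lengths are arbitrary choices:

```python
import random
import zlib

# A patterned string is generated by a tiny rule, so a compressor can shrink it
# drastically: its apparent complexity is reducible to that rule.
patterned = ("01" * 50_000).encode()

# A pseudorandom string of the same length has no such rule and barely compresses:
# in the algorithmic sense, it is irreducibly complex.
random.seed(42)
noise = bytes(random.getrandbits(8) for _ in range(100_000))

print(len(patterned), "->", len(zlib.compress(patterned)))   # huge compression
print(len(noise), "->", len(zlib.compress(noise)))           # almost none
```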

Richard Dawkins demolished the if-there-is-a-watch-there-must-be-a-watchmaker argument at length in his 1986 book The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design. If God created the irreducibly complex life forms, he must have been endowed with a still greater amount of complexity, information, and intelligence. Who created that? How can you explain an observed complexity by invoking an even greater first-cause complexity? It makes far better sense to assert (and prove) that the degree of complexity rises incrementally, starting from less complex entities, aided by Darwinian natural selection and the availability of low-entropy or 'high-quality' energy from the Sun, all along remembering that we are dealing with thermodynamically open systems here (cf. Part 6).

Darwinian natural selection is not a totally random process. It acts on the heritable variation produced by random mutations (with genetic drift adding further randomness to gene frequencies), and it favours those individuals which have more of the adaptive traits useful for survival and reproduction.

The probability for the spontaneous evolution of the existing life forms is indeed very low. But a huge number of alternative life forms could also have evolved. There is an enormous number of evolutionary pathways that organisms could have taken, and any of them could have been taken. This huge number should be multiplied by the very low probability of evolution of the existing life forms; then you get a much higher overall probability that some complex life forms or other would evolve.

To understand this better, consider an analogy. You go out to the market place and see many people. Let us focus on one of them. There is nothing miraculous about the fact that you saw that person. Now wind back in time a little, so that you are back home and, before going out, you calculate the probability of meeting that particular person. A very low probability indeed. And yet you saw that person. It looks 'miraculous' only if you calculate the probability after the event, i.e. after singling out a particular person! It is downright silly to think that cosmic forces conspired and somebody pre-ordained that you would see that particular stranger today.

Feynman used to make fun of such tendencies. Here is what Bill Bryson (2003) wrote about him:

The physicist Richard Feynman used to make a joke about a posteriori conclusions - reasoning from known facts back to possible causes. ‘You know, the most amazing thing happened to me tonight,’ he would say. ‘I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!’ His point, of course, is that it is easy to make any banal situation seem extraordinary if you treat it as fateful.

Another analogy will be helpful. Consider a deck of 52 playing cards. Any of the cards can be at the bottom. For each such possibility, there are 51 ways of choosing the second card for placing on top of the first. There are 50 possibilities for what the third card can be, and so on. The total number of possibilities is 52 x 51 x 50 x ... x 3 x 2 x 1. This works out to ~10^68 ways in which the cards can be stacked one above the other, an enormous number indeed. There is only a 1 in 10^68 chance that any particular stacking sequence will occur. And yet it would be stupid to think that a miracle has occurred because we have observed a particular configuration and not another.
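
For anyone who wants to verify the arithmetic, here is a two-line check (the ~10^68 figure is just 52 factorial):

```python
import math

orderings = math.factorial(52)   # number of distinct orderings of a 52-card deck
print(orderings)                 # a 68-digit integer
print(f"{orderings:.3e}")        # ~8.066e+67, i.e. of the order of 10^68
```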

I recommend the book Irreligion: A Mathematician Explains Why the Arguments for God Just Don't Add Up by John Allen Paulos (2008) for becoming alert to the probabilistic fallacies that people often commit.

A flower is a work of art, but there is no artist involved. The flower evolved from lesser things, which in turn evolved from still lesser things, and so on.




The following video ('Dawkins makes an eye') will also be helpful in convincing you about the main theme of this post. The human eye has long been used by creationists as strong evidence for the irreducible-complexity argument: the eye has to be present as a whole to be of any benefit to the creature (so the Creator must have created it in one go); you cannot have half an eye and still have vision, so the argument went. Dawkins demolishes this argument very patiently in the video.




Friday 22 November 2013

107. Emergence of Symmetry in Complex Networks


'The ubiquity of symmetry in disparate real-world systems suggests that it may be related to generic self-organizational principles' (MacArthur & Anderson 2006).



Symmetry is supreme, and real-life complex networks are no exception to this aspect of natural phenomena. Many of them are rich in symmetry, and exploring the origins of that symmetry can provide insights useful for modelling the dynamics and topology of a network (see my book Wadhawan (2011) for a literature survey of this topic; if you would like a free soft copy, please write to me at vkw1412@gmail.com).



In network theory, the ‘symmetry transformations’ under which a network remains ‘invariant’ are usually taken to be the appropriate permutations of vertices. And by ‘invariance’ we mean the invariance of the adjacency of the vertices of the network. Thus a network structure is said to possess symmetry if certain permutations of its vertices do not change the adjacency of the vertices.

Symmetry analyses of networks have led to the important result that a 'similar linkage pattern' is the underlying factor responsible for the manifestation of symmetry in networks (Xiao et al. 2008).

I now introduce some formal definitions in the context of the symmetry of graphs or networks. Let G be the graph underlying a network, and let V be its set of vertices. A one-to-one mapping of V onto itself is called a permutation. Let S(V) be the set of all permutations possible on V. In the set S(V) there can be some permutations which preserve the adjacency of every vertex in V. These permutations are called the automorphisms of the graph G. The set of all automorphisms of G is denoted by Aut(G).

A network is said to be symmetric if its underlying graph G has at least one automorphism which is not an identity or trivial permutation. If the identity permutation is the only automorphism of G, then the graph and the network are asymmetric.

Since permutations are mappings, one can define successive permutations or products of permutations. A product of two permutations f and g (written as fg) is another permutation h (h = fg). h is the mapping which takes the vertex set V onto itself, and this mapping has the same effect on a vertex x as if we first applied f on x (getting xf), and then applied g on xf (getting (xf)g). Thus, xh = (xf)g. It can be shown that the set of automorphisms under the product of permutations forms a group, called the automorphism group.

Given the automorphism group Aut(G) for the vertex set V, we can create a partition P = {V1, V2, ..., Vk} such that a vertex x is equivalent to a vertex y if and only if Aut(G) contains a permutation g such that xg = y. Such a partition is called the automorphism partition. Each cell of this partition is called an orbit of Aut(G). A trivial orbit contains only a single vertex; otherwise it is a nontrivial orbit.

The above figure is an example of a symmetric graph (Xiao et al. 2008). There are 7 vertices, so the possible number of permutations is 7!, or 5040. Clearly, the adjacency of vertex 3 is different from that of vertex 4, so a permutation that interchanges 3 and 4, and does nothing else, is not a symmetry transformation; it is not an automorphism of this graph. An example of a permutation that is an automorphism is the one in which only vertices 1 and 2 are interchanged. Thus vertices 1 and 2 belong to the same orbit. Similarly, the vertices 5, 6 and 7 form a subset of the graph within which any interchange is an automorphism, so these three vertices all belong to another orbit. We can find all the orbits, and write the automorphism partition as

P = {{1,2}, {3}, {4}, {5,6,7}}
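
For readers who want to experiment, here is a small sketch using the Python networkx library. Since the figure itself is not reproduced here, the edge list below is a hypothetical 7-vertex graph chosen so as to have the orbit structure just described:

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

# Hypothetical 7-vertex graph with the orbit structure discussed in the text
G = nx.Graph([(1, 3), (2, 3), (3, 4), (4, 5), (4, 6), (4, 7),
              (5, 6), (5, 7), (6, 7)])

# An automorphism of G is an isomorphism of G onto itself
automorphisms = list(GraphMatcher(G, G).isomorphisms_iter())
print(len(automorphisms))        # 12 = 2! x 3!: swap {1,2}, permute {5,6,7}

# Vertices x and y lie in the same orbit if some automorphism maps x to y
orbit_of = {v: set() for v in G}
for f in automorphisms:
    for x, y in f.items():
        orbit_of[x].add(y)
cells = {frozenset(c) for c in orbit_of.values()}
print(sorted(sorted(c) for c in cells))   # [[1, 2], [3], [4], [5, 6, 7]]
```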

Measures of symmetry of networks

If a graph is shown to be symmetric, the next question is: how symmetric is it? I describe two measures of such symmetry here (Xiao et al. 2008).

The order |Aut(G)| of the automorphism group of a graph, denoted α_G, is clearly a measure of the extent of symmetry in the graph. We should normalize it to be able to compare the symmetries of graphs of different orders N. The following is one such normalized measure of symmetry (MacArthur and Anderson 2006):

β_G = (α_G)^(1/N)

It is intuitively clear from the figure above that a network or graph with more nontrivial orbits is more symmetric. Xiao et al. (2008) have used a measure of symmetry which accounts for this:

γ_G = [∑_{1≤i≤k, |V_i|>1} |V_i|] / N

Here V_i is the i-th orbit (cell) in the automorphism partition, k is the number of cells in the partition, and the sum runs over the nontrivial orbits only (those with more than one vertex). γ_G is thus the fraction of vertices that lie in nontrivial orbits.
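
Continuing the networkx sketch given earlier (the same hypothetical graph G, its list of automorphisms, and its orbit cells), both measures can be computed in a few lines:

```python
# Continuing the sketch above: G, automorphisms and cells are as computed there
N = G.number_of_nodes()
alpha_G = len(automorphisms)                               # |Aut(G)| = 12
beta_G = alpha_G ** (1.0 / N)                              # ~1.43 for this graph
gamma_G = sum(len(c) for c in cells if len(c) > 1) / N     # 5/7 ~ 0.71
print(alpha_G, beta_G, gamma_G)
```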

I have reviewed the symmetry aspect of complex networks in substantial detail in my book LATENT, MANIFEST, AND BROKEN SYMMETRY: A Bottom-up Approach to Symmetry, with Implications for Complex Networks (2011). I have argued that symmetry emerges in complex networks for the same reasons for which it arises in, say, crystals. In crystals it is 'equal placement of equal parts', and in nets it is 'similar linkage patterns'. The second law of thermodynamics for open systems is the mother of all symmetrizing and other organizing principles governing natural phenomena (cf. Part 6).




Thursday 14 November 2013

106. Scale-Free Networks


'This unparalleled license of expression, coupled with diminished publishing costs, makes the Web the ultimate forum of democracy; everybody’s voice can be heard with equal opportunity. Or so insist constitutional lawyers and glossy business magazines. If the web were a random network, they would be right. But it is not. The most intriguing result of our Web-mapping project was the complete absence of democracy, fairness, and egalitarian values on the Web. We learned that the topology of the Web prevents us from seeing anything but a mere handful of the billion documents out there' (Barabási, Linked).



A scale-free network is one in which a few of the nodes have a much larger number of connections than the others. In a regular network every node has the same degree, and in a random network the number of edges per node has a peaked, bell-shaped (approximately Poisson) distribution; in both cases the mean degree sets the characteristic scale of the network. In a scale-free network, by contrast, there are a few strongly connected nodes and a large number of weakly connected ones. Typically, the degree distribution follows a power law:

P(k) ~ k^(-γ)

The exponent γ generally lies between 2 and 3. Most of the complex networks in Nature have a power-law degree distribution (cf. Part 83).

Since the distribution function P(k) does not show a characteristic peak (unlike the peaked distributions of random and regular networks), the network is described as scale-free. For such a network, the average distance between any two nodes is almost as small as for a random network, and yet its clustering coefficient is practically as large as for a regular network. In other words, scale-free networks exhibit cliquishness, like small-world networks (cf. Part 105).
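
As a quick numerical illustration (the sizes and parameters below are arbitrary), one can compare the degree distribution of a random graph with that of a scale-free graph of similar size and mean degree using networkx:

```python
import networkx as nx
from collections import Counter

N = 10_000
er = nx.erdos_renyi_graph(N, p=6 / N, seed=1)    # random graph, mean degree ~6
ba = nx.barabasi_albert_graph(N, m=3, seed=1)    # scale-free graph, mean degree ~6

er_degrees = [d for _, d in er.degree()]
ba_degrees = [d for _, d in ba.degree()]
print(max(er_degrees))                       # stays close to the mean: no hubs
print(max(ba_degrees))                       # far above the mean: a few big hubs
print(Counter(ba_degrees).most_common(3))    # most nodes have only a few links
```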

Power-law degree-distribution models for random networks

Often one can bring a random-network model closer to reality by introducing a power-law degree distribution into it. Such a topologically modified network is still random in the sense that the edges still connect randomly selected nodes. But a constraint is introduced that the degree distribution function P(k) must follow a power law (Albert and Barabási 2002).
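
One common recipe is the configuration model: draw a degree sequence from a power law, and then pair up edge 'stubs' at random. A minimal networkx sketch (the exponent and size are arbitrary choices):

```python
import random
import networkx as nx

random.seed(0)
N, gamma = 5_000, 2.5
# P(k) ~ k^(-gamma) has a complementary CDF ~ k^(-(gamma-1)), so a Pareto variate
# with parameter gamma-1 gives integer degrees with the desired heavy tail.
degrees = [min(int(random.paretovariate(gamma - 1)), N - 1) for _ in range(N)]
if sum(degrees) % 2:                 # total degree must be even to pair the stubs
    degrees[0] += 1

G = nx.configuration_model(degrees, seed=0)
G = nx.Graph(G)                                # collapse parallel edges
G.remove_edges_from(nx.selfloop_edges(G))      # discard self-loops
print(max(degrees), G.number_of_edges())
```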

Real-life networks evolve with time. It has been observed in a wide variety of complex networked systems that their topology evolves to achieve higher and higher degrees of robustness (see below) against attacks and failures (Albert and Barabási 2002; Barabási 2003). How does this happen? How does the power-law degree distribution arise? What mechanisms result in the coexistence of large clustering coefficients and small path lengths?

We have seen in Part 105 that the Watts-Strogatz model captures some of the important features of many real networks, even though it does not reproduce the power-law or scale-free feature characteristic of many real networks. The generalized random-graph model, on the other hand, is able to introduce power-law degree-distribution behaviour.

The ad hoc ways in which these models are introduced do not shed light on the underlying mechanisms operative in the evolution of the topology of real networks. What is needed is to model, not just the network topology, but the assembly of the network and its evolution. This was first done by Barabási and Albert (1999).

The Barabási-Albert model

The BA model captures some essential features of network dynamics, and can reproduce the scale-free connectivity exhibited by many real networks, by doing away with two assumptions made in the models we have discussed so far. We have assumed that the order N of a network is constant. We have also assumed that the connection probability p does not depend on which two nodes are selected for connection. What usually happens in the real world, however, is that, firstly, networks grow with time by the addition of new nodes (e.g. the World Wide Web grows exponentially with time by the continuous addition of new web pages); and secondly, there is a likelihood of preferential attachment to certain nodes (e.g. when a new manuscript is prepared for publication, there is a greater chance that well-known papers will be cited, rather than less frequently cited average-quality papers).

The growth feature in the BA model is introduced as follows. One starts with a small number m0 of nodes, and at every time step a new node having m (≤m0) edges is added. Thus there are N = m0 + t nodes after t time steps.

And the preferential attachment of new nodes is carried out as follows: it is assumed that the probability Π that the new node will be connected to an existing node i is proportional to the degree k_i of that node:

Π(k_i) = k_i / ∑_j k_j

Thus there are mt edges in the network after t time steps.

Numerical simulations demonstrate that the BA network evolves into a scale-free network, in which the probability that a node has k edges follows a power law with exponent γ_BA = 3.
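
Here is a minimal from-scratch sketch of the growth-plus-preferential-attachment rule described above (the parameter values are arbitrary, and this is only an illustration of the mechanism, not a reproduction of the original BA simulations):

```python
import random

def ba_network(m0, m, t, seed=0):
    """Start from a small ring of m0 nodes; at each time step add one new node
    with m edges, attached with probability Pi(k_i) = k_i / sum_j k_j."""
    random.seed(seed)
    edges = [(i, (i + 1) % m0) for i in range(m0)]
    # Each node appears in 'stubs' once per unit of its degree, so a uniform
    # choice from 'stubs' implements preferential attachment.
    stubs = [v for e in edges for v in e]
    for step in range(t):
        new = m0 + step
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(stubs))
        for v in targets:
            edges.append((new, v))
            stubs += [new, v]
    return edges

edges = ba_network(m0=5, m=3, t=20_000)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print(len(degree), max(degree.values()))   # a few hubs far above the mean degree
```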

The properties of real networks can differ substantially from what is predicted by the BA model. For example, the power-law exponent for the actual degree distribution may lie anywhere between 1 and 3. Even non-power-law features like an exponential cutoff, or saturation for small k, are observed sometimes. Evolving networks in Nature can develop both power-law and exponential degree distributions. Certain preferential attachments, aging effects, and growth constraints can make a network cross over from power-law to exponential-degree distribution. Various refinements of the BA model have been investigated (cf. Albert and Barabási 2002).

Robustness of networks

Networked complex adaptive systems display a remarkable degree of error tolerance. Naturally, this robustness aspect of networks has been investigated widely by computer simulation, as well as by analytical studies. The error tolerance of real networks has a dynamic aspect (Garlaschelli, Capocci and Caldarelli 2007) and a topological aspect. The topological aspect is easier to simulate on a computer. One may focus either on edges or on vertices, or on both. A network is defined as robust if it contains a giant cluster consisting of most of the nodes even after a fraction of its nodes are removed.

The progressive removal of edges at random is like an inverse edge-percolation problem (cf. Part 104). The removal of a node, on the other hand, has a more damaging effect on the network topology, since all the edges incident on that node also get removed.

Simulation studies have shown a strong correlation between robustness and topology. Scale-free networks are found to be generally more robust than random networks against random failures. However, if the most connected nodes are the targets of attack, a scale-free network is much more vulnerable than a random network (Albert, Jeong and Barabási 2000).
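
A minimal simulation sketch of this topological robustness test with networkx (the network size and removal fraction are arbitrary): remove a fraction of the nodes, either at random or in decreasing order of degree, and see what remains of the giant cluster.

```python
import random
import networkx as nx

def giant_fraction_after_removal(G, fraction, targeted, seed=1):
    """Remove a fraction of the nodes (randomly, or hubs first) and return the
    fraction of the original nodes still in the largest connected component."""
    random.seed(seed)
    H = G.copy()
    n_remove = int(fraction * G.number_of_nodes())
    if targeted:
        by_degree = sorted(G.degree, key=lambda nd: nd[1], reverse=True)
        victims = [n for n, _ in by_degree[:n_remove]]
    else:
        victims = random.sample(list(G.nodes), n_remove)
    H.remove_nodes_from(victims)
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

ba = nx.barabasi_albert_graph(5_000, 2, seed=1)
print(giant_fraction_after_removal(ba, 0.2, targeted=False))  # random failures
print(giant_fraction_after_removal(ba, 0.2, targeted=True))   # attack on the hubs
# The hub-targeted attack damages the giant cluster far more than random failure.
```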




Saturday 9 November 2013

105. Small-World Networks and the Six Degrees of Separation



'If we want to understand life – and ultimately cure disease – we must think networks' (Barabási, Linked).



It is observed that in most of the large networks, the distance or path-length between any two nodes does not increase in proportion to the number of nodes in the network. Perhaps the best known example of this is the 'six degrees of separation' feature discovered by Stanley Milgram in 1967. He found that between most pairs of people in the USA there is typically a network distance of just six acquaintances. This is commonly referred to as a small-world feature in a network. It can occur even in random networks (cf. Part 104), for which Erdős and Rényi had shown that the typical distance between any two nodes scales as the logarithm of the total number N of nodes.



Milgram's finding was questioned for some time, but later received extensive endorsement. A Microsoft team (E. Horvitz and J. Leskovec) studied the addresses of 30 billion Microsoft Messenger instant messages sent during a single month (June 2006). Two people were taken to be acquaintances if they had sent each other an instant message. It was found that any two persons are, on average, linked by seven or fewer acquaintances.

Another feature of social and many other networks is the occurrence of cliques. In social networks, cliques are groups of friends or acquaintances in which everyone knows every other member of the group. The clustering coefficient of a network serves as a good measure of this tendency to form clusters or cliques (Wasserman and Faust 1994; Watts 1999). Consider a particular node i, and suppose it is connected to k_i other nodes. If these k_i nodes formed a clique (which would naturally include the node i as well), there would be k_i(k_i - 1)/2 edges among them. Suppose the actual number of edges among them is E_i. Then the clustering coefficient of the node i is defined as

C_i = 2E_i / [k_i(k_i - 1)]

And the clustering coefficient C for the entire network is the average of all the C_i's.
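
A small sketch of this definition on a toy graph of my own choosing, checked against networkx's built-in implementation of the same formula:

```python
import networkx as nx

G = nx.Graph([(1, 2), (1, 3), (2, 3), (1, 4), (4, 5)])   # a toy graph

def clustering_of(G, i):
    """C_i = 2*E_i / (k_i*(k_i - 1)), taken as 0 when k_i < 2."""
    nbrs = list(G[i])
    k_i = len(nbrs)
    if k_i < 2:
        return 0.0
    E_i = sum(1 for a in nbrs for b in nbrs if a < b and G.has_edge(a, b))
    return 2 * E_i / (k_i * (k_i - 1))

print({i: clustering_of(G, i) for i in G})
print(nx.clustering(G))            # the same values
print(nx.average_clustering(G))    # the network-level C: the average of the C_i
```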

As we have seen in Part 104, for a random network, C = p. For practically all real networks, C is much larger than p, indicating that the small-world characteristic is indeed very common in complex networks.

Small-world networks have short path lengths like random networks, and have clustering coefficients that are quite independent of network size, like regular networks. This last feature of regular networks or graphs is well illustrated by a crystal lattice, in which the clustering coefficient is fixed, no matter what the size of the crystal is. Consider, for simplicity, a ring lattice, i.e. a 1-dimensional lattice with periodic boundary conditions. Let each node be connected to k other nodes nearest to it. It has a high clustering coefficient because most of the immediate neighbours of any lattice point are also neighbours of one another. Its clustering coefficient can be shown to be

C = [3(k-2)]/[4(k-1)]

A low-dimensional lattice like this does not have short path lengths. For a d-dimensional cubic lattice, the average distance between two nodes scales as N^(1/d); this is a much faster increase with N than the logarithmic increase typical of random networks, as also of most real-life networks.
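
Both statements can be checked numerically (the lattice size and k below are arbitrary); nx.circulant_graph builds exactly this kind of ring lattice, with each node joined to k/2 neighbours on either side:

```python
import networkx as nx

N, k = 1_000, 10
ring = nx.circulant_graph(N, list(range(1, k // 2 + 1)))   # k neighbours per node

print(nx.average_clustering(ring))             # ~0.667, independent of N
print(3 * (k - 2) / (4 * (k - 1)))             # 3(k-2)/(4(k-1)) = 0.666...
print(nx.average_shortest_path_length(ring))   # ~N/(2k) = 50: grows linearly with N
```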

The Watts-Strogatz model

As stated above, real small-world graphs or networks have short path lengths, unlike ordered or regular networks, and have clustering coefficients that are quite independent of network size, unlike random networks. Watts and Strogatz (WS) (1998) proposed a one-parameter model that explores the regime intermediate between random graphs and regular graphs. They started with a ring lattice of order N, in which each node is connected to its k nearest nodes, k/2 on either side. To ensure that they were dealing with a sparse but connected network, they assumed that N >> k >> ln(N) >> 1. Then they introduced randomness by rewiring each edge, with probability p, to a randomly chosen node. Thus even distant nodes have a chance of being connected. In fact, the procedure introduces, on average, pNk/2 long-distance edges connecting nodes which would otherwise have belonged to different neighbourhoods.



The rewiring probability p = 0 corresponds to perfect order or regularity, and p = 1 to complete randomness. Somewhere between these two limits a transition occurs from order to randomness (or chaos). This transition is of great interest in the context of complex systems, because this ‘edge of chaos’ is where complexity thrives. It is in the vicinity of the edge of chaos that the network exhibits a coexistence of clustering and short path lengths.

For a ring lattice, i.e. when p = 0, the average path length is

L(0) ≈ N/(2k) >> 1

And the average clustering coefficient is

C(0) ≈ 3/4

Thus the average path-length scales linearly with order, and the average clustering coefficient is large.

At the other extreme in the WS model, i.e. when p → 1, the model describes a random graph, for which

C(1) ≈ C_random ~ k/N << 1

and

L(1) ≈ L_random ~ ln(N)/ln(k)

Thus the clustering coefficient decreases with increasing order of the net, and the path length increases logarithmically with order.

The WS model has the feature that there is a substantial range of probabilities p for which large C values coexist with small L values, in agreement with what is observed in a variety of real networks. However, it is not able to reproduce the power-law degree-distribution exhibited by many real networks for large k. I shall deal with this aspect of networks in the next post.
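
This coexistence region can be seen directly in a small numerical sweep over p, using the WS construction available in networkx (the values of N, k and p below are arbitrary):

```python
import networkx as nx

N, k = 1_000, 10
ring = nx.connected_watts_strogatz_graph(N, k, 0.0, seed=1)   # the p = 0 ring lattice
C0 = nx.average_clustering(ring)
L0 = nx.average_shortest_path_length(ring)

for p in (0.001, 0.01, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(N, k, p, seed=1)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    print(f"p={p:<6} C/C(0)={C / C0:.2f}  L/L(0)={L / L0:.2f}")
# Over a wide range of p, L/L(0) is already small while C/C(0) remains close to 1.
```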