Thinking as a Hobby



In Which I Still Don't Understand Information Theory

I was thumbing through Kenneth Miller's new book Only a Theory: Evolution and the Battle for America's Soul in the bookstore yesterday, and I came across a chapter where he references this work on simulations of evolution that supposedly demonstrate how the mechanisms of evolution lead to a gain in information.

Click over there and look at the figure. It shows the information in the population starting near zero at the beginning of the run and rising throughout the run when selection is present. Results from another run are superimposed, showing how information decreases in the absence of selection pressure.
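
(For what it's worth, if the linked work is anything like Thomas Schneider's ev simulation, which Miller also discusses, "information" there means per-position conservation across the population: R = Hmax minus the observed entropy, summed over positions. Here's my rough sketch of that flavor of measure; the function name and toy populations are my own, and with small populations the raw numbers are biased upward, a bias Schneider corrects for:

from collections import Counter
from math import log2
import random

def r_sequence(sites):
    # Schneider-style information content of aligned sites:
    # sum over positions of (max entropy - observed entropy).
    # For a 4-letter DNA alphabet the maximum is log2(4) = 2 bits.
    n = len(sites)
    total = 0.0
    for i in range(len(sites[0])):
        counts = Counter(site[i] for site in sites)
        h = -sum((c / n) * log2(c / n) for c in counts.values())
        total += 2.0 - h
    return total

random.seed(0)
converged = ["ACGTACGTAC"] * 8   # selection has made every site identical
scrambled = ["".join(random.choice("ACGT") for _ in range(10)) for _ in range(8)]

print(r_sequence(converged))   # 20 bits: fully conserved at 10 positions
print(r_sequence(scrambled))   # near 0, inflated a bit by small-sample bias

Note the direction: by this convention, the more regular the population gets, the more "information" it has.)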

I don't get it.

I guess I'm just dense when it comes to information theory, because I've tried to understand it, but I'm still hitting a wall.

From what I understand, this string

00000000001111111111

has less information than this one

00101011010110001001

Why? Because the first string has more redundancy: it is more compressible, it admits a shorter description, and a simpler algorithm could reproduce it.
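
Here's a quick way to see that, using zlib as a crude stand-in for an ideal compressor (this is just my illustration; true Kolmogorov complexity is uncomputable, and strings this short are dominated by compressor overhead, so the longer pair shows the effect better):

import random
import zlib

regular = b"00000000001111111111"
irregular = b"00101011010110001001"
print(len(zlib.compress(regular, 9)), len(zlib.compress(irregular, 9)))

# The effect is clearer at scale: repetition compresses away, randomness doesn't.
random.seed(0)
long_regular = b"01" * 5000                  # 10,000 bytes of pure pattern
long_random = random.randbytes(10000)        # 10,000 bytes of noise (Python 3.9+)
print(len(zlib.compress(long_regular, 9)))   # a few dozen bytes
print(len(zlib.compress(long_random, 9)))    # roughly 10,000 bytes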

Does that then mean that the binary code for Minesweeper contains less information than a random bitstring of the same length? This is where I get messed up.

Is there a difference between bitstrings viewed without any context and bitstrings that are being interpreted by some sort of processor? Another example: take the sentence

The moon orbits the earth.

Does this string have more or less information than the string:

Xnpuq semzwrgj alkh oytcb.

The same logic from the bitstring example applies: the first string is more regular and more compressible, so by that reasoning it contains less information. So does information content depend on whether or not the string is being interpreted by a processor (i.e. a computer, a person, or the cellular machinery that builds proteins)?
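
Shannon's measure, at least applied naively to the raw characters, is blind to the difference. This little sketch (my own, nothing standard about it) actually scores the gibberish at least as high, because its letters are spread more evenly:

from collections import Counter
from math import log2

def bits_per_char(s):
    # Shannon entropy of the string's character frequencies.
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(bits_per_char("The moon orbits the earth."))   # meaningful English
print(bits_per_char("Xnpuq semzwrgj alkh oytcb."))   # gibberish, same length

The measure knows nothing about a reader, which is exactly what bothers me.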

I can see how mutations would increase information: by lengthening the genome, or by producing a novel sequence of the same length through inversions. But selection, in general, seems like it should reduce information, because it imposes greater regularity and redundancy. Random code is not going to be functional.
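
To poke at this directly, here's a toy experiment of my own (not the simulation Miller cites): evolve a population of bitstrings toward an arbitrary target and track per-position entropy with and without selection. Selection drives the entropy down as the population converges; drift alone keeps it near the 1-bit maximum. Whether that falling entropy counts as information gained (Hmax minus H, the convention the figure seems to use) or as redundancy, i.e. information lost, is exactly what I'm stuck on.

import random
from collections import Counter
from math import log2

random.seed(1)
LENGTH, POP, GENS, MU = 20, 50, 200, 0.01
TARGET = [random.randint(0, 1) for _ in range(LENGTH)]

def fitness(ind):
    # Number of bits matching the arbitrary target.
    return sum(a == b for a, b in zip(ind, TARGET))

def mutate(ind):
    # Flip each bit independently with probability MU.
    return [1 - b if random.random() < MU else b for b in ind]

def mean_entropy(pop):
    # Average Shannon entropy per position across the population.
    h = 0.0
    for i in range(LENGTH):
        counts = Counter(ind[i] for ind in pop)
        h += -sum((c / len(pop)) * log2(c / len(pop)) for c in counts.values())
    return h / LENGTH

def evolve(select):
    pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENS):
        parents = sorted(pop, key=fitness, reverse=True)[:POP // 2] if select else pop
        pop = [mutate(random.choice(parents)) for _ in range(POP)]
    return mean_entropy(pop)

print("with selection:   ", evolve(True))    # entropy falls toward 0
print("without selection:", evolve(False))   # entropy stays near 1 bit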

What am I getting wrong here?

