Thinking as a Hobby


Neuroevolution: Building and/or Training Artificial Neural Networks with the Power of Evolution

Springer just launched a new journal called Evolutionary Intelligence. The first issue is freely available online, and it contains a nice review article on neuroevolution, the term for approaches that use evolution to build and/or train artificial neural networks: networks of computing elements (usually simulated on a computer) that are inspired by biological neurons. The paper is called Neuroevolution: from architectures to learning, by Dario Floreano, Peter Dürr, and Claudio Mattiussi.

This is my area of research (I'm currently using evolutionary algorithms and artificial neural networks to model aspects of the neocortex), so I was happy to see this paper in the inaugural issue of the journal. In 1999, Xin Yao wrote what is probably still the most comprehensive review of the subject, in a paper called Evolving artificial neural networks.

This new paper classifies such approaches along slightly different lines and includes some more recent approaches, so it's a nice companion paper to Yao's.

By way of brief review, an evolutionary algorithm is an optimization method that borrows its dynamics from biological evolution. The algorithm is usually initialized with a population of candidate solutions that contains variation. Some subset of the population is selected based on fitness criteria, and the rest of the population is killed off. The survivors reproduce via mutation and sometimes crossover, and then the whole cycle starts over again, until satisfactory individuals are found or you get tired of running the algorithm.
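That loop is short enough to show in full. Here's a minimal, deliberately generic sketch in Python; the parameter values, the truncation-style selection, and the toy fitness function are my own choices for illustration, not anything from the paper:

```python
import random

random.seed(0)  # deterministic run for the sketch

def evolve(fitness, genome_len=5, pop_size=20, n_survivors=5,
           sigma=0.1, generations=100):
    """Minimal evolutionary loop: select the fittest, mutate to refill."""
    # Initialize with a population that contains variation.
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest subset; the rest is killed off.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:n_survivors]
        # Reproduction: survivors produce offspring via Gaussian mutation.
        offspring = [[g + random.gauss(0, sigma)
                      for g in random.choice(survivors)]
                     for _ in range(pop_size - n_survivors)]
        pop = survivors + offspring
    return max(pop, key=fitness)

# Toy fitness: maximize -sum(x^2), i.e. drive every gene toward zero.
best = evolve(lambda g: -sum(x * x for x in g))
```

Real neuroevolution systems differ mainly in what the genome encodes and how it is decoded into a network, which is exactly what the paper's taxonomy is about.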

When applied to artificial neural networks (ANNs), evolutionary algorithms are used to modify the connection weights (which are analogous to biological synapses), to find good topologies (arrangements of artificial neurons and connections), or both.
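For the weight-evolution case, the genome can simply be a flat vector of weights for a fixed network. A sketch, with the 2-2-1 topology and the XOR task being my own illustration rather than anything from the paper:

```python
import math

# Hypothetical fixed topology: 2 inputs -> 2 hidden -> 1 output, with a
# bias on each non-input neuron. That's (2 weights + 1 bias) per hidden
# neuron plus (2 weights + 1 bias) for the output: 9 genes total.
N_GENES = 9

def decode_and_run(genome, x1, x2):
    """Map genes 1-to-1 onto connection weights, then run the net."""
    w = iter(genome)
    h = [math.tanh(x1 * next(w) + x2 * next(w) + next(w))
         for _ in range(2)]
    return math.tanh(h[0] * next(w) + h[1] * next(w) + next(w))

def fitness(genome):
    """Negative squared error on XOR: higher is better."""
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return -sum((decode_and_run(genome, a, b) - t) ** 2
                for (a, b), t in cases)
```

A fitness function like this could be dropped straight into an evolutionary loop; topology-evolving approaches instead put the network's structure itself (or rules for growing it) under evolutionary control.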

Floreano et al. use three categories to describe this approach:

1) Direct Representations

These approaches use a 1-to-1 mapping between the genotype and the phenotype. This means that each gene in the genotype (which can be thought of as the set of instructions for building the organism) specifies exactly one trait in the phenotype (in this case the ANN).

A good analogy might be the instructions for building a fence. In a direct encoding, the instructions might tell you to cut a piece of wood with certain dimensions and put it at a particular place. Then for the second piece of wood, the exact dimensions and exact placement would again be given, and so on. This is in contrast to an indirect encoding, in which the instructions might contain a procedural rule (e.g., cut 40 pieces of wood, place the first one at the starting point, then space them 10 cm apart along a given line). In other words, indirect encodings are more compact representations that either contain rules or require information from sources other than the genotype (such as the environment) to construct the final product.

Direct encodings are implausible for modeling and optimizing large networks. The human genome has on the order of 20,000 genes, and yet the human neocortex is made up of around 20 billion neurons and trillions of connections. There is obviously an indirect mapping.

Floreano et al.'s other two categories fall under indirect encodings:

2) Developmental Representations

These are indirect encodings that specify rules for building the network, rather than explicitly encoding every single feature of the phenotype. Genes can encode all sorts of procedural rules for constructing the network, such as rules governing the growth of connections along tree-like paths. (For more examples, have a look at the paper.)
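A toy sketch of the idea, loosely in the spirit of developmental systems like Gruau's cellular encoding (the two rules here are my own invention): the genome is a small program of growth rules, and the network is grown by executing it rather than read off gene-by-gene.

```python
def develop(genome):
    """Grow a network from a single seed node by applying growth rules."""
    nodes, edges = [0], []
    for rule in genome:
        if rule == "SPLIT":
            # Every existing node sprouts a child connected to it.
            new = []
            for n in list(nodes):
                child = len(nodes) + len(new)
                new.append(child)
                edges.append((n, child))
            nodes.extend(new)
        elif rule == "LOOP":
            # Connect each node back to the seed (a recurrent motif).
            edges.extend((n, 0) for n in nodes if n != 0)
    return nodes, edges

# Three rules expand one seed node into a 4-node recurrent network.
nodes, edges = develop(["SPLIT", "SPLIT", "LOOP"])
```

Note that a three-gene genome here produces a network whose size depends on how often the rules fire, so genome length and network size are decoupled.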

3) Implicit Encoding

The authors describe this approach as analogous to the mechanisms of biological gene regulatory networks (GRNs):


In biological gene networks, the interaction between the genes is not explicitly encoded in the genome, but follows implicitly from the physical and chemical environment in which the genome is immersed.


I think the conceptual distinction here is that genomes in implicit encodings do not contain all the information necessary to build the resulting network. Some percentage of that information is in the environment in which the developing network is placed.
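To give a flavor of that, here is my own much-simplified sketch, loosely inspired by GRN-style schemes such as Mattiussi's analog genetic encoding: the genome only lists a chemical-like tag per neuron, and no connection is written down anywhere; connectivity emerges from how the tags interact under the environment's matching rule.

```python
genome = ["AACG", "AACT", "GGTT", "AACC"]  # one tag per neuron

def affinity(tag_a, tag_b):
    """Environmental rule: count matching characters (a stand-in for
    chemical binding between gene products)."""
    return sum(a == b for a, b in zip(tag_a, tag_b))

def develop(genome, threshold=3):
    """Connect any two neurons whose tags bind strongly enough."""
    return [(i, j)
            for i in range(len(genome))
            for j in range(len(genome))
            if i != j and affinity(genome[i], genome[j]) >= threshold]

# Neuron 2's tag binds with nothing, so it ends up isolated.
edges = develop(genome)
```

Change the matching rule (the "environment") and the same genome develops into a different network, which is exactly the property the quoted passage describes.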

Floreano et al. then go on to talk about other attributes of ANNs, such as the level of detail in the artificial neurons (e.g., whether or not the neurons "spike", generating something like the action potential of real neurons). They also discuss the evolution of learning rules and learning architectures.

I'm not sure the categories they describe are the best way to classify neuroevolutionary approaches. The distinction between developmental representations and implicit encodings seems like a blurry one. I'm personally designing an approach that includes rules specifying where to locate traits in space and constraining how artificial neurons are connected. Once the network is constructed, the topology is modified based on input, so the approach seems to straddle developmental and implicit encodings.

Still, if you're at all interested in this area, you should give the paper a read.

Floreano, D., Dürr, P., Mattiussi, C. (2008). Neuroevolution: from architectures to learning. Evolutionary Intelligence, 1(1), 47-62. DOI: 10.1007/s12065-007-0002-4

