The Genetic Algorithm attempts to find an optimal solution to a given problem by generating a random population of candidate solutions and evolving it towards a “best” solution. This evolutionary process is closely inspired by the work of Mother Nature, which led, over the last few billion years, to the current panorama of living creatures.
The process of Evolution by Natural Selection was first described by Charles Darwin in his book “On the Origin of Species”, one of the most inspiring scientific masterpieces ever written.
If you are still wondering why I chose to implement my own version of the Genetic Algorithm, then read my previous article about Darwinian Evolution.
Otherwise, let’s go ahead and dissect the details of the Genetic Algorithm.
The Genetic Algorithm consists of the following steps:
- Given a model representing the set of possible solutions to a given problem, start by creating a random population of different instances of the model. Each of these instances is referred to as an individual.
- Given a fitness function that quantifies how good a given individual is at solving the problem at hand, rank the population from the most to the least fit individuals.
- Construct the next generation of the population by crossing over individuals from the current generation. Each cross-over works by selecting two parents with a probability that increases with their fitness and combining them in a certain way to form a new individual. The details of the combination depend on the model at hand. For example, for a model described by a set of parameters, the combination might consist of selecting the value of each parameter at random from one or the other parent.
- Introduce occasional mutations in the next generation by selecting a few of its individuals and modifying them slightly. Again, the details of this process depend on the model at hand. For a model described by a set of parameters, this step might consist of adding random noise to the parameters of individuals selected at random with some small probability.
- Evaluate the fitness of the next generation: if a candidate solution meets a predefined criterion, select it as the solution to our problem. Otherwise, repeat the above steps starting from the ranking step.
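The steps above can be sketched in C++. This is a minimal illustration, not my actual implementation: the vector-of-doubles model, the distance-to-target fitness and the rank-biased parent selection are all assumptions made for the example.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Toy model: an individual is a vector of parameters, and fitness rewards
// closeness to a known target (an illustrative assumption only).
using Individual = std::vector<double>;

double fitness(const Individual& ind, const Individual& target) {
    double d = 0.0;
    for (std::size_t i = 0; i < ind.size(); ++i)
        d += (ind[i] - target[i]) * (ind[i] - target[i]);
    return -d;  // higher is better
}

Individual evolve(const Individual& target, std::size_t popSize, int maxGenerations,
                  double mutationRate, double mutationSize, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(-1.0, 1.0);
    std::uniform_real_distribution<double> prob(0.0, 1.0);
    std::normal_distribution<double> noise(0.0, mutationSize);
    std::geometric_distribution<std::size_t> pick(0.3);  // favors low (fit) ranks

    // Step 1: create a random initial population.
    std::vector<Individual> pop(popSize, Individual(target.size()));
    for (auto& ind : pop)
        for (auto& p : ind) p = uni(rng);

    auto better = [&](const Individual& a, const Individual& b) {
        return fitness(a, target) > fitness(b, target);
    };

    for (int gen = 0; gen < maxGenerations; ++gen) {
        // Step 2: rank the population from the most to the least fit.
        std::sort(pop.begin(), pop.end(), better);

        // Step 5: stop if the best candidate meets the criterion.
        if (fitness(pop[0], target) > -1e-3) break;

        // Steps 3 and 4: cross-over with fitness-biased parent selection,
        // plus occasional mutation.
        std::vector<Individual> next;
        next.push_back(pop[0]);  // elitism: keep the current best
        while (next.size() < popSize) {
            const Individual& a = pop[std::min(pick(rng), popSize - 1)];
            const Individual& b = pop[std::min(pick(rng), popSize - 1)];
            Individual child(target.size());
            for (std::size_t i = 0; i < child.size(); ++i)
                child[i] = prob(rng) < 0.5 ? a[i] : b[i];  // each parameter from a parent
            if (prob(rng) < mutationRate)
                for (auto& p : child) p += noise(rng);     // small random modification
            next.push_back(std::move(child));
        }
        pop = std::move(next);
    }
    std::sort(pop.begin(), pop.end(), better);
    return pop[0];  // best individual found
}
```

The elitism line (carrying the current best into the next generation unchanged) is one common design choice among many; it guarantees that the best fitness never degrades between generations.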
This sounds simple and, to a certain extent, very generic with respect to the type of problems that can be solved by such an algorithm. There are, however, a few things to keep in mind:
Statement of the problem: One of the intriguing features of the Genetic Algorithm is that it does not care about the details of the problem it is trying to solve. All it cares about is the fitness function. For this reason, my implementation uses an abstract interface to represent the model, the details of which are implemented separately.
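Concretely, the separation can look like this. This is a hypothetical sketch: the names `Model` and `ScalarModel` and the method signatures are illustrative, not the actual interface of my implementation.

```cpp
#include <memory>
#include <random>

// The algorithm only ever talks to this interface; the details of the
// problem live in the concrete implementations.
class Model {
public:
    virtual ~Model() = default;
    virtual double fitness() const = 0;  // how good this individual is
    virtual std::unique_ptr<Model> crossOver(const Model& other) const = 0;
    virtual void mutate(double size) = 0;  // small random modification
};

// Toy implementation: a single parameter x, with fitness peaking at x == 3.
class ScalarModel : public Model {
public:
    explicit ScalarModel(double x) : x_(x) {}
    double fitness() const override { return -(x_ - 3.0) * (x_ - 3.0); }
    std::unique_ptr<Model> crossOver(const Model& other) const override {
        const auto& o = static_cast<const ScalarModel&>(other);
        return std::make_unique<ScalarModel>(0.5 * (x_ + o.x_));  // average the parents
    }
    void mutate(double size) override {
        static std::mt19937 rng{12345};
        x_ += std::normal_distribution<double>(0.0, size)(rng);
    }
    double x() const { return x_; }
private:
    double x_;
};
```

With such an interface, the evolution loop itself never needs to know whether it is evolving numbers, strings or neural networks.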
Variability: this principle requires the initial population to be as diverse as possible. Variability is what allows new solutions to form based on those that have already been explored, and it must be maintained throughout the steps of the algorithm to ensure that the full phase space of possible solutions is explored. The mutation step is what helps maintain it.
Continuity of the fitness function: this is maybe the only constraint on the type of problems that can be solved by the Genetic Algorithm. The algorithm assumes that a small change to the model at hand yields a small change in the fitness function. To illustrate the type of problems that cannot be solved this way, take an example where we are trying to guess a text from its encrypted version. We could proceed by making a guess, encrypting it and comparing it to the encrypted target, the fitness function being the count of correct characters in the encrypted string. The Genetic Algorithm would fail to solve this problem because changing a single character in the original guess yields a completely different encrypted string.
Finally, I would like to list the parameters that control the behavior of the Genetic Algorithm (or some of them at least):
- Size of the population: the optimal value depends on the problem being solved. As a rule of thumb, a larger population size yields better accuracy, up to a certain limit beyond which the accuracy plateaus. Increasing the size of the population beyond this limit would simply make the algorithm slower.
- Mutation rate: the rate at which individuals are subjected to mutation. If this is too low, the chances increase that the Genetic Algorithm falls into a locally optimal solution (i.e. a solution that yields a lower fitness if changed slightly, but that is not the absolute best solution to the problem). On the other hand, if this rate is too high, the algorithm turns into a random search.
- Mutation size: controls how large the modification made to an individual during a mutation is. Like the mutation rate, this parameter should be large enough to prevent the algorithm from falling into a local optimum and small enough to preserve the evolutionary nature of the algorithm.
- Criteria to stop: this can be a threshold on the fitness function, a maximum number of generations, or a combination of both. The optimal choice depends on the problem being solved.
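As a sketch, these knobs can be gathered in one place. The names and default values below are hypothetical, chosen purely for illustration.

```cpp
#include <cstddef>

// Control parameters of the Genetic Algorithm (illustrative names/defaults).
struct GAParameters {
    std::size_t populationSize = 100;  // larger -> better accuracy, slower runs
    double mutationRate = 0.05;        // too low: local optima; too high: random search
    double mutationSize = 0.1;         // magnitude of the random change per mutation
    double fitnessThreshold = -1e-3;   // stop when the best fitness exceeds this...
    int maxGenerations = 1000;         // ...or after this many generations
};

// Combined stopping criterion: either threshold can end the run.
bool shouldStop(const GAParameters& p, double bestFitness, int generation) {
    return bestFitness >= p.fitnessThreshold || generation >= p.maxGenerations;
}
```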
You can find my C++ implementation here:
And the corresponding doxygen documentation:
Unlike the other pieces that constitute the largely incomplete knowledge baggage that I carry, understanding Evolution was not a gradual process. It was literally a click; a sudden revelation before which everything about this theory had seemed ridiculous, dark and even sinister.
And then one day: Eureka! If there is a single man that could be described as the greatest scientist of all times, it would have to be Charles Darwin.
As a child growing up in a non-secular society where superstition still rules over people’s intellect, I was introduced to Charles Darwin in an atmosphere of rather bad publicity: he was the deluded man who believed that a monkey once gave birth to a human baby from whom we all descend. Dinosaurs, like unicorns, were mythical creatures that only existed in cartoons and movies. Phrased like this, these ideas sounded like the type that could only be preached by a crazy, ignorant or wicked individual.
Until I was old enough to read about them myself.
Then I understood the three simple elements that, when put together, elegantly solve the greatest mystery of all time, Life:
- Growth with Reproduction and Inheritance: it’s the process by which a certain form of life creates copies of itself to ensure its own growth and continuity to at least one more generation.
- Variability: it’s the imperfections in reproduction through which inaccurate copies of a given form of life (mutations) are introduced into the next generation, thus creating diversity.
- Struggle for Life and Natural Selection: it’s the process through which the forms of life that are better equipped to succeed at reproduction are preserved through generations, at the expense of those that are least so.
A very simple principle yet powerful enough to explain that from star dust consciousness can arise!
The many elaborate and diverse ways of adaptation that can be witnessed in Fauna and Flora were for so long mistaken for being carefully, purposely and thoughtfully designed by a supreme being. And for a good reason: almost every feature of every living creature is useful in some way to its bearer. As amazing and mysterious as it may be to think of a higher order that pulls the knobs and puts the clockwork into action, it is by far more inspiring to ponder the fact that the higher order is nothing but a set of simple laws of Nature.
And not only is Life explained, but also Death. No form of life could have ever been made so perfect on the first attempt: that would indeed be highly improbable. Death has to occur because it is Nature’s way of coming up with the best design through an endless cycle of blind trial and error.
The most brilliant engineers on the planet can certainly appreciate the power of this technique!
To claim that the sheer complexity of life is proof of an unnatural aspect in the process of creation is to seriously misunderstand Nature.
And not only biological life is concerned. Think of life as an abstract concept, a dynamic system that can exist in different levels of complexity; as long as its “design” is constantly subjected to some form of a “goodness” test, natural selection is bound to guide its evolution. Once you grasp this, you may start to imagine the wide range of disciplines to which the Darwinian principles may apply.
Our human language, our sense of beauty, our political, economic and social systems, our culture in general and even our religions, if examined closely, all exhibit aspects that can most rationally be explained by Darwinian-like evolution.
I recently read an article describing how local groups of people come up with their own law systems in order to organize common interests among each other. The author rightly points out that in order to explain why neighbors ignore the (state) law in favor of acting ‘neighborly’, one can realize that under certain conditions a ‘tit-for-tat’ strategy of cooperation is one that maximizes the welfare of group members. The question is how this has become intuitive to us humans. Some of it is encoded in our DNA and constitutes our natural inclination to altruism, but most of it can be understood through a tradition of trial and error, or the so-called “lessons from those who came before us”. People learn, through their interactions with each other, what works and what doesn’t, and the ideas that work are passed along and become part of tradition.
It is awe inspiring to contemplate how such a principle is so simply formulated and yet so powerful in its consequences and so general in its capacity to explain a wide range of phenomena spanning across many disciplines.
As Charles Darwin said in the closing statements of his famous book that started it all – “On the Origin of Species”:
“It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, birds singing on the bushes, with various insect flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner, have all been produced by laws acting around us. […] Thus from the war of nature, from famine and death, the most exalted object we are capable of conceiving, namely, the production of higher animals, directly follows. There is grandeur in this view of life, […], from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved”.
There is no artist’s brain behind this picture, only a soulless computer… In fact, the Mandelbrot set is a pure mathematical object. It’s a fractal object with infinite complexity: it means that no matter how far you zoom in, the structure looks as complex as the full set. New shapes and figures appear at each zoom level, and sometimes the entire structure is replicated in miniature within the details of the original structure.
This very complex structure is generated by one of the simplest mathematical operations there is. And that to me is fascinating!
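That operation is simply the iteration z → z² + c: a point c of the complex plane belongs to the Mandelbrot set when the iteration stays bounded. A minimal escape-time sketch (the iteration cap of 100 is an arbitrary choice for the example):

```cpp
#include <complex>

// Iterates z -> z*z + c starting from z = 0 and returns the number of
// iterations before |z| exceeds 2 (points with |z| > 2 are guaranteed to
// escape to infinity). Returning maxIter means "probably inside the set".
int mandelbrotIterations(std::complex<double> c, int maxIter = 100) {
    std::complex<double> z = 0.0;
    for (int i = 0; i < maxIter; ++i) {
        if (std::abs(z) > 2.0) return i;
        z = z * z + c;
    }
    return maxIter;
}
```

Coloring each pixel by its iteration count is what produces the familiar psychedelic pictures of the set’s boundary.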
Nature is full of such examples of complex and beautiful pieces of art. Things you look at, and your brain fails to imagine how they came to be. And yet here they are!
Life is the perfect example.
Complex forms of chemical reactions so beautifully designed, and so perfectly adapted to their way of being. So complex that consciousness arises in creatures aware of their own existence. And yet, the whole process that creates life out of star dust is nothing but a succession of simple laws of probability. And that to me is fascinating!
If you are curious to learn more about the mathematics or the software pieces involved in generating the above pictures, then get technical and check out my code here:
and its detailed documentation here:
When I created this blog, an automatically generated post, intended to explain how to edit and make posts, was on my home page under the title “Hello world!“. Of course, the contents themselves were of little interest to me, however I thought that the title was exactly what I wanted to say in my very first post: so I kept it!
It’s an exclamation of joy and wonderment of being born into a new world full of secrets yet to be unveiled. It’s a way to announce to the world that a new born is here. And to the new born, it’s the first step in a long journey of exploration.
Since I am here and I want to say hello, I will break my own rule of not talking about myself and talk a little about myself: namely, about how I became a scientist and what the word scientist means to me.
It all started before I can even remember. My mother often tells the story that as soon as my speaking skills reached the point where I could formulate complete sentences, I started asking about things like where the Sun goes at night and why it rains in spring and not only in winter. My mother could very well be bragging about her lovely and fantastic first-born, as all mothers do, but I do have many vague and some vivid memories of such episodes. Maybe my most vivid such memory is one of my father holding a tennis ball (representing planet Earth), rotating it around itself and around a lamp (representing the Sun), and showing me the movement of the shadow on the dark side of the ball with respect to the drawing of a small boy (representing me) that he had depicted somewhere on the northern hemisphere.
I would like to believe – and there seem to be scientific studies favoring this – that all children ask these questions at some point in their early childhood. We are all born scientists to some extent. Some parents are too lazy, or lack the education, to answer; instead, they find it easier to tell their children to go and play with the other kids, allowing them to slowly and steadily let go of their wonderment. Later at school, some teachers – overly worried about success rates – provide them with algorithms to solve problems mechanically, preventing them from developing the innovation skills that would otherwise allow them to solve problems of types never encountered before.
Luckily for me, my early childhood was different. I was given explanations whenever I needed them, so somehow I came to always expect an explanation whenever presented with a claim. I remember my kindergarten teacher telling us once that God created everything, and I remember being furious in my mind because I knew that my uncle worked in a company that repairs televisions, and I knew that televisions were made by humans in a television factory. God could not have created televisions: I knew that my teacher was up to something!
For me, being a scientist is not working in a science lab. Being a scientist is being curious, imaginative and critical about the world. We are all born with these qualities but tend to lose them when we don’t use them enough. So the best gift parents can give their children is to teach them how to think and learn for themselves.
So here I am, embarking on this new journey, hoping to meet new interesting people, to learn about their ideas and to share my ideas with them. Because I have always wondered what other people think when they think about the same issues as I do!