MSc Data Analytics

  1. Mario A. Hevia Fajardo. 2019. Comparison and modification of self-adjusting evolutionary algorithms.
  2. Evolutionary algorithms (EAs) are population-based optimization algorithms that mimic natural evolution to search for a (near-)optimal solution. They are often used when a problem is hard to solve with other algorithms or when a good approximation is sufficient (Dyer, 2008). Although a wide range of problems and applications share these characteristics, EAs have two requirements. First, there must be a way to encode the possible solutions of a problem (a common encoding is a bit string); second, there must be a way to evaluate these solutions (a fitness function), and such an evaluation should return a score.
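    These two requirements can be illustrated with a minimal sketch in Python (the function names are illustrative, not taken from any particular library): candidate solutions are encoded as bit strings, and the OneMax fitness function used later in this document simply counts the ones.

```python
import random

def random_bitstring(n):
    """Encode a candidate solution as a list of n bits (0 or 1)."""
    return [random.randint(0, 1) for _ in range(n)]

def onemax(bits):
    """Fitness function: score a solution by its number of one-bits."""
    return sum(bits)

# Example: create and evaluate a random candidate of length 10.
candidate = random_bitstring(10)
score = onemax(candidate)
```

    Any problem whose solutions can be encoded and scored this way is, in principle, amenable to an EA, regardless of what the bits represent.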
    For some real-world problems the evaluation of a solution is computationally intensive, so the number of evaluations or generations needed to solve a problem is a key topic in EA research. This research generally focuses on parameter optimization (tuning), since it has been observed that tuning can improve the performance of EAs. Most work considers static parameters, but attention has recently shifted to dynamic parameters that depend directly on the fitness of the current best individual, because these have been proven to lead to faster optimization. For such strategies to work, however, a good understanding of the problem is needed in order to design a suitable dependence of the parameters on the fitness. This is often not feasible, so other researchers have focused on success-based dynamic parameters, which do not need information about the problem.
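    A classic example of a success-based rule, given here only to illustrate the idea (it is not necessarily the exact scheme used by the algorithms studied below), is a one-fifth-style update: the mutation rate is increased after a successful generation and decreased after an unsuccessful one, with no reference to the structure of the problem. The update factor and the bounds in this sketch are illustrative choices.

```python
def adjust_rate(rate, success, n, F=1.5):
    """Success-based mutation-rate update (one-fifth-rule style).

    rate:    current mutation probability per bit
    success: whether the last generation improved the best fitness
    n:       problem size, giving a lower bound of 1/n on the rate
    F:       illustrative update factor
    """
    if success:
        # Reward success with a larger rate, capped at 1/2.
        rate = min(rate * F, 0.5)
    else:
        # Penalize failure mildly, never dropping below 1/n.
        rate = max(rate / F ** 0.25, 1 / n)
    return rate
```

    Because the rule reacts only to whether a generation succeeded, it requires no prior knowledge of how the parameters should depend on the fitness.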
    The aim of this project is to empirically evaluate some of these success-based parameter control mechanisms in the (1 + λ) EA and compare them on common toy problems. The main algorithms tested are the self-adjusting mutation rate (1 + λ) EA (Doerr et al., 2017), the self-adjusting offspring population size (1 + λ) EA (Jansen et al., 2005) and the (1 + (λ, λ)) GA (Doerr and Doerr, 2015). The first part of the document reviews relevant research; afterwards, the optimization algorithms under study are described, including the algorithms mentioned above together with modifications and combinations of them. They are then compared and refined on the simplest toy problem (OneMax), since it is the most studied problem for all of these algorithms; subsequently they are compared on other toy problems (LeadingOnes and Makespan Scheduling). Finally, a comparison on the SufSamp problem is made, which is a more complex problem; in this last comparison the algorithms are evaluated not only on optimization time but also on their ability to reach the best possible result.
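    For reference, a minimal (1 + λ) EA with a static mutation rate of 1/n on OneMax can be sketched as follows; this is the baseline scheme that the self-adjusting variants above modify (the function name and the evaluation budget are illustrative).

```python
import random

def one_plus_lambda_ea(n, lam, max_evals=100_000):
    """Minimal (1 + λ) EA on OneMax with static mutation rate 1/n.

    Each generation creates λ offspring by flipping each parent bit
    independently with probability 1/n, then keeps the best offspring
    if it is at least as fit as the parent (elitist selection).
    Returns the final solution and the number of evaluations used.
    """
    parent = [random.randint(0, 1) for _ in range(n)]
    evals = 0
    while sum(parent) < n and evals < max_evals:
        offspring = []
        for _ in range(lam):
            # Flip each bit with probability 1/n.
            child = [b ^ (random.random() < 1 / n) for b in parent]
            offspring.append(child)
            evals += 1
        best = max(offspring, key=sum)
        if sum(best) >= sum(parent):
            parent = best
    return parent, evals
```

    The quantity tracked in the comparisons described above is the evaluation count returned here, since each fitness evaluation is the expensive operation in the motivating real-world setting.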