Conference Articles

  1. Mario A. Hevia Fajardo and Dirk Sudholt. 2021:
    Self-Adjusting Offspring Population Sizes Outperform Fixed Parameters on the Cliff Function.
    In Proceedings of the 16th Workshop on Foundations of Genetic Algorithms (FOGA ’21). ACM. (To appear)

    In the discrete domain, self-adjusting parameters of evolutionary algorithms (EAs) have emerged as a fruitful research area, with many runtime analyses showing that self-adjusting parameters can outperform the best fixed parameters. Most existing runtime analyses focus on elitist EAs on simple problems, for which moderate performance gains were shown. Here we consider a much more challenging scenario: the multimodal function Cliff, defined as an example where a (1, λ) EA is effective, and for which the best known upper runtime bound for standard evolutionary algorithms is O(n^25).

    We prove that a (1, λ) EA self-adjusting the offspring population size λ using success-based rules optimises Cliff in O(n) expected generations and O(n log n) expected evaluations. Along the way, we prove tight upper and lower bounds on the runtime for fixed λ (up to a logarithmic factor) and identify the runtime for the best fixed λ as n^η for η ≈ 3.9767 (up to sub-polynomial factors). Hence, the self-adjusting (1, λ) EA outperforms the best fixed parameter by a factor of at least n^2.9767 (up to sub-polynomial factors).
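
    For readers unfamiliar with the benchmark, the following is a minimal Python sketch of a Cliff-style fitness function, assuming the common parameterisation with cliff length d (the default d = n/3 and the exact constants used in the paper are assumptions here, not taken from the abstract):

    ```python
    def cliff(x, d=None):
        """Cliff fitness of a bit string x (sequence of 0/1 values).

        Below the cliff the fitness equals the number of ones; once more than
        n - d ones are reached, the fitness drops by d - 1/2, so an algorithm
        must accept a worse search point to cross the cliff.
        """
        n = len(x)
        if d is None:
            d = n // 3  # common choice in the literature (assumption)
        ones = sum(x)
        if ones <= n - d:
            return ones
        return ones - d + 0.5
    ```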

  2. Mario A. Hevia Fajardo and Dirk Sudholt. 2021:
    Self-adjusting population sizes for non-elitist evolutionary algorithms: why success rates matter.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO ’21). ACM, Lille, France.

    Recent theoretical studies have shown that self-adjusting mechanisms can provably outperform the best static parameters in evolutionary algorithms on discrete problems. However, the majority of these studies concerned elitist algorithms, and it is not clear whether the same mechanisms can be applied to non-elitist algorithms.

    We study one of the best-known parameter control mechanisms, the one-fifth success rule, to control the offspring population size λ in the non-elitist (1, λ) EA. It is known that the (1, λ) EA has a sharp threshold with respect to the choice of λ where the runtime on OneMax changes from polynomial to exponential time. Hence, it is not clear whether parameter control mechanisms are able to find and maintain suitable values of λ.

    We show that the answer crucially depends on the success rate s (i.e. a one-(s+1)-th success rule). We prove that, if the success rate is appropriately small, the self-adjusting (1, λ) EA optimises OneMax in O(n) expected generations and O(n log n) expected evaluations. A small success rate is crucial: we also show that if the success rate is too large, the algorithm has an exponential runtime on OneMax.
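
    The following is a minimal, illustrative Python sketch of such a self-adjusting (1, λ) EA on OneMax, assuming the usual multiplicative update (λ/F after a success, λ·F^(1/s) otherwise, so that one success in every s+1 generations keeps λ constant); the update factor F, the λ bounds and the rounding are assumptions rather than the exact scheme analysed in the paper:

    ```python
    import random

    def onemax(x):
        return sum(x)

    def self_adjusting_one_comma_lambda_ea(n, s=1.0, F=1.5, max_gens=10**6):
        """Non-elitist (1,λ) EA with a one-(s+1)-th success rule for λ.

        After a generation whose best offspring improves on the parent,
        λ is decreased (λ / F); otherwise it is increased (λ * F**(1/s)).
        One success in every s+1 generations keeps λ constant; s = 4 would
        correspond to the classical one-fifth success rule.
        """
        parent = [random.randint(0, 1) for _ in range(n)]
        lam = 1.0
        for gen in range(max_gens):
            # Create round(λ) offspring by standard bit mutation with rate 1/n.
            offspring = [
                [1 - b if random.random() < 1 / n else b for b in parent]
                for _ in range(max(1, round(lam)))
            ]
            best = max(offspring, key=onemax)
            if onemax(best) > onemax(parent):
                lam = max(1.0, lam / F)                   # success: shrink λ
            else:
                lam = min(float(n), lam * F ** (1 / s))   # failure: grow λ
            parent = best  # comma selection: the parent is always replaced
            if onemax(parent) == n:
                return gen + 1  # number of generations until the optimum
        return None
    ```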

  3. Mario A. Hevia Fajardo and Dirk Sudholt. 2020:
    On the Choice of the Parameter Control Mechanism in the (1+(λ, λ)) Genetic Algorithm.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO ’20). ACM, Cancún, Mexico.

    The self-adjusting (1 + (λ, λ)) GA is the best known genetic algorithm for problems with a good fitness-distance correlation, such as OneMax. It uses a parameter control mechanism for the parameter λ that governs the mutation strength and the number of offspring. However, on multimodal problems, the parameter control mechanism tends to increase λ uncontrollably.

    We study this problem and possible solutions to it using rigorous runtime analysis for the standard Jumpk benchmark problem class. The original algorithm behaves like a (1+n) EA whenever the maximum value λ = n is reached. This is ineffective for problems where large jumps are required. Capping λ at smaller values is beneficial for such problems. Finally, resetting λ to 1 allows the parameter to cycle through the parameter space. We show that this strategy is effective for all Jumpk problems: the (1 + (λ, λ)) GA performs as well as the (1 + 1) EA with the optimal mutation rate and fast evolutionary algorithms, apart from a small polynomial overhead.

    Along the way, we present new general methods for bounding the runtime of the (1 + (λ, λ)) GA that allow existing runtime bounds for the (1 + 1) EA to be translated to the self-adjusting (1 + (λ, λ)) GA. Our methods are easy to use and give upper bounds for novel classes of functions.
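
    A hedged Python sketch of the three λ-handling variants discussed in this abstract (standard capping at n, capping at a smaller value, and resetting to 1); the function name, the update factor F and the one-fifth-style update are illustrative assumptions, not the paper's exact algorithm:

    ```python
    def update_lambda(lam, success, n, F=1.5, mode="cap_at_n", lam_cap=None):
        """Success-based λ update for a (1+(λ,λ))-GA-style algorithm (sketch).

        mode = "cap_at_n": λ never exceeds n (the GA then behaves like a
                           (1+n) EA, as noted above)
        mode = "cap":      cap λ at a smaller user-chosen value lam_cap
        mode = "reset":    reset λ to 1 once it would exceed n, so λ keeps
                           cycling through the parameter space
        """
        # One-fifth-style rule: shrink λ on success, grow it slowly otherwise.
        lam = lam / F if success else lam * F ** (1 / 4)
        lam = max(1.0, lam)
        if mode == "cap_at_n":
            return min(lam, float(n))
        if mode == "cap":
            return min(lam, float(lam_cap))
        if mode == "reset":
            return 1.0 if lam > n else lam
        raise ValueError(f"unknown mode: {mode}")
    ```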

  4. Mario A. Hevia Fajardo. 2019:
    An Empirical Evaluation of Success-Based Parameter Control Mechanisms for Evolutionary Algorithms.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO ’19). ACM, Prague, Czech Republic.

    Success-based parameter control mechanisms for Evolutionary Algorithms (EAs) change the parameters every generation based on the success of the previous generation and the current parameter value. In recent years, several success-based parameter control mechanisms have been proposed in the literature. The purpose of this paper is to evaluate and compare their sequential optimisation time and parallelisation on different types of problems. The geometric mean of the sequential and parallel optimisation times is used as a new metric to evaluate the parallelisation of the EAs, capturing the trade-off between both optimisation times. We perform an empirical study comprising 9 different algorithms on four benchmark functions. Eight of the 9 algorithms were taken from the literature and one is a modification proposed here.

    We show that the modified algorithm has a 20% faster sequential optimisation time than the fastest known GA on OneMax. Additionally, we show the benefits of success-based parameter control mechanisms for NP-hard problems, and using the proposed metric we also show that success-based offspring population size mechanisms are outperformed by static choices in parallel EAs.
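
    As a rough illustration of the proposed metric, assuming the sequential time is counted in fitness evaluations and the parallel time in generations (an assumption based on common usage in parallel EA analyses, not a quote from the paper):

    ```python
    from math import sqrt

    def parallelisation_score(evaluations, generations):
        """Geometric mean of sequential time (fitness evaluations) and
        parallel time (generations); an algorithm only scores well if it
        is fast in both senses."""
        return sqrt(evaluations * generations)
    ```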

    Supplementary material: