Rigorous Research Foundations of Bio-Inspired Computing

Black Box Discrete Optimization Benchmarking Workshop

This workshop takes place at GECCO'18 on July 15th, 2018 in Kyoto, Japan.

Organisers

  • Pietro S. Oliveto – University of Sheffield, UK
  • Markus Wagner – University of Adelaide, Australia
  • Thomas Weise – Hefei University, China
  • Ales Zamuda – University of Maribor, Slovenia

Important Dates

  • Submissions Open: 27 February 2018
  • Submissions Close: 27 March 2018
  • Decisions Made: 10 April 2018
  • Camera-Ready Material Due: 24 April 2018

Content

Quantifying and comparing the performance of optimization algorithms is an important aspect of research in search and optimization. However, this task turns out to be tedious and difficult to carry out in a scientifically rigorous way.

The Black-Box Optimization Benchmarking (BBOB) methodology associated with the BBOB-GECCO workshops has become a well-established standard for benchmarking stochastic and deterministic continuous optimization algorithms. The aim of our workshop is to set up a process through which a similar standard methodology can be established for benchmarking black-box optimisation algorithms in discrete and combinatorial search spaces.

In a similar fashion to BBOB, our long-term aim is to produce:

  1. a well-motivated benchmark function testbed,
  2. an experimental set-up,
  3. generation of data output for post-processing,
  4. presentation of the results in graphs and tables.

The main aim of this first workshop is to encourage a discussion concerning which functions should be included in the benchmarking testbed (i.e., point (1) above). The challenge is that the benchmark functions should capture the difficulties of combinatorial optimization problems in practice, while at the same time being comprehensible enough that algorithm behaviours can be understood or interpreted from the performance on a given benchmark problem. The goal is that a desired search behaviour can be pictured and algorithm deficiencies can be understood in depth. This understanding should, in turn, lead to the design of improved algorithms.

Ideally (though not necessarily for all of them), we would like the benchmark functions to be:

  1. scalable with the problem size;
  2. non-trivial in the black-box optimisation sense (the function may be shifted such that the global optimum can be any point; see the sketch after this list).
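
As an illustration of point (2), below is a minimal Python sketch of a shifted OneMax-style benchmark; the name shifted_onemax and its parameters are purely illustrative and not taken from any existing benchmarking framework. The optimum is a hidden bit string z, so the function is scalable in the problem size n and non-trivial in the black-box sense.

    import random

    def shifted_onemax(n, seed=0):
        # Hypothetical sketch: a OneMax-style function whose optimum is a
        # hidden bit string z rather than the all-ones string, so a black-box
        # algorithm cannot exploit a fixed optimum location.
        rng = random.Random(seed)
        z = [rng.randint(0, 1) for _ in range(n)]  # hidden optimum
        def f(x):
            # count the positions where the candidate x agrees with z
            return sum(1 for xi, zi in zip(x, z) if xi == zi)
        return f

    # usage: a length-20 instance; the maximum value 20 is attained only at z
    f = shifted_onemax(20, seed=42)
    print(f([0] * 20))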

While the challenge may be significant, especially for classical combinatorial optimisation problems (not so much for toy problems), achieving this goal would help greatly in bridging the gap between theoreticians and experimentalists.

Expected Contribution

This workshop aims to bring together experts on the benchmarking of optimization algorithms and to provide a common forum for discussion and exchange of opinions. Interested participants are encouraged to submit a paper related to black-box optimization benchmarking of discrete optimizers in the widest sense. In particular, papers may:

  • suggest function classes that should be included in the function collection and motivate the reasons for inclusion
  • suggest benchmark function properties that capture difficulties occurring in real-world applications (e.g., deception or separability; a deceptive trap function is sketched after this list)
  • suggest which classes of standard combinatorial optimisation problems should be included and how to select significant instances
  • suggest which classes of toy problems should be included and motivate why
  • address any other aspect of benchmarking methodology for discrete optimizers, such as design of experiments, performance measures, presentation methods, or benchmarking frameworks
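
As an example of such a property, deception can be captured by the classic Trap function; the following Python sketch is purely illustrative and not drawn from any existing testbed. The fitness signal leads towards the all-zeros string, while the unique global optimum is the all-ones string.

    def trap(x):
        # Illustrative deceptive benchmark: all-ones is the unique optimum,
        # but outside the optimum the value grows as ones are removed, so
        # local search is guided away from the optimum.
        n = len(x)
        ones = sum(x)
        return n if ones == n else n - 1 - ones

    # usage on 10 bits: all-ones scores 10, all-zeros scores 9,
    # and a string with a single one-bit scores 8
    print(trap([1] * 10), trap([0] * 10), trap([1] + [0] * 9))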

This workshop is organised as part of the ImAppNIO COST Action CA15140.