Benchmark Functions for the CEC'2017 Competition on Many-Objective Optimization
In the real world, it is not uncommon to face an optimization problem with more than three objectives. Such problems, called many-objective optimization problems (MaOPs), pose great challenges to the field of evolutionary computation. The failure of conventional Pareto-based multi-objective evolutionary algorithms to cope with MaOPs has motivated a variety of new approaches. However, in contrast to the rapid development of algorithm design, performance investigation and comparison of algorithms have received little attention: test problem suites originally designed for multi-objective optimization are still dominantly used in many-objective optimization. For this competition, we carefully select/design 15 test problems with diverse properties, aiming to promote research on evolutionary many-objective optimization (EMaO) by providing a set of test problems that represents a broad range of real-world scenarios. In addition, an open-source software platform with a user-friendly GUI is provided to facilitate experimental execution and data observation.
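To illustrate why Pareto-based selection degrades on MaOPs, the sketch below (an illustrative example, not part of the report) implements the standard Pareto-dominance test for minimization and measures the fraction of mutually non-dominated points in a random population. As the number of objectives grows, nearly all points become non-dominated, so dominance alone provides little selection pressure:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(points):
    """Fraction of points not dominated by any other point in the set."""
    nd = [p for p in points
          if not any(dominates(q, p) for q in points if q is not p)]
    return len(nd) / len(points)

random.seed(0)
for m in (2, 5, 10, 15):  # number of objectives
    pts = [[random.random() for _ in range(m)] for _ in range(200)]
    print(f"{m} objectives: non-dominated fraction = {nondominated_fraction(pts):.2f}")
```

Running this, the non-dominated fraction rises toward 1 as the objective count increases, which is the core difficulty the competition's 15 test problems are designed to probe.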
Citation: Cheng, R. et al. (2017) Benchmark Functions for the CEC'2017 Competition on Many-Objective Optimization. Technical Report No. CSR-17-01, School of Computer Science, University of Birmingham, U.K., January 2017
Research Group: Centre for Computational Intelligence
Research Institute: Institute of Artificial Intelligence (IAI)
Peer Reviewed: No