A Machine-Learning Approach to Select Important Variables for Recombination on Many-objective Evolutionary Optimization
Abstract
There exist numerous real-world many-objective optimization problems across application domains for which it is difficult or time-consuming to derive Pareto-optimal solutions. In an evolutionary algorithm, variation operators such as recombination and mutation are crucial to an effective search. In this paper, we study a machine-learning-enhanced recombination operator that incorporates an intelligent variable-selection method. The selection is based on the importance of variables with respect to a ranking of solutions in objective space that expresses convergence toward the Pareto front. We verify the performance of the enhanced recombination on benchmark test problems with three or more objectives, using the many-objective evolutionary algorithm AεSεH as the baseline. Our experimental analysis reveals that variable importance can effectively enhance the performance of many-objective evolutionary algorithms.
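The idea sketched above can be made concrete as follows: rank the population by Pareto dominance, estimate how strongly each decision variable predicts that ranking, and then bias recombination toward exchanging only the influential variables. The snippet below is a minimal illustrative sketch, not the paper's method: it substitutes a crude correlation-based importance proxy for the machine-learning model, and all function names, the dominance-count ranking, and the two-variable toy problem in the usage example are assumptions made for illustration only.

```python
import random

def pareto_rank(objs):
    # Rank each solution by the number of solutions that dominate it
    # (0 = nondominated); lower rank means better convergence.
    ranks = []
    for a in objs:
        dom = sum(1 for b in objs
                  if all(bj <= aj for bj, aj in zip(b, a))
                  and any(bj < aj for bj, aj in zip(b, a)))
        ranks.append(dom)
    return ranks

def variable_importance(pop, ranks):
    # Crude proxy for learned importance: absolute Pearson correlation
    # between each variable's values and the dominance rank.
    n, d = len(pop), len(pop[0])
    mr = sum(ranks) / n
    imp = []
    for j in range(d):
        xs = [ind[j] for ind in pop]
        mx = sum(xs) / n
        cov = sum((x - mx) * (r - mr) for x, r in zip(xs, ranks))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sr = sum((r - mr) ** 2 for r in ranks) ** 0.5
        imp.append(abs(cov / (sx * sr)) if sx > 0 and sr > 0 else 0.0)
    return imp

def guided_crossover(p1, p2, importance, threshold, rng):
    # Uniform crossover restricted to variables deemed important:
    # only positions with importance >= threshold may be exchanged.
    c1, c2 = list(p1), list(p2)
    for j, w in enumerate(importance):
        if w >= threshold and rng.random() < 0.5:
            c1[j], c2[j] = c2[j], c1[j]
    return c1, c2
```

As a usage example, on a two-variable toy problem where `x1` acts as a distance variable (inflating both objectives) and `x0` only positions solutions along the front, the importance estimate for `x1` comes out clearly higher, so guided crossover concentrates exchanges on the variable that actually drives convergence.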