Multi-objective particle swarm optimization algorithm based on an objective space mapping strategy

 CAAI Transactions on Intelligent Systems, 2021, Vol. 16 Issue (2): 362-370  DOI: 10.11992/tis.202006042

### Cite this article

CHEN Qiang, WANG Yujia, LIANG Haina, et al. Multi-objective particle swarm optimization algorithm based on an objective space mapping strategy[J]. CAAI Transactions on Intelligent Systems, 2021, 16(2): 362-370. DOI: 10.11992/tis.202006042.


Multi-objective particle swarm optimization algorithm based on an objective space mapping strategy
CHEN Qiang , WANG Yujia , LIANG Haina , SUN Xin
School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
Abstract: To balance convergence and diversity and to increase the selection pressure of the optimizer on multi-objective problems, this paper proposes MOPSO-OSM, a high-dimensional multi-objective particle swarm optimization algorithm based on an objective space mapping strategy. When solving high-dimensional multi-objective optimization problems, the Pareto-based criterion cannot identify the best compromise solutions among the many nondominated ones. Therefore, the high-dimensional objective space is mapped into a two-dimensional space whose axes are convergence and diversity indexes, and this two-dimensional space is then divided into four regions according to these performance indexes. In addition, an opposition-based learning strategy improves the algorithm's ability to escape local optima. Experimental results show that MOPSO-OSM balances convergence and diversity well and can solve complex problems.
Key words: objective space mapping strategy; performance index; opposition learning; particle swarm optimization; high-dimensional multi-objective optimization; Pareto-based criterion; convergence; diversity

1 Particle swarm optimization algorithm

1.1 Basic particle swarm optimization algorithm

 $\begin{split} {v_{id}}(t + 1) =& \omega {v_{id}}(t) + {c_1}{r_1}({p_{id}}(t) - {x_{id}}(t)) + \\ & {c_2}{r_2}({p_{gd}}(t) - {x_{id}}(t)) \end{split}$ (1)
 ${x_{id}}(t + 1) = {x_{id}}(t) + {v_{id}}(t + 1)$ (2)
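The velocity and position updates of Eqs. (1) and (2) can be sketched in Python; the coefficient values (ω = 0.729, c1 = c2 = 1.49445) are illustrative defaults, not settings taken from the paper:

```python
import random

def pso_update(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One velocity/position update per Eqs. (1)-(2), dimension by dimension.
    r1, r2 are fresh uniform random numbers in [0, 1) for each dimension."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)   # Eq. (2)
    return new_x, new_v
```

A particle at the origin with zero velocity is simply pulled toward its personal and global bests, scaled by the random coefficients.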

1.2 Improved particle swarm optimization algorithms

1) Adjusting algorithm parameters. To balance the exploration and exploitation abilities of the swarm, Shi et al. [19] introduced the inertia weight ω into PSO: a larger ω strengthens the ability to explore unknown regions, while a smaller ω strengthens exploitation within a small neighborhood. Clerc et al. [20] recommended setting ω to 0.729, and Venter et al. [21] updated ω with a nonlinear decreasing strategy. Other researchers have enhanced the search ability of the algorithm by tuning the values of c1 and c2 [22-23].

2) Designing different types of topology. By analyzing the relationship between population topology and interaction probability, Kennedy [24] proposed the bare bones particle swarm (BBPS) model. Yue et al. [25] proposed a PSO based on a ring topology and applied it to multimodal multi-objective problems.

3) Combining PSO with other strategies to form hybrid algorithms. To improve problem-solving ability, Hou et al. [26] reduced the dimensionality of all particles' best positions to form a reference global optimum, which is also used to update the current best position of the swarm. Lin et al. [27] combined MOPSO with a decomposition method and used two different search strategies to update particle velocities; the results show improved performance on complex problems. To reduce the difficulty of solving constrained problems, Zain et al. [28] built a MOPSO with a dynamic boundary search strategy on top of the standard MOPSO [29].

2 Many-objective particle swarm optimization based on the objective space mapping strategy

2.1 High-dimensional multi-objective optimization problem

 $\begin{array}{l} \min Y({{x}}) = {\rm{\{ }}{f_1}({{x}}), \,{f_2}({{x}}),\, \cdots,\,{f_m}({{x}}){\rm{\} }} \\ {\rm{subject\; to \;}}{{x}} \in {{D}} \end{array}$ (3)

2.2 Objective space mapping strategy

 ${F_i} = \frac{{{d_{i,o}}}}{{\sqrt m }},\quad i = 1,2, \cdots,n$ (4)

 ${\rm{Di}}{{\rm{s}}_i} = \dfrac{{\Bigg(\displaystyle\sum\limits_{s = 1}^m {\frac{{f_s^{i + 1}(x) - f_s^{i - 1}(x)}}{{\max ({f_s}(x)) - \min ({f_s}(x))}}} \Bigg)}}{m},\quad i = 1,2, \cdots,n$ (5)

 ${F_{{\rm{ave}}}} = \dfrac{{\displaystyle\sum\limits_{i = 1}^n {{F_i}} }}{n}$ (6)
 ${\rm{Di}}{{\rm{s}}_{{\rm{ave}}}} = \dfrac{{\displaystyle\sum\limits_{i = 1}^n {\Bigg(\dfrac{1}{{{\rm{Di}}{{\rm{s}}_i}}}\Bigg)} }}{n}$ (7)
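A minimal sketch of the convergence index of Eq. (4) and the diversity index of Eq. (5). Two details are assumptions not spelled out in this excerpt: the gaps between neighbors are taken as absolute values, and the two boundary individuals copy the value of their single interior neighbor:

```python
import math

def convergence_index(Y):
    """F_i of Eq. (4): Euclidean distance of objective vector Y_i to the
    origin, normalized by sqrt(m)."""
    m = len(Y[0])
    return [math.sqrt(sum(f * f for f in y)) / math.sqrt(m) for y in Y]

def diversity_index(Y):
    """Dis_i of Eq. (5): mean normalized gap between the two neighbors of
    solution i.  Assumes the set is sorted along the front and n >= 3;
    absolute gaps and the boundary handling are assumptions."""
    n, m = len(Y), len(Y[0])
    fmin = [min(y[s] for y in Y) for s in range(m)]
    fmax = [max(y[s] for y in Y) for s in range(m)]
    dis = [0.0] * n
    for i in range(1, n - 1):
        dis[i] = sum(abs(Y[i + 1][s] - Y[i - 1][s]) / (fmax[s] - fmin[s] + 1e-12)
                     for s in range(m)) / m
    dis[0], dis[-1] = dis[1], dis[-2]   # boundary individuals (assumption)
    return dis
```

The averages of Eqs. (6) and (7) are then the plain means of F_i and 1/Dis_i over the set.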

Fig. 1 Two-dimensional space division result after mapping

Region A: $\{ {Y_i}|{F_i} \leqslant {F_{{\rm{ave}}}} \cap \dfrac{1}{{{\rm{Di}}{{\rm{s}}_i}}} \leqslant {\rm{Di}}{{\rm{s}}_{{\rm{ave}}}},\;{Y_i} \in A\}$ . Individuals in region A are optimal in both convergence and diversity.

Region B: $\{ {Y_i}|{F_i} \leqslant {F_{{\rm{ave}}}} \cap \dfrac{1}{{{\rm{Di}}{{\rm{s}}_i}}} > {\rm{Di}}{{\rm{s}}_{{\rm{ave}}}},\;{Y_i} \in A\}$ . Individuals in region B have good convergence but poor diversity.

Region C: $\{ {Y_i}|{F_i} > {F_{{\rm{ave}}}} \cap \dfrac{1}{{{\rm{Di}}{{\rm{s}}_i}}} \leqslant {\rm{Di}}{{\rm{s}}_{{\rm{ave}}}},\;{Y_i} \in A\}$ . Individuals in region C have poor convergence but good diversity.

Region D: $\{ {Y_i}|{F_i} > {F_{{\rm{ave}}}} \cap \dfrac{1}{{{\rm{Di}}{{\rm{s}}_i}}} > {\rm{Di}}{{\rm{s}}_{{\rm{ave}}}},\;{Y_i} \in A\}$ . Individuals in region D are poor in both convergence and diversity.

 ${\rm{Valu}}{{\rm{e}}_i} = \alpha \left( {\frac{1}{{{\rm{Di}}{{\rm{s}}_i}}}} \right) + \beta {F_i}$ (8)
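Putting Eqs. (6)-(8) together, a sketch of the region assignment of Fig. 1 and the Value score of Eq. (8). The weights α = β = 0.5 are illustrative choices, not values from the paper:

```python
def classify_regions(F, Dis, alpha=0.5, beta=0.5):
    """Map each archive member to region A-D by comparing F_i and 1/Dis_i
    with the population averages of Eqs. (6)-(7), and score it with Eq. (8)."""
    inv = [1.0 / d for d in Dis]
    f_ave = sum(F) / len(F)          # Eq. (6)
    dis_ave = sum(inv) / len(inv)    # Eq. (7)
    regions, values = [], []
    for fi, di in zip(F, inv):
        if fi <= f_ave and di <= dis_ave:
            regions.append('A')      # best convergence and diversity
        elif fi <= f_ave:
            regions.append('B')      # good convergence, poor diversity
        elif di <= dis_ave:
            regions.append('C')      # poor convergence, good diversity
        else:
            regions.append('D')      # poor in both
        values.append(alpha * di + beta * fi)   # Eq. (8)
    return regions, values
```

A low Value score is desirable here, since both a small F_i (close to the ideal point) and a small 1/Dis_i (a sparse neighborhood) are preferred.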

Fig. 2 Flowchart of the objective space mapping strategy
2.3 Opposition-based learning strategy

 $\left\{ {\begin{array}{*{20}{l}} x_{id}^* = k(\min ({x_d}) + \max ({x_d})) - {x_{id}} \\ x_{id}^* = k({a_d} + {b_d}) - {x_{id}} \\ {x_{id}^* = {\rm{rand}}({a_d},\;{b_d}),\;{\rm{if}}\;x_{id}^* < {x_{d\min }}||x_{id}^* > {x_{d\max }}} \end{array}} \right.$ (9)

 $x_{id}^* = {x_{d\min }} + {x_{d\max }} - {x_{id}}$ (10)

 $\left\{ {\begin{array}{*{20}{l}} \dfrac{{|\min (f_i^t) - \min (f_i^{t - 10})|}}{{|\min (f_i^t)|}} < 0.005\quad ,\;t > 10 \\ {\dfrac{{|\max (f_i^t) - \max (f_i^{t - 10})|}}{{|\max (f_i^t)|}} < 0.005\quad ,\;t > 10} \end{array}} \right.$ (11)
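The generalized opposition operator of Eqs. (9)-(10) can be sketched as follows; a component that leaves its bounds is resampled uniformly, as in the third case of Eq. (9):

```python
import random

def opposition(x, a, b, k=None):
    """Generalized opposition-based learning, Eqs. (9)-(10):
    x*_d = k * (a_d + b_d) - x_d, where [a_d, b_d] bounds dimension d.
    k = 1 recovers the plain opposite point of Eq. (10)."""
    if k is None:
        k = random.random()          # random scaling factor
    x_star = []
    for d in range(len(x)):
        xd = k * (a[d] + b[d]) - x[d]
        if xd < a[d] or xd > b[d]:
            xd = random.uniform(a[d], b[d])   # boundary repair
        x_star.append(xd)
    return x_star
```

The stagnation test of Eq. (11), comparing the population's extreme objective values against those of ten generations earlier, decides when this operator is triggered.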

2.4 MOPSO-OSM procedure

The overall procedure of MOPSO-OSM is as follows:

1) Initialize the algorithm;

2) Check the stopping condition; if it is satisfied, stop the iteration; otherwise go to 3);

3) Check whether the population is trapped in a local optimum; if so, execute the opposition-based learning strategy; otherwise go directly to 4);

4) Update the velocity and position of each individual with Eqs. (1) and (2);

5) Compute the fitness of each individual;

6) Update each individual's personal best by comparing its current fitness with that of the previous generation;

7) Select the nondominated solutions;

8) Update the external archive with the objective space mapping strategy;

9) Randomly select an individual from the external archive to update the global best of the swarm, and go to 2).
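Step 7) above relies on Pareto dominance; a minimal sketch of nondominated filtering for minimization:

```python
def nondominated(Y):
    """Keep the mutually nondominated objective vectors (minimization)."""
    def dominates(u, v):
        # u dominates v: no worse in every objective, strictly better in one
        return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))
    return [y for y in Y if not any(dominates(z, y) for z in Y if z is not y)]
```

This O(n²m) scan is enough for a sketch; the filtered set is what the objective space mapping strategy of Section 2.2 then maps and truncates.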

3 Experimental results and analysis

3.1 Test functions

3.2 Performance indexes

 ${\rm{GD}} = \frac{{\sqrt {\displaystyle\sum\limits_{i = 1}^n {d_i^2} } }}{n}$ (12)

The SP index is commonly used to evaluate the diversity of the population; it is computed as in Eq. (13), and a smaller value indicates better diversity.

${\rm{SP}} = \sqrt {\frac{1}{{n - 1}}\displaystyle\sum\limits_{i = 1}^n {{{(\overline d - {d_i})}^2}} }$ (13)

The IGD index evaluates both the diversity and the convergence of the population; the smaller the IGD, the better the performance of the algorithm. It is computed as

 ${\rm{IGD}} = \frac{{\displaystyle\sum\limits_{{{{x}}^*} \in {{{P}}^*}} {d({{{x}}^*},\;{{P}})} }}{{|{{{P}}^*}|}}$ (14)
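The three indexes can be sketched directly from Eqs. (12)-(14); using Euclidean distances for the d_i terms is an assumption, since this excerpt does not define them:

```python
import math

def gd(P, ref):
    """GD, Eq. (12): root of the summed squared distances from each obtained
    point to its nearest reference point, divided by |P|."""
    d = [min(math.dist(p, q) for q in ref) for p in P]
    return math.sqrt(sum(di * di for di in d)) / len(P)

def sp(P):
    """SP, Eq. (13): spread of nearest-neighbor distances within the
    obtained set; smaller means a more uniform distribution."""
    d = [min(math.dist(p, q) for q in P if q is not p) for p in P]
    d_bar = sum(d) / len(d)
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (len(P) - 1))

def igd(ref, P):
    """IGD, Eq. (14): mean distance from each reference point to the
    nearest obtained point."""
    return sum(min(math.dist(q, p) for p in P) for q in ref) / len(ref)
```

When the obtained set coincides with the reference set, both GD and IGD are zero, and SP is zero for evenly spaced points.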

3.3 Compared algorithms and their settings

3.4 Results and analysis

3.4.1 Convergence analysis

3.4.2 Diversity analysis

3.4.3 Overall analysis

Fig. 3 Pareto fronts obtained by the six algorithms on the 5-objective WFG3 test function
Fig. 4 GD curves of MOPSO-OSM on WFG3 and WFG4 with five objectives

4 Conclusions

References

[1] ISHIBUCHI H, TSUKAMOTO N, NOJIMA Y. Evolutionary many-objective optimization: a short review[C]//2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence). Hong Kong, China, 2008: 266-271.
[2] LIU Jianchang, LI Fei, WANG Honghai, et al. Survey on evolutionary many-objective optimization algorithms[J]. Control and decision, 2018, 33(5): 879-887.
[3] DEB K, PRATAP A, AGARWAL S, et al. A fast and elitist multiobjective genetic algorithm: NSGA-II[J]. IEEE transactions on evolutionary computation, 2002, 6(2): 182-197. DOI:10.1109/4235.996017
[4] DEB K, JAIN H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: solving problems with box constraints[J]. IEEE transactions on evolutionary computation, 2014, 18(4): 577-601. DOI:10.1109/TEVC.2013.2281535
[5] TANG Kaixiang, XU Feng. Improved NSGA-III algorithm based on big data clustering[J]. Information recording materials, 2020, 21(5): 109-112.
[6] ZOU Juan, FU Liuwei, ZHENG Jinhua, et al. A many-objective evolutionary algorithm based on rotated grid[J]. Applied soft computing, 2018, 67: 596-609. DOI:10.1016/j.asoc.2018.02.031
[7] LIN Qiuzhen, LIU Songbai, TANG Chaoyu, et al. Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems[J]. IEEE transactions on evolutionary computation, 2018, 22(1): 32-46. DOI:10.1109/TEVC.2016.2631279
[8] ZHANG Qingfu, LI Hui. MOEA/D: a multiobjective evolutionary algorithm based on decomposition[J]. IEEE transactions on evolutionary computation, 2007, 11(6): 712-731.
[9] LIU Hailin, GU Fangqing, ZHANG Qingfu. Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems[J]. IEEE transactions on evolutionary computation, 2014, 18(3): 450-455. DOI:10.1109/TEVC.2013.2281533
[10] LI Ke, DEB K, ZHANG Qingfu, et al. An evolutionary many-objective optimization algorithm based on dominance and decomposition[J]. IEEE transactions on evolutionary computation, 2015, 19(5): 694-716. DOI:10.1109/TEVC.2014.2373386
[11] LIU Songbai, LIN Qiuzhen, TAN K C, et al. A fuzzy decomposition-based multi/many-objective evolutionary algorithm[J]. IEEE transactions on cybernetics, 2020: 1-15. DOI:10.1109/TCYB.2020.3008697
[12] ZITZLER E, KÜNZLI S. Indicator-based selection in multiobjective search[C]//International Conference on Parallel Problem Solving from Nature. Berlin, Heidelberg: Springer, 2004: 832-842.
[13] BADER J, ZITZLER E. HypE: an algorithm for fast hypervolume-based many-objective optimization[J]. Evolutionary computation, 2011, 19(1): 45-76. DOI:10.1162/EVCO_a_00009
[14] MENCHACA-MENDEZ A, COELLO C A C. GDE-MOEA: a new MOEA based on the generational distance indicator and ε-dominance[C]//2015 IEEE Congress on Evolutionary Computation (CEC). Sendai, Japan: IEEE, 2015: 947-955.
[15] TIAN Ye, CHENG Ran, ZHANG Xingyi, et al. An indicator-based multiobjective evolutionary algorithm with reference point adaptation for better versatility[J]. IEEE transactions on evolutionary computation, 2018, 22(4): 609-622. DOI:10.1109/TEVC.2017.2749619
[16] LI Fei, CHENG Ran, LIU Jianchang, et al. A two-stage R2 indicator based evolutionary algorithm for many-objective optimization[J]. Applied soft computing, 2018, 67: 245-260. DOI:10.1016/j.asoc.2018.02.048
[17] LI Miqing, YANG Shengxiang, LIU Xiaohui. Bi-goal evolution for many-objective optimization problems[J]. Artificial intelligence, 2015, 228: 45-65. DOI:10.1016/j.artint.2015.06.007
[18] KENNEDY J, EBERHART R. Particle swarm optimization[C]//Proceedings of ICNN'95 - International Conference on Neural Networks. Perth, Australia, 1995: 1942-1948.
[19] SHI Y, EBERHART R C. A modified particle swarm optimizer[C]//1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360). Anchorage, USA, 1998: 73-79.
[20] CLERC M, KENNEDY J. The particle swarm - explosion, stability, and convergence in a multidimensional complex space[J]. IEEE transactions on evolutionary computation, 2002, 6(1): 58-73. DOI:10.1109/4235.985692
[21] VENTER G, SOBIESZCZANSKI-SOBIESKI J. Multidisciplinary optimization of a transport aircraft wing using particle swarm optimization[J]. Structural and multidisciplinary optimization, 2004, 26: 121-131. DOI:10.1007/s00158-003-0318-3
[22] MA Borong, HUA Jun, MA Zhixin, et al. IMOPSO: an improved multi-objective particle swarm optimization algorithm[C]//2016 5th International Conference on Computer Science and Network Technology (ICCSNT). Changchun, China, 2016: 376-380.
[23] QI Changxing, BI Yiming, HAN Huihua, et al. A hybrid particle swarm optimization algorithm[C]//2017 3rd IEEE International Conference on Computer and Communications (ICCC). Chengdu, China, 2017: 2187-2190.
[24] KENNEDY J. Bare bones particle swarms[C]//Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS'03 (Cat. No. 03EX706). Indianapolis, USA, 2003: 80-87.
[25] YUE Caitong, QU Boyang, LIANG Jing. A multiobjective particle swarm optimizer using ring topology for solving multimodal multiobjective problems[J]. IEEE transactions on evolutionary computation, 2018, 22(5): 805-817. DOI:10.1109/TEVC.2017.2754271
[26] HOU Xiang, PU Guolin. Improvement of its cooperative particle swarm optimization algorithm and simulation[J]. Computer engineering and design, 2015, 36(6): 1530-1534.
[27] LIN Qiuzhen, LI Jianqiang, DU Zhihua, et al. A novel multi-objective particle swarm optimization with multiple search strategies[J]. European journal of operational research, 2015, 247(3): 732-744. DOI:10.1016/j.ejor.2015.06.071
[28] ZAIN M Z B M, KANESAN J, CHUAH J H, et al. A multi-objective particle swarm optimization algorithm based on dynamic boundary search for constrained optimization[J]. Applied soft computing, 2018, 70: 680-700. DOI:10.1016/j.asoc.2018.06.022
[29] COELLO C A C, LECHUGA M S. MOPSO: a proposal for multiple objective particle swarm optimization[C]//Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No. 02TH8600). Honolulu, USA, 2002: 1051-1056.
[30] WANG Hui, WU Zhijian, RAHNAMAYAN S, et al. Enhancing particle swarm optimization using generalized opposition-based learning[J]. Information sciences, 2011, 181(20): 4699-4714. DOI:10.1016/j.ins.2011.03.016
[31] MA Can, LIU Jian, YU Fangping. Research on cuckoo algorithm with simulated annealing[J]. Journal of Chinese computer systems, 2016, 37(9): 2029-2034. DOI:10.3969/j.issn.1000-1220.2016.09.026
[32] CHENG Ran, JIN Yaochu, OLHOFER M, et al. A reference vector guided evolutionary algorithm for many-objective optimization[J]. IEEE transactions on evolutionary computation, 2016, 20(5): 773-791. DOI:10.1109/TEVC.2016.2519378
[33] CORNE D W, JERRAM N R, KNOWLES J D, et al. PESA-II: region-based selection in evolutionary multiobjective optimization[C]//Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation. Morgan Kaufmann Publishers Inc, 2001: 283-290.