
Adaptive grasping strategy of robot based on Gaussian process
CHEN Youdong, GUO Jiaxin, TAO Yong
School of Mechanical Engineering and Automation, Beijing University of Aeronautics and Astronautics, Beijing 100083, China
Received: 2016-08-10; Accepted: 2016-12-09; Published online: 2016-12-23 09:42
Foundation item: National High-tech Research and Development Program of China (2014AA041601); Beijing Science and Technology Plan (D161100003116002)
Corresponding author. CHEN Youdong, E-mail:chenyd@buaa.edu.cn
Abstract: When a robot grasps an object, the pose of the object may change frequently. To enable the robot to adapt to changes in the pose of the object during motion, an adaptive grasping strategy based on Gaussian processes is proposed. The method maps the observation variables directly to the joint angles, which allows the robot to learn from samples and eliminates both the calibration of the robot vision system and the inverse kinematics computation. First, the robot was dragged to grasp the object, and the observation variables of the object together with the corresponding joint angles of the robot were recorded. Second, a Gaussian process model correlating the observation variables and the joint angles was trained with the recorded samples. Finally, after new observation variables are acquired, the joint angles for the grasping operation are obtained from the trained Gaussian process model. Experiments show that a UR3 robot can successfully grasp objects after training.
Key words: Gaussian process; adaptive grasping; robot control; robot vision; learning from demonstration

1 Adaptive grasping of the robot

The strategy learns a direct mapping from the observation variables of the target object to the joint angles of the robot:

$\boldsymbol{q} = f(\boldsymbol{s})$  (1)

where $\boldsymbol{s} = (x, y, \theta)^{\mathrm{T}}$ is the vector of observation variables extracted from the image and $\boldsymbol{q} \in \mathbb{R}^{6}$ is the vector of joint angles of the six-axis robot.

2 Modeling based on Gaussian process

Fig. 1 Adaptive grasping based on Gaussian process
2.1 Model training

Each joint angle is modeled as a noisy function of the observation variables[19]:

$q_i = f(\boldsymbol{s}_i) + \varepsilon, \quad \varepsilon \sim N(0, \sigma_n^2)$  (2)

where $\boldsymbol{s}_i$ is the observation vector of the $i$th sample, $q_i$ is the corresponding joint angle, and $\sigma_n^2$ is the noise variance. A zero-mean Gaussian process prior is placed on the latent function:

$f(\boldsymbol{s}) \sim \mathrm{GP}(0, k(\boldsymbol{s}, \boldsymbol{s}'))$  (3)

The squared-exponential covariance function is adopted:

$k(\boldsymbol{s}_i, \boldsymbol{s}_j) = \sigma_f^2 \exp\left(-\dfrac{\|\boldsymbol{s}_i - \boldsymbol{s}_j\|^2}{2l^2}\right)$  (4)

where $\sigma_f$ is the signal standard deviation and $l$ is the length scale. For the $n$ training samples $\boldsymbol{S} = [\boldsymbol{s}_1, \cdots, \boldsymbol{s}_n]$, the covariance matrix $\boldsymbol{K}$ is built element-wise:

$[\boldsymbol{K}]_{ij} = k(\boldsymbol{s}_i, \boldsymbol{s}_j)$  (5)

so that the vector of recorded joint angles $\boldsymbol{q} = (q_1, \cdots, q_n)^{\mathrm{T}}$ is jointly Gaussian:

$\boldsymbol{q} \sim N(\boldsymbol{0}, \boldsymbol{K} + \sigma_n^2 \boldsymbol{I})$  (6)

The hyperparameters $\boldsymbol{\theta} = (\sigma_f, l, \sigma_n)$ are determined by maximizing the log marginal likelihood of the training samples:

$\log p(\boldsymbol{q} \mid \boldsymbol{S}, \boldsymbol{\theta}) = -\dfrac{1}{2} \boldsymbol{q}^{\mathrm{T}} (\boldsymbol{K} + \sigma_n^2 \boldsymbol{I})^{-1} \boldsymbol{q} - \dfrac{1}{2} \log \left|\boldsymbol{K} + \sigma_n^2 \boldsymbol{I}\right| - \dfrac{n}{2} \log 2\pi$  (7)

$\boldsymbol{\theta}^{*} = \arg\max_{\boldsymbol{\theta}} \log p(\boldsymbol{q} \mid \boldsymbol{S}, \boldsymbol{\theta})$  (8)

A separate Gaussian process is trained for each of the six joint angles.
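The training computation of this section can be sketched in NumPy. This is a minimal illustration with toy sample values and fixed hyperparameters, not the implementation used in the paper:

```python
import numpy as np

def rbf_kernel(A, B, sigma_f=1.0, length=1.0):
    """Squared-exponential kernel: sigma_f^2 * exp(-||a-b||^2 / (2 l^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return sigma_f**2 * np.exp(-0.5 * sq / length**2)

def log_marginal_likelihood(X, y, sigma_f, length, sigma_n):
    """log p(y | X) of GP regression with Gaussian observation noise sigma_n."""
    n = len(X)
    K = rbf_kernel(X, X, sigma_f, length) + sigma_n**2 * np.eye(n)
    L = np.linalg.cholesky(K)  # K = L L^T for stable inversion
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * n * np.log(2 * np.pi))

# Toy training set: normalized observation variables -> one joint angle.
X = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]])
y = np.array([3.2, 14.7, 13.0])
lml = log_marginal_likelihood(X, y, sigma_f=10.0, length=1.0, sigma_n=0.1)
```

In practice the hyperparameters would be chosen by maximizing this quantity, e.g. with a gradient-based optimizer.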
2.2 Prediction of joint variables

For a new observation $\boldsymbol{s}_*$, the recorded joint angles and the unknown function value $f_*$ are jointly Gaussian:

$\begin{bmatrix} \boldsymbol{q} \\ f_* \end{bmatrix} \sim N\left(\boldsymbol{0}, \begin{bmatrix} \boldsymbol{K} + \sigma_n^2 \boldsymbol{I} & \boldsymbol{k}_* \\ \boldsymbol{k}_*^{\mathrm{T}} & k(\boldsymbol{s}_*, \boldsymbol{s}_*) \end{bmatrix}\right)$  (9)

where $\boldsymbol{k}_* = (k(\boldsymbol{s}_1, \boldsymbol{s}_*), \cdots, k(\boldsymbol{s}_n, \boldsymbol{s}_*))^{\mathrm{T}}$. Conditioning on the training samples yields the predictive mean and variance:

$\bar{f}_* = \boldsymbol{k}_*^{\mathrm{T}} (\boldsymbol{K} + \sigma_n^2 \boldsymbol{I})^{-1} \boldsymbol{q}$  (10)

$\operatorname{var}(f_*) = k(\boldsymbol{s}_*, \boldsymbol{s}_*) - \boldsymbol{k}_*^{\mathrm{T}} (\boldsymbol{K} + \sigma_n^2 \boldsymbol{I})^{-1} \boldsymbol{k}_*$  (11)

The predictive mean $\bar{f}_*$ of each joint's Gaussian process is taken as the joint angle commanded for the grasping operation.
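The prediction step reduces to a few lines of linear algebra. The sketch below (again one GP per joint, with assumed hyperparameters) recovers a recorded base-joint angle when queried at a training input:

```python
import numpy as np

def gp_predict(X, y, X_star, sigma_f=1.0, length=1.0, sigma_n=0.1):
    """Predictive mean and variance of GP regression at test inputs X_star."""
    def k(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return sigma_f**2 * np.exp(-0.5 * sq / length**2)
    K = k(X, X) + sigma_n**2 * np.eye(len(X))
    K_star = k(X_star, X)               # cross-covariances k_* for each test point
    mean = K_star @ np.linalg.solve(K, y)   # predictive mean k_*^T (K+sn^2 I)^-1 y
    var = (sigma_f**2                        # k(s*, s*) equals sigma_f^2 for RBF
           - np.sum(K_star * np.linalg.solve(K, K_star.T).T, axis=1))
    return mean, var

# One GP for the base joint: observation variables -> base angle (degrees).
X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
y = np.array([-120.5, -121.4, -110.5])
mean, var = gp_predict(X, y, X[:1])  # query back at a training input
```

With small noise, the predictive mean at a training input is close to the recorded angle and the predictive variance is small, which is what makes the trained model usable for commanding the robot.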

3 Experiments

3.1 Experimental setup

Fig. 2 Experimental platform

3.1.1 UR3 robot

Fig. 3 UR3 robot and its joint axes

3.1.2 Camera and observation variables

Fig. 4 Pose of target object

3.1.3 End effector

Fig. 5 Structure of end effector

3.1.4 Experimental task

3.2 Data and validation

3.2.1 Data acquisition

Fig. 6 Manual drag teaching programming

| No. | x/pixel | y/pixel | θ/(°) | Base/(°) | Shoulder/(°) | Elbow/(°) | Wrist 1/(°) | Wrist 2/(°) | Wrist 3/(°) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 805 | 647 | 87 | 3.2 | -120.5 | -78.5 | -63.2 | 115.9 | -78.4 |
| 2 | 964 | 700 | 70 | 14.7 | -121.4 | -80.8 | -50.2 | 107.6 | -62.7 |
| 3 | 919 | 557 | 80 | 13.0 | -110.5 | -96.2 | -50.2 | 107.6 | -62.7 |
| 4 | 951 | 632 | 32 | 16.7 | -116.1 | -91.4 | -39.5 | 99.7 | -15.4 |
| 5 | 895 | 577 | 9 | 19.3 | -112.3 | -95.9 | -39.4 | 89.2 | 7.24 |
| 6 | 969 | 483 | 12 | 26.5 | -105.3 | -104.6 | -39.4 | 90.0 | 7.7 |
| 7 | 809 | 516 | 16 | 15.7 | -108.8 | -101.2 | -39.5 | 91.5 | -2.2 |
| 8 | 873 | 593 | 31 | 18.2 | -111.5 | -102.2 | -28.9 | 91.6 | -12.4 |
| 9 | 975 | 584 | 52 | 21.87 | -111.2 | -101.6 | -31.1 | 95.9 | -26.1 |
| 10 | 820 | 711 | 63 | 10.2 | -120.4 | -83.9 | -45.4 | 103.6 | -49.8 |
| 11 | 1044 | 655 | 29 | 5.9 | -113.2 | -99.8 | -32.3 | 109.0 | -31.6 |
| 12 | 803 | 594 | 39 | 5.9 | -113.2 | -99.8 | -32.3 | 109.0 | -31.6 |
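For reference, the demonstration samples recorded above can be arranged directly as training arrays for the per-joint Gaussian processes. The standardization step is an assumption, since the paper does not state its preprocessing:

```python
import numpy as np

# Observation variables (x/pixel, y/pixel, theta/deg) from the demonstrations.
X_train = np.array([
    [805, 647, 87], [964, 700, 70], [919, 557, 80], [951, 632, 32],
    [895, 577,  9], [969, 483, 12], [809, 516, 16], [873, 593, 31],
    [975, 584, 52], [820, 711, 63], [1044, 655, 29], [803, 594, 39],
], dtype=float)

# Corresponding joint angles (base, shoulder, elbow, wrist 1..3) in degrees.
Y_train = np.array([
    [ 3.2,  -120.5,  -78.5, -63.2, 115.9, -78.4],
    [14.7,  -121.4,  -80.8, -50.2, 107.6, -62.7],
    [13.0,  -110.5,  -96.2, -50.2, 107.6, -62.7],
    [16.7,  -116.1,  -91.4, -39.5,  99.7, -15.4],
    [19.3,  -112.3,  -95.9, -39.4,  89.2,   7.24],
    [26.5,  -105.3, -104.6, -39.4,  90.0,   7.7],
    [15.7,  -108.8, -101.2, -39.5,  91.5,  -2.2],
    [18.2,  -111.5, -102.2, -28.9,  91.6, -12.4],
    [21.87, -111.2, -101.6, -31.1,  95.9, -26.1],
    [10.2,  -120.4,  -83.9, -45.4, 103.6, -49.8],
    [ 5.9,  -113.2,  -99.8, -32.3, 109.0, -31.6],
    [ 5.9,  -113.2,  -99.8, -32.3, 109.0, -31.6],
])

# Standardizing the pixel/angle inputs keeps a single kernel length scale
# reasonable across the three observation dimensions.
X_mean, X_std = X_train.mean(0), X_train.std(0)
X_norm = (X_train - X_mean) / X_std
```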

3.2.2 Adaptive grasping of new targets

Fig. 7 Observation variables of target object

| No. | Base/(°) | Shoulder/(°) | Elbow/(°) | Wrist 1/(°) | Wrist 2/(°) | Wrist 3/(°) |
|---|---|---|---|---|---|---|
| 1 | 16.3 | -104.4 | -109.1 | -33.9 | 89.9 | 5.4 |
| 2 | 12.6 | -121.3 | -81.5 | -50.7 | 110.3 | -65.5 |
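Predicted joint angles such as these must then be sent to the UR3 controller. As a sketch of one common approach (not necessarily the one used in the paper): UR controllers accept URScript commands over a TCP socket, and `movej` takes joint positions in radians. The IP address below is a placeholder:

```python
import math

def movej_command(angles_deg, a=1.0, v=0.5):
    """Format a URScript movej command from joint angles given in degrees."""
    rad = [math.radians(d) for d in angles_deg]
    joints = ", ".join(f"{r:.5f}" for r in rad)
    return f"movej([{joints}], a={a}, v={v})\n"

# Predicted angles for target 1 from the table above.
cmd = movej_command([16.3, -104.4, -109.1, -33.9, 89.9, 5.4])

# Sending to the controller would look like this (placeholder address):
# import socket
# with socket.create_connection(("192.168.1.10", 30002)) as s:
#     s.sendall(cmd.encode())
```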

Fig. 8 Adaptive grasping of robot

| No. | x/pixel | y/pixel | θ/(°) | Base/(°) | Shoulder/(°) | Elbow/(°) | Wrist 1/(°) | Wrist 2/(°) | Wrist 3/(°) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 831 | 630 | 6 | 15.0 | -115.6 | -94.4 | -32.6 | 91.9 | -32.6 |
| 2 | 966 | 646 | 16 | 21.9 | -116.6 | -92.5 | -33.5 | 90.6 | 5.2 |
| 3 | 906 | 558 | 28 | 18.5 | -110.7 | -99.3 | -37.7 | 94.0 | -10.5 |
| 4 | 774 | 472 | 18 | 13.1 | -104.8 | -107.7 | -37.2 | 94.1 | -8.6 |
| 5 | 712 | 637 | 52 | 2.5 | -117.0 | -89.4 | -45.5 | 108.1 | -47.5 |
| 6 | 903 | 666 | 30 | 16.1 | -118.3 | -89.2 | -37.6 | 96.7 | -13.1 |
| 7 | 1012 | 616 | 54 | 21.1 | -114.9 | -92.0 | -42.7 | 98.6 | -30.2 |
| 8 | 1020 | 489 | 77 | 21.4 | -106.2 | -101.7 | -49.1 | 102.2 | -52.5 |
| 9 | 722 | 429 | 33 | 9.0 | -102.1 | -110.1 | -41.8 | 98.9 | -27.0 |
| 10 | 649 | 610 | 27 | 1.9 | -114.9 | -93.9 | -40.1 | 103.0 | -26.2 |
| 11 | 908 | 749 | 51 | 12.6 | -124.4 | -79.8 | -42.4 | 103.6 | -34.2 |
| 12 | 637 | 780 | 15 | -0.7 | -126.7 | -79.4 | -36.3 | 103.3 | -15.3 |

Fig. 9 Distribution of training and testing samples in 3D space
Fig. 10 Distribution of training and testing samples in the pixel plane

| No. | x/pixel | y/pixel | θ/(°) |
|---|---|---|---|
| 1 | 1063 | 968 | 9 |
| 2 | 968 | 692 | 3 |
| 3 | 526 | 813 | 59 |
| 4 | 840 | 853 | 43 |
| 5 | 852 | 453 | 25 |
| 6 | 519 | 671 | 11 |

| No. | x/pixel | y/pixel | θ/(°) |
|---|---|---|---|
| 1 | 813 | 653 | 88 |
| 2 | 966 | 701 | 69 |
| 3 | 906 | 547 | 80 |
| 4 | 941 | 629 | 31 |
| 5 | 884 | 569 | 9 |
| 6 | 977 | 485 | 13 |
| 7 | 816 | 524 | 14 |
| 8 | 870 | 589 | 32 |
| 9 | 983 | 593 | 51 |
| 10 | 818 | 709 | 63 |
| 11 | 1045 | 655 | 29 |
| 12 | 806 | 594 | 39 |

4 Conclusions

1) The proposed adaptive grasping strategy directly correlates the observation variables of the target object with the joint angles of the robot through a Gaussian process, and achieves adaptive grasping of the target object without calibration of the robot vision system or inverse kinematics computation.

2) Learning from demonstration samples eliminates the calibration of the vision system and reduces the expertise required of the user. Adjusting the robot in joint space requires neither inverse kinematics nor restrictions on the motion of the joint axes, so the working capability of the robot can be fully exploited.

3) In a fixed scene, the correlation between the joint coordinates of the robot and the observation variables was established with relatively few training samples, giving the robot the ability to learn from samples. This reduces the programming burden on the user and facilitates rapid deployment of robot systems into application.

[1] AHRARY A, LUDENA R D A. A novel approach to design of an under-actuated mechanism for grasping in agriculture application[M]//LEE R. Applied computing and information technology. Berlin: Springer, 2014: 31-45.
[2] MANTI M, HASSAN T, PASSETTI G, et al. An under-actuated and adaptable soft robotic gripper[M]//PRESCOTT T J, LEPORA N F, MURA A, et al. Biomimetic and biohybrid systems. Berlin: Springer, 2015: 64-74.
[3] BELZILE B, BIRGLEN L. A compliant self-adaptive gripper with proprioceptive haptic feedback[J]. Autonomous Robots, 2014, 36(1): 79-91.
[4] PETKOVIĆ D, ISSA M, PAVLOVIĆ N D, et al. Adaptive neuro fuzzy controller for adaptive compliant robotic gripper[J]. Expert Systems with Applications, 2012, 39(18): 13295-13304. DOI:10.1016/j.eswa.2012.05.072
[5] HOFFMANN H, SCHENCK W, MÖLLER R. Learning visuomotor transformations for gaze-control and grasping[J]. Biological Cybernetics, 2005, 93(2): 119-130. DOI:10.1007/s00422-005-0575-x
[6] SAXENA A, DRIEMEYER J, NG A Y. Robotic grasping of novel objects using vision[J]. International Journal of Robotics Research, 2008, 27(2): 157-173. DOI:10.1177/0278364907087172
[7] LIPPIELLO V, RUGGIERO F, SICILIANO B, et al. Visual grasp planning for unknown objects using a multifingered robotic hand[J]. IEEE/ASME Transactions on Mechatronics, 2013, 18(3): 1050-1059. DOI:10.1109/TMECH.2012.2195500
[8] ZHANG Z Y. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2000, 22(11): 1330-1334.
[9] WANG Y, LIU C J, YANG X Y, et al. Online calibration of visual measurement system based on industrial robot[J]. Robot, 2011, 33(3): 299-302 (in Chinese).
[10] ZHANG L J, HUANG X X, FENG W C, et al. Space robot vision calibration with reference objects from motion trajectories[J]. Robot, 2016, 38(2): 193-199 (in Chinese).
[11] CORKE P I. Visual control of robots: High-performance visual servoing[M]. New York: Wiley, 1997.
[12] SIRADJUDDIN I, BEHERA L, MCGINNITY T M, et al. A position based visual tracking system for a 7 DOF robot manipulator using a kinect camera[C]//International Joint Conference on Neural Networks. Piscataway, NJ: IEEE Press, 2012: 1-7.
[13] THOMAS J, LOIANNO G, SREENATH K, et al. Toward image based visual servoing for aerial grasping and perching[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). Piscataway, NJ: IEEE Press, 2014: 2113-2118.
[14] NIE L, HUANG Q. Inverse kinematics for 6-DOF manipulator by the method of sequential retrieval[C]//Proceedings of the International Conference on Mechanical Engineering and Material Science, 2012: 255-258.
[15] CHAN T F, DUBEY R V. A weighted least-norm solution based scheme for avoiding joint limits for redundant joint manipulators[J]. IEEE Transactions on Robotics & Automation, 1995, 11(2): 286-292.
[16] SHIMIZU M, KAKUYA H, YOON W K, et al. Analytical inverse kinematic computation for 7-DOF redundant manipulators with joint limits and its application to redundancy resolution[J]. IEEE Transactions on Robotics, 2008, 24(5): 1131-1142. DOI:10.1109/TRO.2008.2003266
[17] LUO R C, LIN T W, TSAI Y H. Analytical inverse kinematic solution for modularized 7-DoF redundant manipulators with offsets at shoulder and wrist[C]//International Conference on Intelligent Robots and Systems. Piscataway, NJ: IEEE Press, 2014: 516-521.
[18] EWERTON M, NEUMANN G, LIOUTIKOV R, et al. Learning multiple collaborative tasks with a mixture of interaction primitives[C]//2015 IEEE International Conference on Robotics and Automation (ICRA). Piscataway, NJ: IEEE Press, 2015: 1535-1542.
[19] RASMUSSEN C E. Gaussian processes for machine learning[M]. Cambridge: MIT Press, 2006.
[20] PARASCHOS A, DANIEL C, PETERS J, et al. Probabilistic movement primitives[C]//Advances in Neural Information Processing Systems (NIPS), 2013: 2616-2624.
[21] CALANDRA R, SEYFARTH A, PETERS J, et al. An experimental comparison of Bayesian optimization for bipedal locomotion[C]//IEEE International Conference on Robotics and Automation. Piscataway, NJ: IEEE Press, 2014: 1951-1958.
[22] CULLY A, CLUNE J, TARAPORE D, et al. Robots that can adapt like animals[J]. Nature, 2015, 521(7553): 503-507. DOI:10.1038/nature14422

#### Article information

CHEN Youdong, GUO Jiaxin, TAO Yong

Adaptive grasping strategy of robot based on Gaussian process

Journal of Beijing University of Aeronautics and Astronautics, 2017, 43(9): 1738-1745
http://dx.doi.org/10.13700/j.bh.1001-5965.2016.0660