IEEE/CAA Journal of Automatica Sinica, 2018, Vol. 5, Issue 3: 670-682
A Mode-Switching Motion Control System for Reactive Interaction and Surface Following Using Industrial Robots
Danial Nakhaeinia, Pierre Payeur, Robert Laganière     
School of Electrical Engineering and Computer Science, University of Ottawa, Ontario K1N6N5, Canada
Abstract: This work proposes a sensor-based control system for fully automated object detection and exploration (surface following) with a redundant industrial robot. The control system utilizes both offline and online trajectory planning for reactive interaction with objects of different shapes and colors using RGB-D vision and proximity/contact sensor feedback, where no prior knowledge of the objects is available. The RGB-D sensor is used to collect raw 3D information of the environment. The data is then processed to segment an object of interest in the scene. In order to completely explore the object, a coverage path planning technique is proposed that uses a dynamic 3D occupancy grid method to generate a primary (offline) trajectory. However, RGB-D sensors are very sensitive to lighting and provide only limited accuracy on the depth measurements. Therefore, the coverage path planning is further assisted by real-time adaptive path planning using a fuzzy self-tuning proportional integral derivative (PID) controller. The latter allows the robot to dynamically update the 3D model through a specially designed instrumented compliant wrist and to adapt to the surfaces it approaches or touches. A mode-switching scheme is also proposed to efficiently integrate the interaction modes and smoothly switch between them under certain conditions. Experimental results using a CRS-F3 manipulator equipped with a custom-built compliant wrist demonstrate the feasibility and performance of the proposed method.
Key words: Adaptive control     fuzzy-PID controller     manipulator control     motion planning     surface following    

Sensor-based control in robotics can contribute to the ever-growing industrial demand for improved speed and precision, especially in uncalibrated working environments. However, compared with the kinematic capabilities of common industrial robots, their sensing abilities remain underdeveloped. Force sensors are mainly used in the context of classical constrained hybrid force/position control [1]-[3], where force control is added for tasks that involve contact with surfaces. Tactile sensors determine different physical properties of objects and allow the assessment of properties such as deformation, shape, surface normal, curvature, and slip through their contact with the world [4], [5]. However, these sensors are only applicable where prior knowledge of the environment is available and there is physical interaction between the robot and objects. In addition, the execution speed of the task is limited by the limited bandwidth of the sensors, and to prevent loss of contact and information some planned trajectories must be prescribed [6].

Using vision sensors to control a robot is commonly called visual servoing, where visual features such as points, lines and regions are used in various object manipulation tasks such as aligning a robot manipulator with an object [7]. In contrast to force/tactile control, vision-based control systems require no contact with the object, allow non-contact measurement of the environment and do not need geometric models. The vision measurement is often combined with force/tactile measurement in hybrid visual/force control [8], [9] for further manipulation. 3D profiling cameras, scanners, sonars or combinations of them have been used for manipulation of objects, which often results in lengthy acquisition and slow processing of massive amounts of information [10]. An alternative way to obtain 3D data is to replace these high-cost sensors with an affordable Kinect sensor [11], [12]. The Kinect technology is used to rapidly acquire RGB-D data over the surface of objects.

Beyond selecting proper sensors, a reliable interaction control strategy and reactive planning algorithms are required to direct the robot motion under consideration of the kinematic constraints and environment uncertainties [13]. Although conventional proportional derivative/proportional integral derivative (PD/PID) controllers are the most widely applied controllers for robot manipulators, they are not well suited to nonlinear systems and lack robustness to disturbances and uncertainties. A general solution to this problem is to design adaptive controllers whose adaptive laws are devised based on the kinematic and dynamic models of the robot system [14]. Furthermore, to achieve better performance and switch smoothly between different motion modes, switched control systems were proposed [15] to dictate the switching law and determine which motion mode(s) should be active.

Aiming to bring dexterous manipulation to a higher level of performance, dexterity and operational capability, this work proposes a unique mode-switching motion control system. It details the development of a dexterous manipulation framework and algorithms that use a specially designed compliant wrist for complete exploration and live surface following of objects with arbitrary surface shapes, combining an adaptive motion planning and control approach with vision, proximity and contact sensing.

In order to enable industrial robots to interact with the environment smoothly and flexibly, adaptable compliance should be incorporated into them. A review of traditional and recent robotics technologies shows that in many cases the required modifications are technically challenging or infeasible, and call for hardware modifications, retrofitting, and new components, which results in high costs. In this work, the flexibility problem is addressed by using a low-cost, specially designed compliant wrist mounted on the robot end-effector to increase the robot's mechanical compliance and reduce the risk of damage. In contrast to other works, the adaptive controllers solve the force control problem in the form of a position control problem, where infrared sensors provide the information required for the interaction instead of touch sensors (force/torque, tactile and haptic). The proposed adaptive controllers require no learning procedure and no precise mathematical model of the robot or the environment, and they do not rely on force/torque calculation. As such, the controllers enable the robot to interact with objects both with and without contact.

The paper is organized as follows: Section Ⅱ gives an overview of the control system design. Section Ⅲ presents the offline trajectory planning, and Section Ⅳ details the design of the online proximity and contact path planning. Section Ⅴ describes the proposed switching scheme. Experimental results are presented in Section Ⅵ, and a final discussion concludes the paper in Section Ⅶ.


The robotic interaction with objects, or surface following problem, consists of a complete exploration and alignment of a robot's end-effector with a surface while accommodating its location and curvature. This work proposes a switched control approach for fast and accurate interaction with objects of different shapes and colors. In order to guide the robot's end-effector towards the surface, and then control its motion to execute the desired interaction with the surface, the problem is divided into three stages: free, proximity and contact motion. Each of the three control modes uses specific sensory information to guide the robot in different regions of the workspace.

The main components and their interconnections, as well as how they interact with each other, are shown in Fig. 1.

Fig. 1 Block diagram of the proposed mode switching control system

The first step to achieve interaction with an object is the detection and localization of that object in the robot workspace. For that matter, a Kinect sensor is used to rapidly acquire color and 3D data of the environment and to generate a primary (offline) trajectory as a general guidance for navigation and interaction with the object (free motion mode). The proximity control and contact control modes are also designed and integrated into the system. They dynamically update the 3D model using a specially designed instrumented compliant wrist and modify the position and orientation of the robot based on online sensory information when it is in close proximity to (proximity interaction) or in contact with the object (contact interaction). The following sections detail the design and implementation of the system and its main components.


In the free motion phase, which refers to the robot movement in an unconstrained workspace, a 3D model of the object of interest is constructed using 3D data collected by a Kinect sensor. For that matter, the Kinect sensor is positioned behind the robot with its viewing axis pointing towards the surface of the object for collecting data over the object as shown in Fig. 2. Based on the surface shape acquired by this peripheral vision stage, a unique coverage path planning (CPP) strategy is developed using a dynamic 3D occupancy grid to plan a global trajectory and support the early robotic exploration stage of the object's surface.

Fig. 2 The RGB-D sensor positioning
A. Object Detection and Segmentation

In order to efficiently navigate a robotic manipulator and interact with objects of various shapes, identification and localization of the object of interest in the robot workspace must first be achieved. For that matter, a Kinect sensor is used to acquire RGB images of the workspace along with depth information. When multiple objects are present in the working environment, objects of interest must be identified and discriminated from the scene. Three filters are used to extract and segment an object of interest from the workspace: a depth filter, a color filter and a distance filter. The data acquired by the Kinect sensor contains an RGB image and a collection of vertices ($PointCloud$ ). Each point, $p_i$ , with ($x, y, z$ ) coordinates is defined with respect to the Kinect ($K$ ) reference frame and supports corresponding ($r, g, b$ ) color values, defined as follows:

$ \begin{eqnarray} PointCloud=P^K=\bigcup\limits_{i=0}^{\nu-1}p_i, \quad p_i=[p_{ix}, p_{iy}, p_{iz}, p_{ir}, p_{ig}, p_{ib}] \end{eqnarray} $ (1)

where $\nu$ is the number of vertices in the point cloud. The depth of field covered by the Kinect sensor is typically between 0.5 m and 6 m, and the error in depth rises considerably with distance [16]. Therefore, the noisy background is discarded and the maximum depth range is limited to that of the robot workspace. To filter out the vertices that are not in the desired depth range ($Dz$), a thresholding filter is applied (see Algorithm 1) which eliminates all the vertices with a depth value ($Z$-coordinate) beyond a pre-set desired range (Fig. 3).

Algorithm 1: Object of Interest Segmentation
1: Inputs: $PointCloud= [p_0, p_1, \ldots, p_{v-1}], ~ObjRGB = [\alpha_k, \beta_k, \gamma_k], ~ObjCoordinate = [X_k, Y_k, Z_k], k\in[0, \mu-1]$
2: Output: $ObjCloud^{\rm Robot}$
3: Parameters: $\nu, \mu, \mathit{£}_\max, \lambda$
4: for $(k=0, \ldots, \mu-1)$
5:  for $(i=0, \ldots, \nu-1)$
6:   $\Delta Z_k$ =$Dz-p_{iz}$
7:   $\Delta E_i$ =$\sqrt{(\alpha_k-p_{ir})^2+(\beta_k-p_{ig})^2+(\gamma_k-p_{ib})^2}$
8:   $\mathit{£}_k$ =$\sqrt{(X_k-p_{ix})^2+(Y_k-p_{iy})^2+(Z_k-p_{iz})^2}$
9:   if $(\Delta Z_k\geq 0\;\&\;\Delta E_i \leq \lambda\;\&\;\mathit{£}_k \leq \mathit{£}_{\rm max})$ then
10:    $O^K [i]= p_i$
11:   end if
12:  end for
13: end for
14: $O^R=T_K^R\cdot O^K$
15: return $ObjCloud^{\rm Robot}=O^R$
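The three filters of Algorithm 1 can be sketched in a few lines of NumPy. This is a simplified reading of the pseudocode, not the authors' implementation; the function name and the flat (x, y, z, r, g, b) array layout are assumptions:

```python
import numpy as np

def segment_object(cloud, obj_rgb, obj_xyz, dz, color_tol, dist_max):
    """Sketch of Algorithm 1: depth, color and distance filters.

    cloud     : (v, 6) array of [x, y, z, r, g, b] vertices (Kinect frame)
    obj_rgb   : (mu, 3) RGB values of the operator-selected points
    obj_xyz   : (mu, 3) coordinates of the operator-selected points
    dz        : maximum workspace depth (depth filter threshold Dz)
    color_tol : Euclidean color-distance threshold (lambda)
    dist_max  : maximum center-to-corner distance
    """
    keep = np.zeros(len(cloud), dtype=bool)
    for rgb_k, xyz_k in zip(obj_rgb, obj_xyz):
        depth_ok = cloud[:, 2] <= dz                                   # depth filter
        color_ok = np.linalg.norm(cloud[:, 3:6] - rgb_k, axis=1) <= color_tol
        dist_ok = np.linalg.norm(cloud[:, 0:3] - xyz_k, axis=1) <= dist_max
        keep |= depth_ok & color_ok & dist_ok
    return cloud[keep]      # object-of-interest cloud, still in the Kinect frame
```

The returned cloud would still need the homogeneous transformation $T_K^R$ to express it in the robot base frame, as in line 14 of the algorithm.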

Fig. 3 Depth filter: (a) original data from Kinect, (b) after applying the depth filter

In order to optimize the robot movement, the desired object information should be extracted from the RGB-D data using color and distance segmentation. After acquiring the data, the RGB image (Fig. 4(a)) is presented to the operator. A mouse event callback function waits for the operator to click on any points of the desired object and returns the RGB-D values of the corresponding pixels. The RGB values of the points selected over the desired object (Fig. 4(b)) are used as a filter for the color segmentation. To increase the accuracy of the color segmentation, the operator can choose a number of points ($\mu$) over the object surface according to the object's size and shape. The RGB values of the operator-selected points of interest (2) are used to define a range of color for the object of interest. The vertices in the RGB image that are within the same color range and for which the Euclidean color distance ($\Delta E$) is less than a specific pre-set threshold ($\lambda$) are extracted from the point cloud, while the other vertices are eliminated.

$ ObjectRGB = [\alpha_k, \beta_k, \gamma_k], \;\; k\in[0, \mu-1] $ (2)
$ ObjectCoordinate = [X_k, Y_k, Z_k], \;\; k\in[0, \mu-1]. $ (3)
Fig. 4 (a) RGB image of the working environment, and (b) RGB values related to the selected points over the object of interest (the white square board)

The color segmentation allows grouping parts of similar color. However, if there are multiple objects of similar color in the scene, a single object of interest cannot be identified using color segmentation alone. Therefore, a distance filter is also applied to the point cloud to extract only the object of interest. Since each pixel in the RGB image corresponds to a particular point in the RGB-D point cloud, a primary estimate of the object of interest's position (3) in the workspace is extracted from the vertices corresponding to the points selected by the operator. By clicking on points close to the corners and to the center of the object of interest, the maximum distance ($\mathit{£}_\max$) between the center and the corners is calculated by (4). Then, the distance between each vertex in the point cloud and the center point ($\mathit{£}$) selected by the user is compared with the maximum distance ($\mathit{£}_\max$), and the vertices for which that distance is higher than $\mathit{£}_\max$ are dropped.

$ \begin{eqnarray} \mathit{£}_\max= \max\limits_{k=0, \ldots, \mu-1}\sqrt{(X_c-p_{kx})^2+(Y_c-p_{ky})^2+(Z_c-p_{kz})^2} \end{eqnarray} $ (4)

where $[X_c, Y_c, Z_c]$ are the coordinates of the object center and $\mu$ is the number of selected points by the operator. This filter ensures that objects sharing similar color will be segmented individually, unless they overlap in space.

Eventually, the remaining set of $m$ vertices ($O_i$ ) forms the object of interest RGB-D point cloud (5). However, the vertices are defined with respect to the Kinect reference frame and must be transformed to the robot's ($R$ ) base frame (6)

$ \begin{eqnarray} ObjCloud^{\rm Kinect}=O^{K}=\bigcup\limits_{i=0}^{m-1}O_i \end{eqnarray} $ (5)

where $O_i=[o_{ix}, o_{iy}, o_{iz}, o_{ir}, o_{ig}, o_{ib}]$ , and

$ \begin{eqnarray} ObjCloud^{\rm Robot}=O^R=T_K^R\cdot O^{K} \end{eqnarray} $ (6)

where $T_K^R$ is the homogeneous transformation matrix of the Kinect frame with respect to the robot base frame. The transformation matrix is estimated using the method proposed in [17].

B. Coverage Path Planning

The key task of coverage path planning (CPP) is to guide the robot over the object surface and guarantee a complete coverage using the preprocessed RGB-D data. The proposed CPP is developed using the dynamic 3D occupancy grid method. The occupancy grid is constant when the environment (object) is static. In most of the previously proposed methods, the occupancy grid is built in an offline phase and it is not updated during operation, which requires very accurate localization. However, in the proposed method the occupancy grid is initially formed using the information provided by the Kinect sensor and it is dynamically refined during interaction using the data acquired by proximity sensors embedded in the compliant wrist mounted on the robot.

1) Occupancy Grid

As shown in Fig. 5(a), once the object of interest is identified and modeled in 3D, the object surface is discretized to a set of uniform cubic cells (7). The resolution of the occupancy grid is adapted according to the volume of the object of interest:

$ (Z_\max-Z_\min){\rm mm}\times(X_\max-X_\min){\rm mm}\times(Y_\max-Y_\min){\rm mm} $
Fig. 5 (a) Object of interest 3D model, and (b) object discretized to a set of uniform cubic cells

where $Y_\min$ , $Y_\max$ , $X_\min$ , $X_\max$ , $Z_\min$ and $Z_\max$ correspond respectively to the minimum and maximum values of the $Y, X$ and $Z$ coordinates in the point cloud of the object of interest. The size of the cells can be selected based on the end-effector size or the desired resolution of the grid for a particular application. Therefore, initially the object of interest is mapped into an occupancy grid with cubic cells of the same size as the robot's tool plate, in the present case 130 mm $\times$ 130 mm $\times$ 130 mm.

The major issue arises when cells are partially occupied. That usually happens where the object has acute edges and also near the borders of the object of interest.

$ \begin{eqnarray} Obj=\bigcup\limits_{I=0}^{H-1}\bigcup\limits_{J=0}^{V-1}\bigcup\limits_{K=0}^{P-1}C_{I, J, K} \end{eqnarray} $ (7)

where $C_{I, J, K}$ represents a cell in the object model and

$ \begin{eqnarray} P= ||(X_\max - X_\min)/130||\\ H= ||(Y_\max - Y_\min)/130||\\ V= ||(Z_\max - Z_\min)/130||. \end{eqnarray} $ (8)

In order to solve the problem, the grid resolution is increased and two different cell sizes (larger and smaller) are used (Fig. 5(b)). The smaller cells ($C^s$ ) are cubes of size 65 mm $\times$ 65 mm $\times$ 65 mm, with each larger cell ($C^B$ ) containing 8 smaller cells in two layers of four (Fig. 6(a)). Another problem with 3D occupancy grids is that they require large memory space, especially when fine resolution is required [18]. Considering that the RGB-D sensor perceives the scene from a point of view that is perpendicular to the surface of the object, the regions in the back are essentially occluded. Therefore, to achieve computational efficiency and decrease the complexity and the number of cells, the front and back layers of the larger cells are merged into one layer and the smaller cells are replaced with cuboid cells of size 65 mm $\times$ 65 mm $\times$ 130 mm (Fig. 6(b)), leading to a 2.5D occupancy grid model.

The occupancy grid is formed by determining the state of each cell. The smaller cells ($C_{i, j, k}^s$ ) are classified with two states, occupied or free. A cell is called occupied if there exists at least one vertex in the cell. If the smaller cell is occupied, the cell value is 1, otherwise it is 0. The state variables associated with the larger cells ($C_{I, J, K}^B$ ) are occupied, unknown and free, which are identified based on the smaller cells' values. If more than two smaller cells of a larger cell are occupied, the value of the associated larger cell is 1; if one or two of the small cells are occupied, the value is 0.5; and if all the small cells are free, it is 0 (9). The occupancy grid corresponding to the larger cells is represented by an $H \times V \times P$ matrix (10). Initially all cells are considered unknown. Therefore, the occupancy matrix, $OccMatrix$ , is initialized with a value equal to 0.5, where $C_{I, J, K}^B$ contains the occupancy information associated with the larger cell ($I, J, K$ ) and cell (0, 0, 0) is located in the upper left corner of the grid.

$ \begin{eqnarray} &C_{i, j, k}^s=S (0={\rm free}; 1={\rm occupied})~~~~~~~~~\\ &C_{I, J, K}^B=S (0={\rm free}; 0.5={\rm unknown}; 1={\rm occupied}) \end{eqnarray} $ (9)
$ \begin{eqnarray} OccMatrix=\left[\begin{array}{ccccc} C_{0, 0, 0}^B & C_{0, 1, 0}^B & \ldots & C_{0, V-2, 0}^B & C_{0, V-1, 0}^B\\ C_{1, 0, 0}^B & C_{1, 1, 0}^B & \ldots & C_{1, V-2, 0}^B & C_{1, V-1, 0}^B\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ C_{H-1, 0, 0}^B & C_{H-1, 1, 0}^B & \ldots & C_{H-1, V-2, 0}^B & C_{H-1, V-1, 0}^B \end{array}\right]. \end{eqnarray} $ (10)
Fig. 6 (a) Smaller cell and larger cell representation, and (b) merging smaller cell to reduce data volume

The occupancy matrix provides the required information to plan an offline trajectory to explore the object surface. Therefore, in the next step the occupancy matrix is used for global path planning. In addition, to preserve the dynamic characteristic of the occupancy grid and achieve a more accurate and efficient exploration, the occupancy grid will be updated using extra sensory information when the robot is in close proximity of the object.
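A minimal sketch of building the larger-cell occupancy matrix from the segmented point cloud, following the state rules in (9). The grid layout (a single merged depth layer, four sub-cells per larger cell) follows the 2.5D simplification above; the function signature, the 2D front-plane point representation, and the cell sizes passed as parameters are illustrative assumptions:

```python
import numpy as np

def occupancy_matrix(points, origin, big=130.0, small=65.0, shape=(4, 4)):
    """Build the larger-cell occupancy matrix from (x, y) front-plane points.

    Each large cell covers a big x big patch of the front plane and contains
    four small x small sub-cells (depth merged into one layer, Fig. 6(b)).
    Large-cell state per (9): 0 = free, 0.5 = unknown, 1 = occupied.
    """
    H, V = shape
    occ = np.zeros((H, V))
    for I in range(H):
        for J in range(V):
            n_occ = 0  # count occupied sub-cells inside large cell (I, J)
            for di in range(2):
                for dj in range(2):
                    x0 = origin[0] + J * big + dj * small
                    y0 = origin[1] + I * big + di * small
                    inside = ((points[:, 0] >= x0) & (points[:, 0] < x0 + small) &
                              (points[:, 1] >= y0) & (points[:, 1] < y0 + small))
                    n_occ += int(inside.any())
            # rule (9): >2 sub-cells occupied -> 1, 1 or 2 -> 0.5, none -> 0
            occ[I, J] = 1.0 if n_occ > 2 else (0.5 if n_occ >= 1 else 0.0)
    return occ
```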

2) Global Trajectory Generation

In order to explore the object completely and optimize the trajectory, the robot should explore all the occupied cells and avoid the free ones. The robot's end-effector pose at each time step is defined by a set of points:

$ \begin{eqnarray} P_G [k]=(pos_G [k], R_G^{\it \Phi} [k], dir_G [k]) \end{eqnarray} $ (11)

where $pos_G=[p_G^x, p_G^y, p_G^z], R_G^{\it \Phi}=[r_k^{\theta}, r_k^{\varphi}, r_k^{\psi}]$ and $dir_G$ determine the robot position, orientation and direction of motion respectively. The concatenation over iterations, $k$ , of the set of points generates a global trajectory for the robot.

$ Trajectory_{\rm Global} \\ =P_G [0]\frown P_G [1] \frown P_G [2] \frown \cdots \frown P_G [occ] $ (12)

where $occ$ is the number of occupied cells in the occupancy grid.

In this work, the start point is always the first occupied cell on the front upper row of the occupancy grid and the initial moving direction is horizontal. The robot position at each moving step ($pos_G [k]$ ) is defined by extracting the point in the object point cloud ($ObjCloud^{\rm Robot}$ ) closest to the center of the occupied larger cell ($centre(C_{I, J, K}^B)$ ) in the robot's moving direction. However, if the larger cell is not completely occupied ($C_{I, J, K}^B=0.5$ ), $pos_G [k]$ is determined based on the new measurements from the proximity sensors (detailed in Section IV-B). As shown in Fig. 7(a), the robot can move in five directions [horizontally (left/right), vertically down, and diagonally (left/right) down] and it must move through all the points to cover the target area. The target area is covered using a zigzag pattern (Fig. 7(b)), also called the Seed Spreader motion [19]. The robot begins from the start point and moves horizontally to the right towards the center of the next occupied cell. If there is no occupied cell on the right, the motion direction is changed and the robot moves vertically (or diagonally) down to the next point. Then, the robot again moves horizontally but in the opposite direction until it reaches the last occupied cell on the other side of the object. The process continues until the robot has explored the entire area.
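The zigzag sweep over occupied cells can be sketched as a simple boustrophedon ordering of the occupancy matrix. This ignores the diagonal moves and the closest-vertex refinement described above and only illustrates the visiting order; the function name is an assumption:

```python
def zigzag_path(occ_matrix):
    """Return (row, col) indices of occupied cells (value > 0) in zigzag
    (Seed Spreader) order: left-to-right on one swept row, right-to-left
    on the next, moving down one row at a time."""
    path = []
    left_to_right = True
    for row_idx, row in enumerate(occ_matrix):
        cols = [j for j, v in enumerate(row) if v > 0]
        if not cols:
            continue                       # skip fully free rows
        if not left_to_right:
            cols = cols[::-1]              # sweep this row in reverse
        path.extend((row_idx, j) for j in cols)
        left_to_right = not left_to_right  # reverse direction for next row
    return path
```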

Fig. 7 (a) Accessible directions of motion, (b) global trajectory planning to ensure coverage, and (c) vertex normal calculation at the center of a larger cell

The set of points forming a path for the robot, defined in the previous step, determines the positions of the end-effector in Cartesian coordinates over the object surface. In addition to the robot position, to ensure proper alignment of the end-effector with the surface, it is required to estimate the surface normal and calculate the end-effector orientation at each point. The local normal at each point is calculated using the neighbor cells (Fig. 7(c)). Given that there are four smaller cells forming a larger cell, five points situated at the center of each cell in three dimensions are obtained. Each set of three points forms a triangle, for a total of four triangles that can be generated. First, the normal to each triangle ($N_i$ ) is calculated and normalized. The normal of each triangle is computed as the cross product between the vectors representing two sides of the triangle. The following equations define the normal vector, $N$ , calculated from the coordinates of a set of three vertices, $E$ , $F$ , $G$ :

$ \begin{eqnarray} N_x=(E_y-G_y)(E_z-F_z)-(E_z-G_z)(E_y-F_y) \end{eqnarray} $ (13)
$ \begin{eqnarray} N_y=(E_z-G_z)(E_x-F_x)-(E_x-G_x)(E_z-F_z) \end{eqnarray} $ (14)
$ \begin{eqnarray} N_z=(E_x-G_x)(E_y-F_y)-(E_y-G_y)(E_x-F_x). \end{eqnarray} $ (15)

The resulting normals are then normalized such that the length of the edges does not come into account. The normalized vector $N_{\rm norm}$ is computed as:

$ \begin{eqnarray} norm= \sqrt{N_x^2+N_y^2+N_z^2} \end{eqnarray} $ (16)
$ \begin{eqnarray} N_{\rm norm}=[N_{nx}~~~N_{ny}~~~N_{nz}] \end{eqnarray} $ (17)

where $N_{nx}\!=\!N_{x}/norm, N_{ny}\!=\!N_{y}/norm, N_{nz}\!=\!N_{z}/norm$ .

Then, taking the average of the four triangle normals provides the estimated object's surface normal at the center of the larger occupied cell and the object's surface orientation is deduced from the normal vector to the surface (Fig. 7(c)).
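The component-wise cross products (13)-(15) and the averaging step can be sketched as follows; the layout of the five cell-center points (one center plus four sub-cell centers taken in consecutive pairs) is an assumed reading of Fig. 7(c):

```python
import numpy as np

def triangle_normal(E, F, G):
    """Unit normal of triangle (E, F, G): the cross product (E-G) x (E-F),
    written out component-wise as in (13)-(15), then normalized per (16)-(17)."""
    u = np.asarray(E, float) - np.asarray(G, float)
    v = np.asarray(E, float) - np.asarray(F, float)
    n = np.array([u[1] * v[2] - u[2] * v[1],
                  u[2] * v[0] - u[0] * v[2],
                  u[0] * v[1] - u[1] * v[0]])
    return n / np.linalg.norm(n)

def cell_normal(center, corners):
    """Average the four triangle normals formed by the cell center and
    consecutive pairs of the four sub-cell centers, then re-normalize."""
    normals = [triangle_normal(center, corners[i], corners[(i + 1) % 4])
               for i in range(4)]
    n = np.mean(normals, axis=0)
    return n / np.linalg.norm(n)
```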


The acquisition speed of the Kinect and its low cost have been major criteria in selecting this sensor for 3D modelling. However, the information from a Kinect sensor is not sufficient or accurate enough for developing a fully reactive surface following approach, which also requires close and physical interaction with an object [20]. Furthermore, the object may move or deform under the influence of the physical interaction. Therefore, to compensate for errors in the rough 2.5D profile of the surface provided by the RGB-D sensor and to further refine the path planning, a model-free adaptive fuzzy self-tuning PID position controller and an orientation controller are developed using real-time proximity and contact sensory information provided by a custom-designed compliant wrist [21] attached to the end-effector of the robot, as shown in Fig. 8(a). The compliant wrist provides a means of detecting objects both in contact with and in proximity to the end-effector, and adds a degree of compliance that enables the end-effector to slide over the object without damaging it and to adapt to its surface changes when contact happens. As shown in Fig. 8(b), the compliant wrist is equipped with eight infrared distance measurement sensors, arranged in two arrays of four sensors each: an external array and an internal array. The external sensor array measures distances at multiple points between the wrist and an object located in front of the end-effector. The internal sensor array is situated between the base of the compliant wrist structure and a moveable plate, as shown in Fig. 8(a), which allows the device to determine the surface orientation and the distance to an object when the robot is in contact with it. The sensing layers estimate an object pose in the form of a 3D homogeneous transformation matrix.
The rotation and translation parameters are obtained using the distance measurements from either array of four infrared (IR) sensors. Equation (18) shows how the transformation matrix is calculated using the distances provided by the internal sensors (contact sensing layer). A similar calculation can be made from the external sensors using the values $A'$ , $B'$ , $C'$ and $D'$ they measure (Fig. 8(c)). The transformation matrix determines the object pose with respect to the compliant wrist frame, which is then transformed to the robot's base frame (19).

Fig. 8 (a) Compliant wrist, (b) internal and external sensors arrangement, and (c) distance measurements from internal and external sensors
$ \begin{eqnarray} Q_{\rm endeff/object}= \left[\begin{array}{cccc} \cos\alpha & 0 & \sin\alpha & 0\\ \sin\beta \sin\alpha & \cos\beta & -\sin\beta \cos\alpha & 0\\ -\cos\beta \sin\alpha & \sin\beta & \cos\beta \cos\alpha & \dfrac{A+B+C+D}{4}\\ 0 & 0 & 0 & 1 \end{array}\right] \end{eqnarray} $ (18)

where $\beta={\rm atan2}(D-B, W)$ , $\alpha={\rm atan2}(A-C, W)$

$ \begin{eqnarray} Q_{\rm base/object}=Q_{\rm base/wrist}\cdot Q_{\rm endeff/object}. \end{eqnarray} $ (19)
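Equation (18) can be sketched directly from the four distance readings; reading the atan2 arguments as the distance difference over the sensor baseline $W$ is an assumption, as is the function signature:

```python
import math
import numpy as np

def wrist_pose(A, B, C, D, W):
    """Sketch of (18): homogeneous transform of the object plane relative to
    the wrist, from the four IR distance readings A..D on a baseline W."""
    alpha = math.atan2(A - C, W)   # tilt about one wrist axis
    beta = math.atan2(D - B, W)    # tilt about the orthogonal wrist axis
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    return np.array([
        [ca,       0.0, sa,      0.0],
        [sb * sa,  cb, -sb * ca, 0.0],
        [-cb * sa, sb,  cb * ca, (A + B + C + D) / 4.0],  # mean stand-off
        [0.0,      0.0, 0.0,     1.0]])
```

With equal readings the rotation part reduces to the identity and the translation is the common stand-off distance, which matches the intuition of a plane parallel to the wrist.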
A. Adaptive Position/Orientation Control

The surface following and tracking of the objects can be accomplished either with contact (for applications such as polishing, welding, or cleaning) or without contact (for applications like painting and inspection). In order to guide the robot under consideration of kinematic motion constraints, an adaptive on-line trajectory generation is proposed to generate continuous command variables and modify the position and orientation of the robot based on the online sensory information when the robot end-effector is in close proximity of (proximity control mode) or in contact with (contact control mode) an object. The robot position over the object's surface is controlled using a fuzzy self-tuning PID controller and its orientation is corrected whenever there is an error between the surface orientation estimated via proximity/contact sensors and the end-effector's orientation to make the system adaptive to the working environment (Fig. 9).

Fig. 9 Block diagram of the real-time position and orientation controller

1) Adaptive Position Control

The typical PID controller has been widely used in industry because of its simplicity and its ability to reduce error and enhance the system response. To design a PID controller, it is required to select the PID gains $K_P$ , $K_I$ , and $K_D$ properly (20). However, a PID controller with fixed parameters is inefficient for controlling systems with uncertain models and parameter variations.

$ \begin{eqnarray} U_k=K_P\cdot E_k+K_I \sum\limits_{i=0}^k E_i +K_D (E_k-E_{k-1}) \end{eqnarray} $ (20)

where $E_k$ is defined in (21).

Fuzzy control is ideal for nonlinear systems with a wide range of operating conditions, where only an inexact model exists and accurate information is not required. It performs better than a PID controller in dealing with uncertainty and environmental change, whereas the PID controller has a faster response and minimizes the error better than fuzzy control. Therefore, to take advantage of both controllers, a fuzzy self-tuning PID controller is designed to control the robot's movement and to smoothly follow an object's surface from a specific distance or, alternatively, while maintaining contact.

The fuzzy self-tuning PID controller's output is a real-time position adjustment based on the error ($E$ ) between the current robot distance from the object ($D$ ) and the desired distance to the object, $d\breve{z}$ (this value is zero when contact with the object is required). $D$ is obtained from the proximity sensors or the contact sensors on the compliant wrist when the robot is in close proximity to or in contact with the object, respectively. The PID gains are tuned using the fuzzy system rather than traditional approaches. The adjustment is then sent to the robot controller to eliminate the error and make the system adaptive to changes in the working environment.

The fuzzy controller input variables are the distance error ($E_k$ ) and the change of error ($\Delta E_k$ ), and the outputs are the PID controller parameters. Standard triangular membership functions and Mamdani-type inference are adopted for the fuzzification because of their low computational cost, which enhances the speed of the system reaction

$ \begin{eqnarray} E_k = D_{k}-d\breve{z}, \;\;\;\;\;\; \Delta E_k = E_k - E_{k-1}. \end{eqnarray} $ (21)

The error ($E_k$ ) in distance is represented by five linguistic terms: large negative (LN), small negative (SN), zero (Z), small positive (SP), and large positive (LP). Similarly, the fuzzy set of the change of error ($\Delta E$ ) is expressed by negative big (NB), negative (N), zero (Z), positive (P), and positive big (PB), both defined over the interval from -50 mm to 50 mm. When the robot is in close proximity to the object, it can move backward, move forward or stay in place depending on the robot's distance to the surface. The output linguistic terms for each PID gain $k_p'$ , $k_i'$ and $k_d'$ are (Fig. 10): large forward (LF), forward (F), no change (NC), backward (B), and large backward (LB) over the interval [-1, 1]. Tables Ⅰ-Ⅲ define the fuzzy rules for the $k_p'$ , $k_i'$ and $k_d'$ gains. The center of gravity (COG) method is used to defuzzify the output variable (22).

$ \begin{eqnarray} COG= \dfrac{\sum\limits_{i=1}^m \mu(f_i)\cdot f_i}{\sum\limits_{i=1}^m \mu(f_i)}. \end{eqnarray} $ (22)
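As an illustration, the fuzzification of the distance error and the COG defuzzification of (22) can be sketched as follows. This is a minimal sketch, not the authors' implementation: the evenly spaced triangular sets over [-50, 50] mm and the function names (`tri`, `cog`, `fuzzify_error`) are assumptions for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def cog(values, memberships):
    """Center-of-gravity defuzzification, as in (22)."""
    den = sum(memberships)
    if den == 0.0:
        return 0.0
    return sum(m * f for f, m in zip(values, memberships)) / den

# Five error sets (LN, SN, Z, SP, LP) assumed evenly spaced over [-50, 50] mm.
CENTERS = [-50.0, -25.0, 0.0, 25.0, 50.0]

def fuzzify_error(e_mm):
    """Degree of membership of the error in each of the five sets."""
    return [tri(e_mm, c - 25.0, c, c + 25.0) for c in CENTERS]
```

For an error of exactly 0 mm only the Z set fires; an error halfway between two centers activates both neighboring sets equally, and the COG output interpolates between their singleton values.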
Fig. 10 Fuzzy membership functions for: (a) error, (b) change of error, and (c) PID gains estimates $k'_p$ , $k'_i$ and $k'_d$
Table Ⅰ
Table Ⅱ
Table Ⅲ

In order to obtain feasible and optimal values for the PID parameters, the actual gains are computed from the outputs of the fuzzy inference system using the experimentally determined range of each parameter (23).

$ \begin{eqnarray} &k_p'=\dfrac{K_P-K_{P_\min}}{K_{P_\max}-K_{P_\min}}= \dfrac{K_P}{K_{P_\max}-K_{P_\min}}\\ &K_P=k_p'\\ &k_i'=\dfrac{K_I-K_{I_\min}}{K_{I_\max}-K_{I_\min}}=\dfrac{K_I-0.01}{0.1-0.01}\\ &K_I=0.09k_i'+0.01\\ &k_d'=\dfrac{K_D-K_{D_\min}}{K_{D_\max}-K_{D_\min}}= \dfrac{K_D-0.003}{0.01-0.003}\\ &K_D=0.007k_d'+0.003 \end{eqnarray} $ (23)

where $K_P\in[0, 1], K_I\in[0.01, 0.1], K_D\in[0.003, 0.01]$ .
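The denormalization in (23) amounts to a linear mapping from the normalized fuzzy outputs back to the experimentally determined gain ranges. A minimal sketch, assuming the normalized gains lie in [0, 1] (the function name is illustrative; note that $K_{I_\max}-K_{I_\min}=0.09$, which is why $K_I=0.09k_i'+0.01$):

```python
def denormalize_gains(kp_n, ki_n, kd_n):
    """Map normalized fuzzy outputs k' in [0, 1] to actual PID gains per (23)."""
    KP_MIN, KP_MAX = 0.0, 1.0      # K_P range
    KI_MIN, KI_MAX = 0.01, 0.1     # K_I range
    KD_MIN, KD_MAX = 0.003, 0.01   # K_D range
    Kp = KP_MIN + (KP_MAX - KP_MIN) * kp_n   # K_P = k_p'
    Ki = KI_MIN + (KI_MAX - KI_MIN) * ki_n   # K_I = 0.09 k_i' + 0.01
    Kd = KD_MIN + (KD_MAX - KD_MIN) * kd_n   # K_D = 0.007 k_d' + 0.003
    return Kp, Ki, Kd
```

At the extremes of the normalized range the actual gains land exactly on the experimentally determined bounds.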

2) Adaptive Orientation Control

When the robot is in close proximity of (or in contact with) the object, the orientation of the object surface is estimated by the proximity/contact sensors in terms of the pitch ($r_S^{\varphi}$ ) and yaw ($r_S^{\psi}$ ) with respect to the wrist plane. Let $R_S^{\phi}$ denote the rotation matrix of the generic surface orientation, $r_S^{\phi}$ , with respect to the wrist plane, and let $R_G^{\phi}$ describe the rotation of the end-effector obtained from the initial global trajectory. The desired rotation of the end-effector, $R_R^{\phi}$ , which aligns the end-effector with the surface, is then determined by (24)

$ \begin{eqnarray} R_R^{\phi} [k]=R_G^{\phi} [k]*R_S^{\phi} [k] \end{eqnarray} $ (24)

where $r_G^{\phi}=[r_G^{\theta}, r_G^{\varphi}, r_G^{\psi}], r_S^{\phi}=[0, r_S^{\varphi}, r_S^{\psi}], r_R^{\phi}=[r_r^{\theta}, r_r^{\varphi}, r_r^{\psi}].$

In a surface following task, $R_S^{\phi}$ is the orientation to be tracked by the robot. However, in order to obtain a smooth motion, it is not sent directly to the controller. Instead, the orientation of the end-effector is updated with an adaptive orientation signal ($R_A^{\phi}$ ) at each moving step to compensate for the orientation error between the end-effector and the surface. The error decreases exponentially, driving $r_S^{\phi}$ to zero rapidly.

$ \begin{eqnarray} R_R^{\phi} [k]=R_G^{\phi} [k]*R_A^{\phi} [k] \end{eqnarray} $ (25)
$ \begin{eqnarray} R_A^{\phi} [k]=\eta *R_S^{\phi} [k]~~~~~~ \end{eqnarray} $ (26)

where $\eta$ is a proportional gain that decreases the error and minimizes the convergence time. A large gain value reduces the convergence time but may lead to tracking loss or task failure due to fast motion, while a small gain value increases the convergence time. Therefore, in order to reduce the convergence time, limit oscillation near the equilibrium point and preserve the system's stability, the gain value is varied according to the orientation error (27). The gain is initially small ($\eta_0$ ) while the error is large, and the error is reduced exponentially until it reaches a specific threshold ($\partial$ ) near the equilibrium point ($r_S^{\phi}$ = 0). Then, as the orientation error gets smaller, the gain switches to a larger value ($\eta_0 < \eta_1$ ) to eliminate the remaining orientation error as quickly as possible [22]. Using the adaptive orientation signal, the robot end-effector orientation angles are calculated by (28).

$ \begin{eqnarray} \eta=\left\{\begin{array}{llllll} \eta_0=\dfrac{r_S^{\phi}}{||r_S^{\phi} ||},&||r_S^{\phi}||\geq \partial\\[7mm] \eta_1=\dfrac{1}{\partial} r_S^{\phi},&||r_S^{\phi}|| < {\partial} \end{array}\right. \end{eqnarray} $ (27)
$ \begin{eqnarray} r_R^{\phi} [k]=\left\{\begin{array}{llllll} r_R^{\theta} (k)=r_G^{\theta}(k)\\ r_R^{\varphi} (k)=r_G^{\varphi} (k)+r_A^{\varphi} (k)\\ r_R^{\psi} (k)=r_G^{\psi} (k)+r_A^{\psi} (k). \end{array}\right. \end{eqnarray} $ (28)
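One possible reading of (26) and (27) treats $\eta$ as a scalar gain, $\eta_0 = 1/\|r_S^{\phi}\|$ far from alignment and $\eta_1 = 1/\partial$ near it (so $\eta_0 \le \eta_1$, consistent with the text), yielding a constant-magnitude correction step far away and an exponentially decaying one close in. A minimal sketch under that assumption, with hypothetical function names:

```python
import math

def adaptive_orientation_signal(r_s, threshold):
    """Adaptive signal r_A = eta * r_S per (26)-(27) under a scalar-gain
    reading: eta0 = 1/||r_S|| when ||r_S|| >= threshold (unit-magnitude
    step), eta1 = 1/threshold otherwise (larger gain near equilibrium)."""
    norm = math.sqrt(sum(c * c for c in r_s))
    if norm == 0.0:
        return [0.0] * len(r_s)
    eta = 1.0 / norm if norm >= threshold else 1.0 / threshold
    return [eta * c for c in r_s]

def update_orientation(r_g, r_a):
    """End-effector orientation update per (28): roll follows the global
    trajectory; pitch and yaw are corrected by the adaptive signal."""
    theta_g, phi_g, psi_g = r_g
    _, phi_a, psi_a = r_a
    return [theta_g, phi_g + phi_a, psi_g + psi_a]
```

With this reading, each step moves the orientation a bounded amount toward the surface normal, and the correction shrinks proportionally to the error once inside the threshold region.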
B. Updating the Occupancy Grid

The object of interest might not be acquired with sufficient accuracy by the Kinect sensor alone, due to its limited depth resolution, or might not be segmented completely from the point cloud. In order to update the occupancy grid described previously, the compliant wrist provides closer, higher-accuracy feedback about the object's location. As shown in Fig. 11, the compliant plate covers four of the smaller cells (one larger cell), given the selected initial resolution of the occupancy grid. The external sensors, which protrude on the four sides of the compliant plate, provide real-time look-ahead information while the robot is moving. As the robot moves along the global trajectory to its new pose, the existing occupancy map is updated with the new measurements. For example, when the robot is moving to the right, the sensor situated on the right side allows an update of the state of the two cells located ahead on the right, especially when the next cell value is 0.5 (unknown). If the sensor detects the object, the cell value changes to 1 (occupied); otherwise it changes to 0 (empty). The robot keeps moving in the current direction if the next cell is occupied; otherwise it stops there and moves to the next pose along the global trajectory.

Fig. 11 Cells coverage by the wrist embedded sensors
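The cell-update rule described above can be sketched as follows; the list-of-lists grid representation and the function names are illustrative assumptions:

```python
OCCUPIED, EMPTY, UNKNOWN = 1.0, 0.0, 0.5

def update_cell(grid, row, col, sensor_detects_object):
    """Resolve one look-ahead cell from a wrist proximity reading:
    an unknown (0.5) cell becomes occupied (1) when the side-mounted
    sensor detects the object, otherwise empty (0). Cells already
    resolved are left untouched."""
    if grid[row][col] == UNKNOWN:
        grid[row][col] = OCCUPIED if sensor_detects_object else EMPTY
    return grid[row][col]

def keep_moving(next_cell_value):
    """Continue in the current direction only while the next cell is
    occupied; otherwise stop and resume the global trajectory."""
    return next_cell_value == OCCUPIED
```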

To achieve a seamless and efficient integration of vision and real-time sensory information, and to switch smoothly between the different interaction modes, a mode-switching scheme is also proposed. It provides the manipulator with the capability to operate independently in each of the three stages (free motion, proximity, and contact), or in transition between stages, to perform more complicated tasks as required by different applications. Since this work focuses on task-space control, a discrete-time switching system is proposed in order to switch smoothly between global trajectory following and sensor-based motion control. The proposed switched-control system consists of continuously operating subsystems and a discrete system that supervises the switching between them [23]. The switching control system is modeled as follows:

$ \begin{eqnarray} E_{(k+1)}=f_{{\delta}(e)} (E_{k}, D_k), \quad\quad \delta\in\{F, P, C\}. \end{eqnarray} $ (29)

The value of $\delta$ determines which function $f_{{\delta}(e)} (E_{k}, D_k)$ controls the system's behavior at each moving step ($k$ ), where $f_F, f_P$ and $f_C$ correspond to the free motion, proximity and contact control modes respectively. The switching signal $\delta$ (30) is determined from the distance error (values in mm). The error thresholds in (30) were estimated experimentally from the extensive performance analysis of the compliant wrist sensors detailed in [21]. For a compliant wrist equipped with different IR sensors, or other robot-mounted distance measurement devices, re-calibration of (30) would be necessary, but the general framework would remain valid:

$ \begin{eqnarray} \delta(e)=\left\{\begin{array}{llllll} C,&E_k < -20\\ P,&-20 \leq E_k \leq 40\\ F,&E_k > 40. \end{array}\right. \end{eqnarray} $ (30)
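The switching signal in (30) is a simple threshold classifier on the distance error; a direct transcription (the function name is assumed):

```python
def switching_signal(error_mm):
    """Mode selector delta(e) per (30): contact (C) when the distance
    error drops below -20 mm, proximity (P) within [-20, 40] mm, and
    free motion (F) beyond 40 mm."""
    if error_mm < -20.0:
        return 'C'
    if error_mm <= 40.0:
        return 'P'
    return 'F'
```

The boundary values belong to the proximity mode, matching the closed interval in (30).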

In the free motion phase, which refers to the robot movement in an unconstrained workspace, the 3D model of the object is used to plan a global trajectory (detailed in Section Ⅲ) and guide the robot from an initial position towards the object where the robot pose is given by (31)

$ \begin{eqnarray} f_F=\left\{\begin{array}{llllll} pos_R [k]=pos_G [k]=[p_G^x, p_G^y, p_G^z]\\[2mm] r_R^{\phi} [k]= r_G^{\phi} [k]=[r_G^{\theta}, r_G^{\varphi}, r_G^{\psi}]. \end{array}\right. \end{eqnarray} $ (31)

The proximity phase enables the robot to refine the reference (global) trajectory using proximity sensing information, either to track the desired trajectory at a specific distance without contact, or to adjust the robot pose before contact occurs (transition) in order to damp the system and align the robot with the surface. It also enables the robot to update the cells of the occupancy grid while moving along the global trajectory.

$ \begin{eqnarray} f_P=\left\{\begin{array}{llllll} pos_R [k]=pos_G [k]+ {Æ}_P [k]\\[2mm] r_R^{\phi} [k]= r_G^{\varphi} [k]+r_{AP}^{\varphi} [k] \end{array}\right. \end{eqnarray} $ (32)

where ${Æ}_P$ is the position control signal generated by the self-tuning fuzzy PID controller and $r_{AP}^{\varphi}$ is the adaptive orientation control signal obtained using the proximity sensors.

Contact with the surface is detected by the internal sensors when the contact plate is deflected under externally applied forces. The contact motion mode enables the robot to adapt more precisely to the changes and forces generated by the surfaces with which it comes into contact. When contact is detected, the internal sensors, instead of the external ones, provide the fuzzy inputs (error and change of error) to ensure that the robot remains in contact with the object without being pushed too far into it.

$ \begin{eqnarray} f_C=\left\{\begin{array}{llllll} pos_R [k]=pos_G [k]+ {Æ}_C [k]\\[3mm] r_R^{\phi} [k]= r_G^{\varphi} [k]+r_{AC}^{\varphi} [k] \end{array}\right. \end{eqnarray} $ (33)

where ${Æ}_C$ and $r_{AC}^{\varphi}$ are the adaptive position and orientation signals obtained from the contact sensory information.
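The three mode functions (31)-(33) share the same structure: the free motion mode tracks the global pose exactly, while the proximity and contact modes add a sensor-derived correction to it. A minimal sketch under that reading (the function name and list-based pose representation are assumptions):

```python
def motion_step(mode, pos_g, r_g, delta_pos=None, delta_rot=None):
    """One control step per (31)-(33): free motion ('F') tracks the
    global trajectory; proximity ('P') and contact ('C') add the
    corrections produced by the fuzzy PID and adaptive orientation
    controllers to the global pose."""
    if mode == 'F':
        return list(pos_g), list(r_g)
    pos = [p + d for p, d in zip(pos_g, delta_pos)]
    rot = [r + d for r, d in zip(r_g, delta_rot)]
    return pos, rot
```

In proximity mode the corrections come from the external sensors; in contact mode the same update is driven by the internal sensors, so only the source of the deltas changes.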


To validate the feasibility and assess the accuracy of the proposed method, experiments were carried out with a 7-DOF CRS F3 manipulator, which consists of a 6-DOF robot arm mounted on a linear track. The algorithms used for this work were developed in C++ and run on a computer with an Intel Core i7 CPU and Windows 7.

A. Object Segmentation

Table Ⅳ shows the efficiency of the proposed segmentation algorithm in extracting the object of interest from the scene shown in Fig. 3, which contains four objects of different shapes and colors. The original point cloud provided by the Kinect sensor, after applying the depth filter, includes 41 204 points, from which over 99% of the points corresponding to the objects of interest have been successfully recovered.

Table Ⅳ
B. Contact Interaction

In this experiment, the objective is to closely follow the surface of a real automotive door panel (Fig. 12) while maintaining contact and accommodating its curvature. As shown in Fig. 12(a), the Kinect sensor is located facing the door panel and the robot moves on a two-meter linear track to fully reach the surface. The Kinect collects raw 3D information about the environment. The RGB image captured by the Kinect (Fig. 12(b)) is presented to the operator. The operator selects the object of interest and its point cloud is automatically extracted from the scene, except for the window area, since glass is not imaged well by Kinect sensor technology and the window area is beyond the robot's workspace. A global trajectory is then generated using the proposed coverage path planning method (Section Ⅲ-B) to completely explore and scan the surface of the object of interest (Fig. 13(a)).

Fig. 12 (a) Automotive door panel set up with respect to the robot and the Kinect sensor, and (b) front view of real automotive door panel
Fig. 13 (a) Trajectory planning to ensure full coverage over the region of interest. (b) Global trajectory. (c) Fuzzy self-tuning PID controller performance. (d) Internal sensors dataset

Fig. 13(a) shows the global (offline) trajectory generated using the proposed coverage path planning method, which consists of a set of 3D points (Fig. 13(b)). Once the global trajectory is generated, the free motion mode is initially activated to guide the robot towards the start point. When the robot comes into close proximity of the object, the proximity (external) sensors detect it and the motion control switches to the proximity mode, where the adaptive pose controllers adjust the robot's position and orientation using the external sensors in preparation for a safe contact with the object. When the initial contact happens, the control switches to contact mode, and the adaptive pose controllers generate the depth and orientation control signals to maintain contact with the object. The fuzzy inputs and the depth control signal are shown in Fig. 13(c). The position control signal (${Æ}$ ) varies from -20 mm to 25 mm, which shows the smooth and stable motion of the robot over the object's surface. The ${Æ}$ signal changes according to the error ($E$ ) and the change of error ($\Delta E$ ). When the robot moves horizontally, the change in the ${Æ}$ signal is small, but when the robot moves vertically on the object's surface, $E$ and $\Delta E$ change more substantially (due to the changing curvature of the door). As a result, the ${Æ}$ signal is larger, compensating for the error as desired (Fig. 13(c)).

The internal sensors detect the contact when the compliant wrist is deflected. In order to make sure that the robot remains firmly in contact with the object and that the contact is detected by the internal sensors, the desired distance to the object is set to $d\breve{z}=-$ 3 mm. Therefore, the average distance obtained from the sensors at each moving step ($k$ ) should be around -3 mm to eliminate the error ($E_k = D_k-d\breve{z}$ ). The average distance measured by the internal sensors (Fig. 13(d)) during the surface following process was about -3.42 mm, which shows that the robot maintained contact during the interaction (Fig. 14). The robot's orientation error is compensated using the adaptive orientation signal ($r_{AC}^{\varphi}$ ). The robot moves to the next pose defined by the global trajectory, and the same process repeats at each moving step until the robot has explored the entire surface. Fig. 15 shows the system's performance in refining the position and orientation of the robot during contact interaction. The depth adjustment over the object's surface using the adaptive position controllers (depicted in red) is shown in Fig. 15(a), compared with the depth measurements estimated from the RGB-D data (depicted in blue). The orientation correction is represented in Fig. 15(b): the global object surface orientation with respect to the robot base frame around the $y$ -axis (in blue) and $z$ -axis (in red), estimated as part of the offline path planning process, is updated locally using the online sensory information (in green and purple) from the compliant wrist.

Fig. 14 Robot performance at following the door panel's curved surface
Fig. 15 System performance in refining the robot pose in contact mode
C. Proximity Interaction

In this experiment, the objective is to scan and follow the object's surface (Fig. 12(b)) at a fixed distance from it. Fig. 16(a) shows the performance of the adaptive self-tuning fuzzy-PID controller in guiding the robot's movement over the surface of the object while maintaining the desired distance (50 mm) from it and still accommodating its curvature (Figs. 16(c) and (d)). In Fig. 16(c), the blue curve represents the pre-planned trajectory, which is locally refined (orange curve) using the sensory information provided by the external sensors. The global trajectory is also refined locally in orientation (Fig. 16(d)). The average of the distance measurements (Fig. 16(b)) from the surface was 56.8 mm, which verifies that the robot remains within the desired distance from the object during the surface following.

Fig. 16 The system performance in refining the robot pose in proximity interaction

This work presented the design and implementation of a switching motion control system for automated and efficient proximity/contact interaction and surface following of objects of arbitrary shape. The proposed control architecture enables the robot to detect and follow the surface of an object of interest, with or without contact, using the information provided by a Kinect sensor and augmented locally by a specially designed instrumented compliant wrist. A tri-modal discrete switching control scheme is proposed to supervise the robot motion continuously and switch between the different control modes and their related controllers when the robot enters certain regions of the workspace relative to the object's surface. The proposed method was implemented and validated experimentally with a 7-DOF robotic manipulator. The experimental results demonstrated that the robot can successfully scan an entire region of an object of interest while closely following the curved surface of the object, with or without contact.

The adaptive controllers and the switching control scheme can be improved in future work. This includes designing new adaptive controllers that are less sensitive and have faster but smoother response to changes in the environment, and using a hybrid switched control system to allow actual data fusion from multiple sensing sources which would result in smoother switching between the interaction modes.


The authors wish to acknowledge the contribution of Mr. Pascal Laferrière to this research through the development of the instrumented compliant wrist device.

[1] Y. H. Yin, H. Hu, and Y. C. Xia, "Active tracking of unknown surface using force sensing and control technique for robot, " Sens. Actuators A Phys., vol. 112, no. 2-3, pp. 313-319, May 2004.
[2] E. C. Li and Z. M. Li, "Surface tracking with robot force control in unknown environment, " Adv. Mater. Res. , vol. 328-330, pp. 2140-2143, Sep. 2011.
[3] Y. Kang, Z. J. Li, X. Q. Cao, and D. H. Zhai, "Robust control of motion/force for robotic manipulators with random time delays, " IEEE Trans. Control Syst. Technol. , vol. 21, no. 5, pp. 1708-1718, Sep. 2013.
[4] R. Ibrayev and Y. B. Jia, "Recognition of curved surfaces from 'one-dimensional' tactile data, " IEEE Trans. Autom. Sci. Eng. , vol. 9, no. 3, pp. 613-621, Jul. 2012.
[5] P. Payeur, C. Pasca, A. M. Cretu, and E. M. Petriu, "Intelligent haptic sensor system for robotic manipulation, " IEEE Trans. Instrum. Meas. , vol. 54, no. 4, pp. 1583-1592, Aug. 2004.
[6] D. Nakhaeinia, P. Payeur, A. Chávez-Aragón, A. M. Cretu, R. Laganière, and R. Macknojia, "Surface following with an RGB-D vision-guided robotic system for automated and rapid vehicle inspection, " Int. J. Smart Sens. Intell. Syst. , vol. 9 no. 2, pp. 419-447, Jun. 2016.
[7] S. Hutchinson, G. D. Hager, and P. I. Corke, "A tutorial on visual servo control, " IEEE Trans. Rob. Autom. , vol. 12, no. 5, pp. 651-670, Oct. 1996.
[8] D. Xiao, B. K. Ghosh, N. Xi, and T. J. Tarn, "Sensor-based hybrid position/force control of a robot manipulator in an uncalibrated environment, " IEEE Trans. Control Syst. Technol. , vol. 8, no. 4, pp. 635-645, Jul. 2000.
[9] C. C. Cheah, S. P. Hou, Y. Zhao, and J. J. E. Slotine, "Adaptive vision and force tracking control for robots with constraint uncertainty, " IEEE/ASME Trans. Mechatron., vol. 15, no. 3, pp. 389-399, Jun. 2010.
[10] I. Siradjuddin, L. Behera, T. M. McGinnity, and S. Coleman, "A position based visual tracking system for a 7 DOF robot manipulator using a Kinect camera, " in Proc. 2012 Int. Joint Conf. Neural Networks, Brisbane, QLD, Australia, pp. 1-7.
[11] G. J. García, P. Gil, D. Llácer, and F. Torres, "Guidance of robot arms using depth data from RGB-D camera, " in Proc. 10th Int. Conf. Informatics in Control, Automation and Robotics, Reykjavík, Iceland, 2013, pp. 1-7.
[12] D. Nakhaeinia, R. Fareh, P. Payeur, and R. Laganière, "Trajectory planning for surface following with a manipulator under RGB-D visual guidance, " in Proc. IEEE Int. Symp. Safety, Security, and Rescue Robotics, Linkoping, Sweden, 2013, pp. 1-6.
[13] Z. J. Li, S. T. Xiao, S. S. Ge, and H. Su, "Constrained multilegged robot system modeling and fuzzy control with uncertain kinematics and dynamics incorporating foot force optimization, " IEEE Trans. Syst. Man Cybern. Syst. , vol. 46, no. 1, pp. 1-15, Jan. 2016.
[14] Z. J. Li, Z. T. Chen, J. Fu, and C. Y. Sun, "Direct adaptive controller for uncertain MIMO dynamic systems with time-varying delay and dead-zone inputs, " Automatica, vol. 63, pp. 287-291, Jan. 2016.
[15] Y. M. Li, S. C. Tong, L. Liu, and G. Feng, "Adaptive output-feedback control design with prescribed performance for switched nonlinear systems, " Automatica, vol. 80, pp. 225-231, Jun. 2017.
[16] R. Macknojia, A. Chávez-Aragón, P. Payeur, and R. Laganière, "Experimental characterization of two generations of Kinect's depth sensors, " in Proc. 2012 IEEE Int. Symp. Robotic and Sensors Environments, Magdeburg, Germany, pp. 150-155.
[17] R. Macknojia, A. Chávez-Aragón, P. Payeur, and R. Laganière, "Calibration of a network of Kinect sensors for robotic inspection over a large workspace, " in Proc. 2013 IEEE Workshop on Robot Vision, Clearwater Beach, FL, USA, pp. 184-190.
[18] H. Ghazouani, M. Tagina, and R. Zapata, "Robot navigation map building using stereo vision based 3D occupancy grid, " J. Artif. Intell. Theory Appl. , vol. 1, no. 3, pp. 63-72, Nov. 2011.
[19] E. U. Acar, H. Choset, A. A. Rizzi, P. N. Atkar, and D. Hull, "Morse decompositions for coverage tasks, " Int. J. Rob. Res. , vol. 21, no. 4, pp. 331-344, Apr. 2002.
[20] D. Nakhaeinia, P. Payeur, and R. Laganière, "Adaptive robotic contour following from low accuracy RGB-D surface profiling and visual servoing, " in Proc. Canadian Conf. Computer and Robot Vision, Montreal, QC, Canada, 2014, pp. 48-55.
[21] P. Laferrière, "Instrumented compliant wrist system for enhanced robotic interaction, " M. S. thesis, University of Ottawa, Ottawa, Canada, 2016.
[22] A. A. Moughlbay, E. Cervera, and P. Martinet, "Error regulation strategies for model based visual servoing tasks: application to autonomous object grasping with Nao robot, " in Proc. 12th Int. Conf. Control Automation Robotics & Vision, Guangzhou, China, 2012, pp. 1311-1316.
[23] T. Kröger and F. M. Wahl, "Stabilizing hybrid switched motion control systems with an on-line trajectory generator, " in Proc. 2010 IEEE Int. Conf. Robotics and Automation, Anchorage, AK, USA, pp. 4009-4015.