^{2} University of Chinese Academy of Sciences, Beijing 100190, China
Precision workpieces are widely used in many industrial areas, such as consumer electronic products, medical instruments, and aeronautic vehicles. The quality of the workpieces directly affects the functionality and reliability of the products. In advanced manufacturing, production of precision workpieces is characterized by great variety, large quantities, short cycle times, and high quality^{[1, 2]}. Along with increased quality demands, some workpieces require 100% inspection. Meanwhile, the requirement for efficient detection and measurement of such a large quantity of items is increasing constantly. However, most quality control is still conducted manually, which suffers from drawbacks such as low consistency, low efficiency, and increasing labor shortages. Therefore, developing automatic detection and measurement techniques for precision workpieces is attracting more and more attention and is worth deep investigation^{[3]}.
The reported workpiece detection and measurement approaches can be classified into two categories, i.e., contact methods based on coordinate measuring machines (CMMs)^{[4, 5]} and non-contact methods based on laser sensors^{[6, 7]}. Contact-based systems normally consist of a CMM and a touch-trigger probe, which acquire point features on the workpiece surface in an iterative way. The dimensional information of the workpieces is then obtained from the corresponding displacement differences. Although high precision can be achieved, these methods require manual auxiliary operations that greatly decrease efficiency, so they can hardly handle batch detection tasks. Besides, contact probing may cause unexpected damage to the workpiece surface. To overcome these problems, non-contact methods exist as alternative solutions. For instance, in [6, 7], laser-based inspection systems were presented, which were based on laser triangulation by means of a laser stripe in metrological applications. These methods acquire object surface points rapidly, but the inspection precision is seriously limited by the resolution of the laser sensors.
With the rapid development of digital cameras and machine vision, another non-contact method, the vision-based measurement technique, has been widely researched. Digital cameras have the advantages of high resolution, compact dimensions, and low cost. Therefore, vision-based measurement has become an important approach in metrological and inspection applications^{[8-12]}. Sun et al.^{[9]} presented a vision-based inspection system for the top, side, and bottom surfaces of electric contacts. Different image preprocessing and inspection methods were proposed to detect surface defects, but a quantitative measuring method for electric contacts was not mentioned. Maddala et al.^{[10]} reported a vision-based measurement approach, derived from a superimposed adaptable ring, for pill shape detection. An automatic surface defect detection system for mobile phone screen glass was introduced by Jian et al.^{[11]}, who developed a contour-based registration method and an improved fuzzy c-means cluster (IFCM) algorithm. Ghidoni et al.^{[12]} proposed an automatic color inspection system for colored wires in electric cables. By means of a self-learning subsystem, the inspection process is implemented automatically; neither manual input from the operator nor loading new data into the machine is required. However, this method is effective only when the objects have obvious color features. The vision-based systems described above are non-invasive and can achieve high accuracy. However, in the literature, there is no research on automatic detection of numerous precision workpieces along a production batch. Moreover, a strong constraint that an automatic workpiece detection algorithm has to satisfy is the ability to run in real time.
The automatic precision workpiece measurement task can generally be divided into two main subprocedures: workpiece detection and dimensional measurement. Detection is the primary and essential step, particularly when all workpieces along the production batch must be inspected. Missing one workpiece from the detection sequence will lead to serious errors in the following pick-up judgments. If the mistake is not discovered soon enough, all the workpieces in the batch need to be retested. Therefore, high detection accuracy is very important in this application. Meanwhile, real-time detection requires the algorithm to be highly efficient. Many object detection algorithms have been researched for various applications^{[13-16]}. Among these algorithms, template matching is particularly applicable to targets with prior knowledge, such as shape, size, and texture. The object is found by computing the similarity between the model and all the relevant poses and checking whether the similarity exceeds a given threshold. In [17], the similarity is defined based on the intensities of the template and the search image. This method can capture differences on the object surface, which can be used for defect detection. However, comparing differences over all pixels is computationally expensive. Feature-based methods recognize the object in a more compressed and efficient way than gray-value based ones^{[11, 18]}. Points, contours, polygons, or regions can serve as the features in different applications. For example, in [18], edges and their orientations are used as features. The model, generated from one reference image of the object, consists of the shape and the corresponding gradient directions along this shape. This category of algorithms allows rapid computation and is therefore more efficient. However, for online detection of a considerable number of workpieces, even higher detection efficiency is needed.
In order to further improve the speed of workpiece detection, visual saliency theories are introduced in this paper. Salient object detection, considered a preprocessing procedure in computer vision, is widely used in object detection, image retrieval, and object recognition^{[19-21]}. Bringing in visual saliency theories has great prospects for real-time detection of enormous numbers of workpieces. For decades, many research efforts have been devoted to establishing computational saliency models, and there are mainly two kinds: top-down saliency models and bottom-up saliency models. Top-down salient object detection methods are task-driven, relying on supervised learning with prior knowledge of the object^{[19]}. Bottom-up salient object detection methods are data-driven, relying on image features such as contour, contrast, and texture^{[20, 21]}. A detection method combining top-down and bottom-up saliency is presented in this paper, which takes advantage of both the high detection accuracy of top-down feature extraction and the high detection efficiency of bottom-up salient region detection.
The motivation of this paper is to design a detection and measurement algorithm for an online automatic detection and measurement system for precision workpieces. A real-time workpiece detection and measurement algorithm combining top-down and bottom-up saliency (DMTBS) is presented. In the proposed DMTBS method, prior knowledge of the workpieces is obtained by top-down salient feature extraction to improve detection accuracy, and a target template is established based on these features for subsequent detection. Then, bottom-up salient region detection based on background contrast is proposed for online images, which reduces the detection region and increases algorithm efficiency. Finally, the camera with a telecentric lens is calibrated and the crucial dimensions of the workpieces are measured. In summary, the main contributions of this paper include:
1) A realtime automatic detection and measurement system for precision workpieces is designed.
2) A fast and accurate detection and measurement algorithm for workpieces, named DMTBS, based on top-down and bottom-up saliency is presented.
3) Practical and comparative experiments are conducted, and the results demonstrate that the accuracy and efficiency of the proposed algorithm and system can meet the requirements of manufacturing.
The rest of this paper is organized as follows. The detection task specification and the designed system are introduced in Section 2. Then, the whole detection and measurement process is described. In Section 3, techniques of precision workpiece detection are presented. Top-down feature extraction and bottom-up saliency detection are described in detail. Section 4 gives the method of workpiece measurement, and the calibration of the visual system is presented. The experiments and results are given in Section 5. Finally, this paper is concluded in Section 6.
2 Task specification and system design
2.1 Task specification
High precision three-dimensional metal workpieces are extremely common in manufacturing. The size of these components is usually smaller than
Fig. 1. Samples of precision workpieces 
2.2 System design
The precision workpiece tasks are summarized as follows:
1) Locate the detection target in the workpieces sequence.
2) Measure the crucial dimension of the current workpiece.
3) Eliminate the defective ones automatically.
The automatic precision workpiece measurement system is designed as shown in Fig. 2. It consists of a rotation disc conveyor, an automatic feeding machine, a locating device, a visual measurement unit (including cameras and corresponding lighting systems), a pick-up mechanism, and a host computer. In the visual measurement unit, the lens system and illumination play very important roles in high-quality image acquisition. Normal lens systems perform a perspective projection of the world: a larger image is produced when objects are closer to the lens. In order to eliminate perspective distortions, bilateral telecentric lenses are adopted in this measurement system. In addition, because of the strong reflection of metal objects, coaxial illumination fitted to the telecentric lens system is used to reduce or prevent specular reflections and shadows.
Fig. 2. System configuration scheme 
Workpieces are conveyed to the rotation disc conveyor by the automatic feeding machine. The locating device adjusts all workpieces into almost the same position and pose on the disc. Through the rotation disc conveyor, the workpieces arrive at the visual detection units sequentially. The host computer is used to control the whole detection and measurement procedure, including the component feeding control, image capturing and image processing in the visual detection unit, and the control of the pick-up mechanism that eliminates defective workpieces. The relationship diagram of the host computer and the external devices is shown in Fig. 3.
Fig. 3. Relationship diagram of host computer and external devices 
2.3 Framework of proposed detection and measurement algorithm
The vision-based workpiece detection and measurement algorithm is composed of two main parts: workpiece detection and workpiece measurement. The workflow of the proposed algorithm is shown in Fig. 4. The workpiece detection part combines top-down feature extraction and bottom-up saliency detection. The workpiece measurement algorithm consists of camera calibration and dimension measurement. In the next sections, the details of workpiece detection and measurement will be discussed, respectively.
Fig. 4. Schematic block diagram of workpieces detection and measurement 
3 Workpieces detection combining top-down and bottom-up saliency
Both accuracy and efficiency are required for online detection of enormous numbers of precision workpieces. Detecting the target workpiece accurately is a prerequisite for the subsequent defective workpiece judgment. Meanwhile, an efficient detection algorithm is the determining factor of system speed. In order to satisfy both requirements at once, visual attention is introduced.
Humans face a tremendous amount of visual information in the surrounding world every moment, which cannot be completely processed by the visual system. However, humans can scan the environment rapidly and find targets accurately because of the mechanism of visual attention: by identifying salient regions in the visual field, people allocate their attention to the salient regions when viewing complex natural scenes. For many decades, psychologists and computer scientists have done a great deal of research and presented a number of computational models of salient object detection, which are categorized as either top-down or bottom-up models. Top-down models apply prior knowledge of the object to achieve high-accuracy searching. Bottom-up models analyze the basic features of the image to find the potential location of the object. In the task of workpiece detection, locating the object and its potential area is very important for the subsequent exact position detection and for the speedup needed to realize real-time detection. Therefore, the proposed DMTBS method combining top-down and bottom-up saliency is presented.
In this section, we first perform top-down feature extraction from prior knowledge of standard workpiece images, then the online bottom-up saliency detection method is presented. Finally, strategies for fast and accurate workpiece detection based on top-down feature extraction and bottom-up saliency detection are described in detail. The flow chart of the detection process is shown in Fig. 5.
Fig. 5. Flow chart of the detection process 
3.1 Top-down feature extraction
Precision workpieces have rich prior knowledge, such as contours, shapes, and sizes. Therefore, these features can be extracted from standard workpieces offline, and the template can be established from one or more images or a computer aided design (CAD) model of the qualified object itself.
In images of precision workpieces acquired online, contour characteristics are the most robust. Therefore, contours are adopted as the feature for establishing the template. Contour extraction is very important, especially for workpieces with fillets. The fillet of a workpiece usually leads to fuzzy edges, which are difficult to extract^{[22, 23]}. Fuzzy edges increase the computational complexity of the subsequent template creation step. In addition, the similarity measure between the template contour and the target contour is reduced. Therefore, an improved contour extraction method is discussed to detect an accurate single-pixel contour. In order to improve the efficiency of contour extraction, top-down saliency detection is applied. Top-down saliency models are task-driven and rely on prior knowledge of the test image.
Firstly, in order to remove noise and enhance the contrast of the detection object in the image, image filtering and image enhancement are adopted in the preprocessing step. Median filtering is a nonlinear digital filtering technique. By means of a median filter, important region contours and details are preserved while image noise is removed. Then, histogram equalization^{[24]} is applied to enhance the details of the image.
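The two preprocessing steps above can be sketched as follows. This is a minimal NumPy illustration, assuming an 8-bit grayscale image; the function names `median_filter3` and `equalize_hist` are ours, not from the paper.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: suppresses impulse noise while preserving edges."""
    p = np.pad(img, 1, mode="edge")
    # stack the 9 shifted neighbourhoods and take the per-pixel median
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

def equalize_hist(img):
    """Histogram equalization for a non-constant 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cmin = cdf[cdf > 0].min()
    # map gray levels through the normalised cumulative histogram
    lut = np.round((cdf - cmin) * 255.0 / (cdf[-1] - cmin))
    return np.clip(lut, 0, 255).astype(np.uint8)[img]
```

In practice a library routine (e.g. an OpenCV equivalent) would be used; the sketch only makes the two operations concrete.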
After the above image preprocessing, top-down saliency detection is implemented to reduce the subsequent processing region. Afterwards, the Canny operator is used to detect the original contour of the object^{[25]}. Because of the fillet of the component, the contours extracted in the image are usually redundant, multiple, and contain false edges. In the next target searching step, more contour pixels lead to more computational complexity. Meanwhile, false contours reduce the accuracy of the searching results. In order to obtain a single-pixel contour, morphology is employed to optimize the original contour extracted by the Canny operator. Firstly, dilation is utilized to merge the fuzzy edges. Then, by skeletonizing these edges and combining collinear ones, a one-pixel contour is obtained.
The optimized contour is consistent, integrative, and minimal, which makes it appropriate for template creation.
3.2 Bottom-up saliency detection based on background contrast
After establishing an accurate template of the workpieces offline, an effective method of searching for this template in online images is considered. Searching for an object in the whole image is computationally expensive. Consequently, reducing the search region is obviously a feasible way to decrease the computational complexity. Salient regions, which are distinct from the image background, can well cover the potential area where the objects are located. Therefore, a saliency detection algorithm applied before object searching is discussed, which helps improve detection efficiency.
Bottom-up saliency detection is data-driven and relies on low-level image characteristics without prior knowledge of the object. Because of the difficulty of background determination, existing methods calculate saliency using the contrast to local neighborhoods or to the whole image. In our application, the workpieces are normally in the middle of the image. Therefore, the regions near the image boundary are considered background.
First, in order to extract the image boundary, we use the simple linear iterative clustering (SLIC) superpixel segmentation method to over-segment the image^{[26]}. Compared with the traditional grid image segmentation method, superpixel segmentation preserves the integrity of a salient object and reduces the number of image blocks. Then, the superpixels that touch the boundary of the image are considered background elements, and we denote this set of elements as
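Assuming a superpixel label map is already available (e.g. from an SLIC implementation), the background set of boundary-touching superpixels can be collected as in this illustrative sketch; the function name is our own.

```python
import numpy as np

def boundary_superpixels(labels):
    """Return the sorted set of superpixel labels touching the image border.

    labels: 2-D integer array, one superpixel label per pixel
    (e.g. the output of an SLIC over-segmentation).
    """
    border = np.concatenate([labels[0, :], labels[-1, :],
                             labels[:, 0], labels[:, -1]])
    return sorted(set(border.tolist()))
```

These labels form the background set against which the contrast of every other superpixel is computed.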
After extracting image boundary
$ \begin{equation} contrast({p_i}) = \sum\limits_{j \in B} {d({c_i}, {c_j})} \exp\left( - \frac{{d({l_i}, {l_j})}}{{2\sigma _l^2}}\right) \end{equation} $  (1) 
where
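A possible NumPy rendering of the background-contrast measure in (1) is sketched below. The input layout (per-superpixel mean colors, centroid positions normalized to [0, 1], boundary indices) and the value of the spatial bandwidth `sigma_l` are our assumptions, not choices specified by the paper.

```python
import numpy as np

def background_contrast(colors, centers, boundary_idx, sigma_l=0.25):
    """Saliency of each superpixel: colour contrast against the
    boundary (background) elements, weighted by spatial proximity
    as in Eq. (1).

    colors:       (N, C) mean colour of each superpixel
    centers:      (N, 2) superpixel centroids, normalised to [0, 1]
    boundary_idx: indices of the boundary superpixels (set B)
    """
    sal = np.zeros(len(colors))
    for i in range(len(colors)):
        dc = np.linalg.norm(colors[i] - colors[boundary_idx], axis=1)
        dl = np.linalg.norm(centers[i] - centers[boundary_idx], axis=1)
        sal[i] = np.sum(dc * np.exp(-dl / (2 * sigma_l ** 2)))
    return sal
```

A superpixel that looks like the boundary elements receives a score near zero, while one that differs strongly from the nearby background scores high; thresholding the scores yields the salient search region.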
3.3 Workpieces searching
As described in the previous section, workpieces possess rich prior knowledge. We apply the top-down feature extraction method to obtain crucial features of the workpieces, and a standard template is established based on these features. By calculating the similarity between the workpiece template and the images acquired online, the target workpiece can be detected by setting an appropriate threshold. In order to realize real-time detection, bottom-up salient region detection based on background contrast is proposed for online images. By reducing the detection region of the online images, the detection efficiency can be further improved.
In this subsection, the calculation of the similarity between the template established by top-down feature extraction and the salient region of the search images is discussed. Finally, a hierarchical search strategy based on image pyramids is presented to increase the speed of the algorithm^{[15]}.
The optimized contour of the template, which is generated from an image of the object, consists of a set of points
$ \begin{align*}A = \left( {\begin{array}{*{20}{c}} {\cos \theta }&{ - \sin \theta }\\ {\sin \theta }&{\cos \theta } \end{array}} \right). \end{align*} $ 
The similarity measure compares the transformed template with the image. It should be robust to the presence of occlusion, clutter, and nonlinear illumination changes. The similarity measure s, which achieves these requirements, is computed as follows:
$ \begin{align} s =& \frac{1}{n}\sum\limits_{i = 1}^n \frac{{d'}_i^{\rm T}e_{q+p'_i}}{\left\| {d'_i} \right\|\left\| {e_{q+p'_i}} \right\|}=\nonumber\\ & \frac{1}{n}\sum\limits_{i = 1}^n\frac{t'_iv_{r+r'_i, c+c'_i}+u'_iw_{r+r'_i, c+c'_i}}{\sqrt{t'^2_i+u'^2_i}\sqrt{v^2_{r+r'_i, c+c'_i}+w^2_{r+r'_i, c+c'_i}}} \end{align} $  (2) 
where
$ \begin{equation} {s_j} = \frac{1}{j}\sum\limits_{i = 1}^j {\frac{{{d'}_i^{\rm T}{e_{q + {p'_i}}}}}{{\left\| {d'_i} \right\|\left\| {{e_{q + {p'_i}}}} \right\|}}}. \end{equation} $  (3) 
Because each of the normalized dot products is no more than 1, the partial score
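The similarity measure in (2), together with an early-termination check in the spirit of (3) (abort once even perfect scores on all remaining points could not reach a threshold s_min), might be sketched as follows. The data layout and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def similarity(template_pts, template_dirs, grad_x, grad_y, q, s_min=0.7):
    """Normalised gradient dot-product score of a template at offset q.

    template_pts:  list of (row, col) offsets of the template contour points
    template_dirs: list of (t, u) gradient directions at those points
    grad_x/grad_y: gradient images of the search image
    Returns the score in [-1, 1], or None if the pose was abandoned early.
    """
    n = len(template_pts)
    acc = 0.0
    for j, ((r, c), (t, u)) in enumerate(zip(template_pts, template_dirs), 1):
        v = grad_x[q[0] + r, q[1] + c]
        w = grad_y[q[0] + r, q[1] + c]
        norm = np.hypot(t, u) * np.hypot(v, w)
        if norm > 0:
            acc += (t * v + u * w) / norm
        # even if every remaining dot product were 1, could s_min be reached?
        if acc / n < s_min - 1 + j / n:
            return None  # hopeless pose: stop early
    return acc / n
```

A pose whose gradients align with the template everywhere scores 1; opposed gradients quickly trigger the early-out, which is what makes scanning many poses affordable.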
Evaluating the similarity measure over the entire image is very time-consuming, even when the stopping criteria discussed above are used. In order to gain a speedup, we can reduce the number of poses that need to be checked as well as the number of template points. An image pyramid scales down the image and the template multiple times by a factor of 2 to create a data structure, as shown in Fig. 6.
Fig. 6. Schematic diagram of image pyramids 
In this paper, the mean filter is adopted to construct image pyramids. The hierarchical search strategy is explained as follows:
Algorithm 1. The hierarchical search strategy based on image pyramid.
Input: Template image
Output: flag: 1 object is found; 0 no object is found, Position
1) Calculate an appropriate number of image pyramids
2) Generate
3) Generate
4) Initialize
5) for
6) Calculate the similarity measurement s
7) between
8) if
9) The object is found at the ith layer:
10)
11) match in the lower pyramid level to avoid the
12) uncertainty in the location of the match
13) else
14) Break
15) end if
16) end for
17) if
18)
19) else
20)
21) end if
22) return flag, position
Using an image pyramid can greatly improve the computational efficiency. If we perform the complete matching on level 4, for example, the amount of computation can be reduced by a factor of 4096.
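A mean-filter pyramid as described above can be sketched as follows (illustrative code, not the authors' implementation). Each level halves both image dimensions, so on level 4 both the search image and the template have been reduced by a factor of 8 per dimension, giving roughly the 8² × 8² = 4096-fold reduction quoted above.

```python
import numpy as np

def build_pyramid(img, levels):
    """Image pyramid by repeated 2x2 mean-filter downsampling.

    Level 0 is the original image; each further level halves
    both dimensions (odd sizes are cropped to even first).
    """
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        a = a[:h, :w]
        # average each non-overlapping 2x2 block
        pyramid.append((a[0::2, 0::2] + a[0::2, 1::2] +
                        a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
    return pyramid
```

The hierarchical search then matches on the coarsest level first and only refines the few surviving candidates on the finer levels.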
4 Measurement of dimension of workpieces
4.1 Calibration of camera system with telecentric lens
In the measurement of precision workpieces, high measurement accuracy is demanded. Because of its purely orthographic projection, constant magnification, and very small distortion, a telecentric lens system is chosen for the proposed measurement system. In order to obtain the actual dimensions of the workpieces, the camera system with the telecentric lens needs to be calibrated first. In this section, we present a convenient and practical method for telecentric lens system calibration.
In a bilateral telecentric lens system, two telecentric lenses are located on the two sides of a small aperture stop. The distance between the two lenses is the sum of their focal lengths, and the aperture stop is placed exactly in the common focal plane between the two lenses. Therefore, only incident and emergent rays that are approximately parallel to the optical axis of the lens can pass through. The magnification k is a significant parameter of a bilateral telecentric lens system and must therefore be calibrated precisely.
In visual measurement, the relative distance between two points is more practical than the absolute positions of the points. Therefore, we can utilize points with known distances to calibrate the visual system. Suppose there are
$ \begin{equation} \left[ {\begin{array}{*{20}{c}} {\Delta {u_i}}\\ {\Delta {v_i}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} k&0&0\\ 0&k&0\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {\Delta {x_{wi}}}\\ {\Delta {y_{wi}}}\\ 1 \end{array}} \right], \quad i=1, 2, \cdots, n. \end{equation} $  (4) 
In order to facilitate the practical application, we use the distance between two points to calculate the parameter k:
$ \begin{equation} k = \frac{{\sqrt {\Delta {x_w}^2 + \Delta {y_w}^2} }}{{\sqrt {\Delta {u^2} + \Delta {v^2}} }}. \end{equation} $  (5) 
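Equation (5) can be applied to several point pairs with known world-space distances and the estimates averaged for robustness; the helper below is an illustrative sketch (the function name and the averaging strategy are our assumptions).

```python
import numpy as np

def calibrate_magnification(world_pts, pixel_pts):
    """Estimate the telecentric magnification k (world units per pixel)
    from corresponding points, e.g. on a calibration grid.

    Applies Eq. (5) to each consecutive point pair and averages.
    """
    world_pts = np.asarray(world_pts, dtype=float)
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    ks = []
    for i in range(len(world_pts) - 1):
        dw = np.linalg.norm(world_pts[i + 1] - world_pts[i])  # known distance
        du = np.linalg.norm(pixel_pts[i + 1] - pixel_pts[i])  # measured pixels
        ks.append(dw / du)
    return float(np.mean(ks))
```

Because a telecentric lens has constant magnification across the field, a single scalar k suffices, which is what makes this calibration so simple compared with a perspective camera model.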
4.2 Crucial dimension measurement
In industrial manufacturing, precision workpieces need to be assembled with other components. An error in a crucial dimension, such as the size of the assembly holes on the bottom of the workpieces, directly influences the performance and reliability of the products. Therefore, it is of great significance to measure these crucial dimensions precisely.
Taking the assembly holes on the workpieces as an example, the measurement steps are as follows: First, segment the object from the image. Second, extract the contour of the hole and remove obvious outliers. Finally, fit the contour to a circle by minimizing the sum of the squared distances of the contour points to the circle:
$ \begin{equation} {\varepsilon ^2} = \sum\limits_{i = 1}^n {{{\left( {\sqrt {{{({r_i} - \alpha )}^2} + {{({c_i} - \beta )}^2}} - \frac{d}{2}} \right)}^2}}. \end{equation} $  (6) 
Since the visual system has been calibrated in advance, the actual size of the hole can be obtained as
$ \begin{equation} D = k\times{d} \end{equation} $  (7) 
where D is the actual diameter of the hole and d is the pixel diameter of the hole.
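Minimizing (6) exactly is a nonlinear problem; a standard linear approximation is the algebraic (Kåsa) circle fit, sketched below together with the conversion of (7). The choice of the Kåsa fit here is our illustrative substitution, not necessarily the fitting method used by the authors.

```python
import numpy as np

def fit_circle(rows, cols):
    """Algebraic (Kasa) least-squares circle fit.

    Rewrites (r - a)^2 + (c - b)^2 = R^2 as the linear system
    r^2 + c^2 = 2a*r + 2b*c + (R^2 - a^2 - b^2) and solves it.
    Returns centre (alpha, beta) and pixel diameter d.
    """
    r = np.asarray(rows, dtype=float)
    c = np.asarray(cols, dtype=float)
    A = np.column_stack([r, c, np.ones_like(r)])
    b = r ** 2 + c ** 2
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    alpha, beta = x[0] / 2, x[1] / 2
    radius = np.sqrt(x[2] + alpha ** 2 + beta ** 2)
    return alpha, beta, 2 * radius

def actual_diameter(k, d_pixels):
    """Convert the pixel diameter to the actual diameter via Eq. (7)."""
    return k * d_pixels
```

On clean contour points the algebraic fit coincides with the geometric minimum of (6); with noise it is a close, commonly used approximation that can also seed an iterative refinement.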
4.3 Measurement procedure of workpieces dimension
To measure the dimension of a workpiece, the object detection process is performed first. Then, by extracting and fitting the contours of the workpiece, its pixel dimension can be obtained. Finally, the pixel result is transformed to the actual dimension using the calibration parameters. The complete procedure of the precision workpiece detection and measurement algorithm combining top-down and bottom-up saliency is described as follows.
Algorithm 2. Precision workpiece detection and measurement combining topdown and bottomup saliency.
Input: Template image
Output: Flag (1 object is found; 0 no object is found), Diameter
1) a) Topdown feature extraction
2) Preprocessing of
3) Canny edge detection
4) Optimize edges
5) Establish template edges
6) b) Bottomup saliency detection
7) Superpixel segmentation to
8) Image boundary
9) for each point
10) Calculate the measurement of region saliency as a)
11) end for
12) get salience region
13) c) Workpieces searching
14) Initialize pyramid layer
15) search region
16) for
17) Calculate
18) and
19) if
20) flag at the ith layer:
21) else
22) Break
23) end if
24) end for
25) if
26)
27) else
28)
29) end if
30) d) Crucial dimension measurement
31) Camera calibration
32) Circle fitting and get pixel diameter d
33) Calculate the actual diameter
34) return flag, position
5 Experiments and results
5.1 Experimental system
According to the system designed in Section 2, an experimental system was established, as shown in Fig. 7. The vision unit consists of a charge coupled device (CCD) camera and a telecentric lens system with a magnification of 0.057
Fig. 7. Experimental system 
5.2 Feature extraction experiments
Samples of the precision workpieces to be detected are shown in Fig. 1. We chose the first type of sample, which has a hole in the bottom of the workpiece, for the following experiments.
In order to establish the template contour, we applied Canny detector to extract original contour in the template image. The parameters of Canny operator were
Fig. 8. Contour detection results 
5.3 Saliency detection experiments
The number of pixels in the search image directly influences the computational complexity of workpiece detection. Reducing the search region is a practical method to increase matching efficiency. Therefore, the salient region detection method described in Section 3.2 was implemented. First, the input image was over-segmented with the superpixel segmentation method. The superpixel size was set to 600 pixels, and the image elements are shown in Fig. 9(b). Then, we extracted the boundary of the image (as shown in Fig. 9(c)) and computed the contrast saliency against these boundary elements. Finally, the salient region was segmented as shown in Fig. 9(d).
Fig. 9. Salient region detection results 
5.4 Workpieces detection experiments
According to the workpiece detection algorithm presented in Section 3, practical experiments were implemented in this section. Two measures were adopted to evaluate the performance of our method: algorithm runtime and similarity score. Runtime is the algorithm computation time; smaller values equate to higher efficiency. The similarity score described in Section 3.3 is a percentage; higher values indicate better matches. There are two main parameters that influence the performance of the DMTBS algorithm. One is the number of pyramid layers chosen in the workpiece searching process. The other is the window size of the dilation operation in the template establishment process.
As introduced in the previous section, the image pyramid can reduce computational complexity. Ten workpieces were chosen as samples to perform detection experiments with the previously established template. For each sample, seven independent trials were performed with different numbers of pyramid layers, from 1 to 7. The runtimes of the algorithm with different pyramid layers were recorded, as shown in Table 1.
In order to display the results visually, the curves of algorithm runtime with different pyramid layers are drawn in Fig. 10(a). The average runtimes of the ten samples with different pyramid layers are shown in Fig. 10(b). As can be seen from Figs. 10(a) and (b), when 1 pyramid layer was chosen, the algorithm runtimes were up to 2000 ms. When 2 pyramid layers were employed, the algorithm runtimes were approximately 100 ms, a significant decrease. From 3 to 5 layers, the runtimes still showed a small decline, and after 5 layers the results no longer dropped but stayed nearly the same. The trend of the resulting curves indicates that the image pyramid can effectively improve the efficiency of the algorithm. However, excessive layering cannot reduce the runtime further; instead, it may cause the shape feature of the object to be lost in the top layer and the matching to fail. Therefore, it is important to choose an appropriate number of image pyramid layers. In this experiment, 4 is considered suitable.
Fig. 10. Results of algorithm runtime with different pyramid layers 
The relationship between similarity scores and pyramid layers was also tested with 7 groups of trials. The results are shown in Table 2. Meanwhile, the curves of similarity scores with different pyramid layers are drawn in Fig. 11(a). The average similarity score for each number of layers is shown in Fig. 11(b).
Fig. 11. Results of similarity scores with different pyramid layers 
As can be seen in Table 2, the maximum similarity score is 88.63% and the minimum is 62.40%. From Fig. 11(a), each sample has almost the same value for different pyramid layers. Therefore, the differences in the results are caused by the shape of each sample, not by the number of pyramid layers. It can thus be observed that the number of pyramid layers has only a limited influence on the similarity score.
As introduced in the previous section, another factor that influences the algorithm is the size of the dilation window. In the template establishment stage, the size of the dilation window directly determines the contour of the template. Therefore, ten workpieces were selected as samples to perform matching against the templates. Five templates were made from the same image with different dilation window sizes. The sizes of the dilation windows were
To display the results visually, the curves of algorithm runtime with different dilation window sizes are drawn in Fig. 12(a). The average runtime of the ten samples with different dilation window sizes is shown in Fig. 12(b). As can be seen in Table 3, the maximum algorithm runtime is 4.75 ms and the minimum is 3.11 ms; the maximum difference among all these experiments is 1.64 ms. From Figs. 12(a) and (b), the relationship between algorithm runtime and dilation window size is not a simple scaling relationship. Therefore, the size of the dilation window has little effect on the algorithm efficiency.
Fig. 12. Results of algorithm runtime with different size of dilation window 
The relationship between similarity scores and the size of the dilation window was also tested with 5 groups of trials. For each group, ten workpieces were selected to match against the templates established from the same image with different dilation window sizes. The sizes of the dilation windows were
The curves of similarity scores for different dilation window sizes are drawn in Fig. 13(a). The average similarity scores of the ten samples with different dilation window sizes are shown in Fig. 13(b). The maximum value of each sample is marked in Table 4. Obviously, the maximum values appear 7 times for the dilation window size
Fig. 13. Results of similarity scores with different size of dilation window 
5.5 Workpiece detection comparative experiments
To analyze the efficiency of the proposed DMTBS algorithm, comparative experiments were performed against two other detection algorithms: the gray-value-based method in [17] and the contour-based method in [11]. Ten workpieces were selected and detected by each method. The algorithm runtimes are listed in Table 5, and the corresponding comparison curves are drawn in Fig. 14.
Fig. 14. Curves comparing the runtimes of the different methods
As shown in Table 5 and Fig. 14, the average runtimes of the method in [17], the method in [11], and ours are 9.07 ms, 4.215 ms, and 4.094 ms, respectively. Compared with the gray-value-based method in [17], the contour-based method in [11] is faster because the object contour contains far fewer pixels than the object surface. In addition, our method applies top-down and bottom-up saliency detection to reduce the search region, which further improves computational efficiency.
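The speed advantage of restricting the search region can be made concrete by counting the candidate template placements that exhaustive matching must score. The image, ROI, and template sizes below are illustrative, not the paper's values.

```python
def candidate_positions(search_h, search_w, tpl_h, tpl_w):
    """Number of template placements exhaustive matching must evaluate
    when sliding a tpl_h x tpl_w template over a search_h x search_w region."""
    return (search_h - tpl_h + 1) * (search_w - tpl_w + 1)

# Full image vs. a saliency-restricted region of interest (sizes illustrative).
full = candidate_positions(480, 640, 60, 60)
roi = candidate_positions(120, 160, 60, 60)
print(full, roi, full / roi)
```

Even with this crude count, shrinking the search region to a quarter of each image dimension cuts the number of candidate placements by more than an order of magnitude, which is the mechanism behind the runtime gain reported above.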
Accordingly, the matching similarity scores are listed in Table 6, and the corresponding comparison curves are drawn in Fig. 15.
Fig. 15. Curves comparing the similarity scores of the different methods
As shown in Table 6 and Fig. 15, the average similarity scores of the method in [17], the method in [11], and ours are 61.684 %, 66.028 %, and 76.471 %, respectively. The results show that our method achieves the highest similarity score. If 60 % is set as the threshold for deciding whether an object has been found successfully, the detection success rates are 60 %, 70 %, and 100 %, respectively. The experimental results therefore illustrate that the method proposed in this paper combines high computation speed with outstanding detection performance.
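The thresholding step is straightforward to state in code. The scores below are made-up illustrative values, not the entries of Table 6:

```python
# Success rate under a fixed similarity threshold: a detection counts as
# successful when its score reaches the threshold (values illustrative).
threshold = 0.60
scores = [0.72, 0.58, 0.81, 0.63, 0.55, 0.90, 0.77, 0.61, 0.69, 0.74]
success_rate = sum(s >= threshold for s in scores) / len(scores)
print(f"{success_rate:.0%}")
```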
Fig. 16. Measurement results 
5.6 Measurement experiments for workpiece dimensions
According to the measurement method presented in Section 4, the camera system with the telecentric lens must be calibrated first. The pixel equivalent of the camera was obtained with a micro-calliper as follows:
$ \begin{align} k = \frac{1}{n}\sum\limits_{i = 1}^n {\frac{{\sqrt {\Delta {x_{Wi}}^2 + \Delta {y_{Wi}}^2} }}{{\sqrt {\Delta {u_i}^2 + \Delta {v_i}^2} }}} = \frac{1}{n}\sum\limits_{i = 1}^n {\frac{1}{{\Delta {d_i}}}} = 0.008977 \end{align} $  (8)
where,
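Equation (8) transcribes directly into code: the pixel equivalent k is the average, over n calibration point pairs, of the ratio between the world-coordinate displacement and the corresponding pixel displacement. The displacement values in the test are made up for illustration.

```python
import math

def pixel_equivalent(world_disps, pixel_disps):
    """k = (1/n) * sum( ||(dx_W, dy_W)|| / ||(du, dv)|| ), as in (8).
    world_disps: list of (dx_W, dy_W) displacements in world units.
    pixel_disps: list of (du, dv) displacements in pixels."""
    assert len(world_disps) == len(pixel_disps)
    total = 0.0
    for (dxw, dyw), (du, dv) in zip(world_disps, pixel_disps):
        total += math.hypot(dxw, dyw) / math.hypot(du, dv)
    return total / len(world_disps)
```

Averaging the ratio over several displacements, rather than using a single pair, suppresses random error in the individual calliper readings.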
In this experiment, the diameter of the assembly hole on the bottom of the workpiece was selected as the crucial dimension for size measurement using the method described in Section 4. Ten workpieces were chosen as samples, and the ground truth was obtained with a micrometer. The measurement results are listed in Table 7.
Figs. 16 (a) and (b) show the absolute errors and the relative errors of the measurement results, respectively. From Table 7, the maximum absolute error of the diameter is 0.035, and the maximum relative error is 0.51 %. The results demonstrate that the measurement precision for the critical dimension of the workpieces meets the requirements of manufacturing.
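The error computation behind Table 7 amounts to converting the pixel diameter to world units via the pixel equivalent and comparing against the micrometer reading. All numeric values below are illustrative, not taken from Table 7:

```python
# Converting a measured diameter from pixels to world units and computing
# errors against a micrometer ground truth (all values illustrative).
k = 0.008977            # pixel equivalent from (8), world units per pixel
diameter_px = 765.0     # hole diameter measured in the image, in pixels
ground_truth = 6.900    # micrometer reading, in world units

measured = diameter_px * k
abs_err = abs(measured - ground_truth)
rel_err = abs_err / ground_truth
```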
6 Discussions
A novel precision workpiece detection and measurement method combining top-down and bottom-up saliency is presented in this paper. Based on this algorithm, a real-time automatic detection and measurement instrument was designed. Template creation by top-down feature extraction ensures detection accuracy, while reduction of the search region by bottom-up saliency detection improves the efficiency of the algorithm. Practical and comparative experiments were conducted, and the results illustrate that the proposed workpiece detection method is characterized by high efficiency and good performance. In addition, calibration of the visual system with a telecentric lens is discussed. Crucial dimensions of the workpieces were measured with a maximum relative error below 0.51 %, a precision that meets the requirements of manufacturing. Furthermore, the proposed DMTBS detection algorithm is a useful reference for similar object detection tasks, such as pill detection and sugar detection.
Acknowledgements
This work was supported by National Natural Science Foundation of China (Nos. 61379097, 91748131, 61771471, U1613213 and 61627808), National Key Research and Development Plan of China (No. 2017YFB1300202), and Youth Innovation Promotion Association Chinese Academy of Sciences (CAS) (No. 2015112).
[1]
E. N. Malamas, E. G. M. Petrakis, M. Zervakis, L. Petit, J. D. Legat. A survey on industrial vision systems, applications and tools. Image and Vision Computing, vol. 21, no. 2, pp. 171-188, 2003. DOI: 10.1016/S0262-8856(02)00152-X.
[2]
Q. Y. Gu, I. Ishii. Review of some advances and applications in real-time high-speed vision: Our views and experiences. International Journal of Automation and Computing, vol. 13, no. 4, pp. 305-318, 2016. DOI: 10.1007/s11633-016-1024-0.
[3]
N. Mostofi, F. Samadzadegan, S. Roohy, M. Nozari. Using vision metrology system for quality control in automotive industries. International Society for Photogrammetry and Remote Sensing, vol. XXXIX-B5, pp. 33-37, 2012. DOI: 10.5194/isprsarchives-XXXIX-B5-33-2012.
[4]
Y. F. Qu, Z. B. Pu, G. D. Liu. Combination of a vision system and a coordinate measuring machine for rapid coordinate metrology. In Proceedings of SPIE 4927, Optical Design and Testing, SPIE, Shanghai, China, pp. 581-585, 2002. DOI: 10.1117/12.471672.
[5]
V. Carbone, M. Carocci, E. Savio, G. Sansoni, L. De Chiffre. Combination of a vision system and a coordinate measuring machine for the reverse engineering of freeform surfaces. The International Journal of Advanced Manufacturing Technology, vol. 17, no. 4, pp. 263-271, 2001. DOI: 10.1007/s001700170179.
[6]
M. Rak, A. Woźniak. Systematic errors of measurements on a measuring arm equipped with a laser scanner on the results of optical measurements. Advanced Mechatronics Solutions, R. Jabłoński, T. Brezina, Eds., Cham, Switzerland: Springer, pp. 355-360, 2016. DOI: 10.1007/978-3-319-23923-1_54.
[7]
S. Martínez, E. Cuesta, J. Barreiro, B. Álvarez. Analysis of laser scanning and strategies for dimensional and geometrical control. The International Journal of Advanced Manufacturing Technology, vol. 46, no. 5-8, pp. 621-629, 2010. DOI: 10.1007/s00170-009-2106-8.
[8]
Y. Xie, X. D. Yang, Z. Liu, S. N. Ren, K. Chen. Method for visual localization of oil and gas wellhead based on distance function of projected features. International Journal of Automation and Computing, vol. 14, no. 2, pp. 147-158, 2017. DOI: 10.1007/s11633-017-1063-1.
[9]
T. H. Sun, C. C. Tseng, M. S. Chen. Electric contacts inspection using machine vision. Image and Vision Computing, vol. 28, no. 6, pp. 890-901, 2010. DOI: 10.1016/j.imavis.2009.11.006.
[10]
K. T. Maddala, R. H. Moss, W. V. Stoecker, J. R. Hagerty, J. G. Cole, N. K. Mishra, R. J. Stanley. Adaptable ring for vision-based measurements and shape analysis. IEEE Transactions on Instrumentation and Measurement, vol. 66, no. 4, pp. 746-756, 2017. DOI: 10.1109/TIM.2017.2650738.
[11]
C. X. Jian, J. Gao, Y. H. Ao. Automatic surface defect detection for mobile phone screen glass based on machine vision. Applied Soft Computing, vol. 52, pp. 348-358, 2017. DOI: 10.1016/j.asoc.2016.10.030.
[12]
S. Ghidoni, M. Finotto, E. Menegatti. Automatic color inspection for colored wires in electric cables. IEEE Transactions on Automation Science and Engineering, vol. 12, no. 2, pp. 596-607, 2015. DOI: 10.1109/TASE.2014.2360233.
[13]
M. Ulrich, C. Steger, A. Baumgartner. Real-time object recognition using a modified generalized Hough transform. Pattern Recognition, vol. 36, no. 11, pp. 2557-2570, 2003. DOI: 10.1016/S0031-3203(03)00169-9.
[14]
C. Steger. Similarity measures for occlusion, clutter, and illumination invariant object recognition. Joint Pattern Recognition Symposium, B. Radig, S. Florczyk, Eds., Berlin Heidelberg, Germany: Springer, pp. 148-154, 2001. DOI: 10.1007/3-540-45404-7_20.
[15]
A. Uchida, Y. Ito, K. Nakano. Fast and accurate template matching using pixel rearrangement on the GPU. In Proceedings of the 2nd International Conference on Networking and Computing, IEEE, Osaka, Japan, pp. 153-159, 2011. DOI: 10.1109/ICNC.2011.30.
[16]
B. K. Choudhary, N. K. Sinha, P. Shanker. Pyramid method in image processing. Journal of Information Systems and Communication, vol. 3, no. 1, pp. 269-273, 2012.
[17]
M. Gharavi-Alkhansari. A fast globally optimal algorithm for template matching using low-resolution pruning. IEEE Transactions on Image Processing, vol. 10, no. 4, pp. 526-533, 2001. DOI: 10.1109/83.913587.
[18]
N. A. Jalil, A. S. H. Basari, S. Salam, N. K. Ibrahim, M. A. Norasikin. The utilization of template matching method for license plate recognition: A case study in Malaysia. Advanced Computer and Communication Engineering Technology, Cham, Switzerland: Springer, pp. 1081-1090, 2015. DOI: 10.1007/978-3-319-07674-4_100.
[19]
L. Zhang, M. H. Tong, T. K. Marks, H. Shan, G. W. Cottrell. SUN: A Bayesian framework for saliency using natural statistics. Journal of Vision, vol. 8, no. 7, Article number 32, 2008. DOI: 10.1167/8.7.32.
[20]
Y. K. Luo, P. Wang, W. Y. Li, X. P. Shang, H. Qiao. Salient object detection based on boundary contrast with regularized manifold ranking. In Proceedings of the 12th World Congress on Intelligent Control and Automation, IEEE, Guilin, China, pp. 2074-2079, 2016. DOI: 10.1109/WCICA.2016.7578649.
[21]
M. M. Cheng, N. J. Mitra, X. L. Huang, P. H. S. Torr, S. M. Hu. Global contrast based salient region detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 569-582, 2015. DOI: 10.1109/TPAMI.2014.2345401.
[22]
P. Dollár, C. L. Zitnick. Fast edge detection using structured forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 8, pp. 1558-1570, 2015. DOI: 10.1109/TPAMI.2014.2377715.
[23]
J. B. Wu, Z. P. Yin, Y. L. Xiong. The fast multilevel fuzzy edge detection of blurry images. IEEE Signal Processing Letters, vol. 14, no. 5, pp. 344-347, 2007. DOI: 10.1109/LSP.2006.888087.
[24]
S. C. Huang, W. C. Chen. A new hardware-efficient algorithm and reconfigurable architecture for image contrast enhancement. IEEE Transactions on Image Processing, vol. 23, no. 10, pp. 4426-4437, 2014. DOI: 10.1109/TIP.2014.2348869.
[25]
C. Steger. Analytical and empirical performance evaluation of subpixel line and edge detection. In Proceedings of Empirical Evaluation Methods in Computer Vision, IEEE, Los Alamitos, USA, pp. 1-23, 1998.
[26]
R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274-2282, 2012. DOI: 10.1109/TPAMI.2012.120.