Vision-based Object’s Shape Determination for Robot Alignment

Farah Adiba Azman, Mohd Razali Daud, Amir Izzani Mohamed, Addie Irawan, R. M. Taufika R. Ismail


This study presents a vision-based solution to a peg-in-hole problem faced by a forklift-like robot used to transport copper wire spools, which are arranged side by side on a rack, to a specified place. The copper wire spool (a cylindrical object on which the copper wire is wound, with a rim at each end) is held by three cylindrical shafts: one shaft is inserted through the center hole of the spool, while the other two support the spool from below. The aim of developing the vision-based system is to enable the robot to pick up the spool autonomously. For the center shaft to be inserted smoothly through the center hole of the spool, the center point of the spool must lie on the center line of the camera Field of View (FOV). The problem addressed in this study is how to determine whether the center point of the spool coincides with the center line of the camera FOV. First, a circle with the same radius as the spool’s rim is drawn at the center of the camera frame on the screen; the spool’s front rim is then tracked until it overlaps this circle, confirming that the spool lies on the center line of the camera FOV. The scope of this paper is limited to copper wire spool detection, and the confirmation of the front-rim overlap condition is based on real-time video processing. The proposed system applies binarization, morphological operations, edge detection, and the Circular Hough Transform (CHT) to images sampled from the real-time video stream. A Logitech Webcam C270, a low-cost HD camera, is used. The webcam is interfaced with MATLAB R2016a, and all computation, programming, and processing in this project are carried out in MATLAB. Several experiments were carried out, and the results show that the system is able to track the spool and determine the correct position of the robot to pick it up.
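As a rough illustration of the pipeline described above (binarization, morphological filtering, edge detection, and CHT), the following MATLAB sketch uses the Image Processing Toolbox functions imbinarize, imopen, edge, and imfindcircles together with the USB webcam support package. It is not the authors' published code; the rim radius range, CHT sensitivity, and the pixel alignment tolerance are assumed values chosen only for illustration.

% Illustrative sketch only (not the authors' code). Assumes MATLAB R2016a
% with the Image Processing Toolbox and the MATLAB Support Package for
% USB Webcams. Radius range, sensitivity, and the 5-pixel tolerance are
% assumptions, not values taken from the paper.
cam   = webcam;                        % connect to the first USB webcam (e.g. Logitech C270)
frame = snapshot(cam);                 % sample one frame from the live video

gray  = rgb2gray(frame);               % convert the RGB frame to grayscale
bw    = imbinarize(gray);              % binarization with a global (Otsu) threshold
bw    = imopen(bw, strel('disk', 3));  % morphological opening to suppress small noise
edges = edge(bw, 'canny');             % Canny edge map of the cleaned binary image

% Circular Hough Transform: search for the spool rim within an assumed
% radius range (in pixels).
[centers, radii] = imfindcircles(edges, [40 120], 'Sensitivity', 0.90);

if ~isempty(centers)
    % Compare the detected rim center with the center of the camera frame.
    frameCenter = [size(frame, 2) / 2, size(frame, 1) / 2];
    offset      = centers(1, :) - frameCenter;
    if norm(offset) < 5                % assumed tolerance: within 5 pixels of the FOV center line
        disp('Spool center is on the camera FOV center line.');
    end
    imshow(frame);                     % show the frame and overlay the detected rim
    viscircles(centers(1, :), radii(1));
end

clear cam;                             % release the webcam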


Circle Detection; Copper Wire Spool; Image Processing; Vision-based System


This work is licensed under a Creative Commons Attribution 3.0 License.

ISSN: 2180-1843

eISSN: 2289-8131