Tool Sorting Algorithm Using Faster R-CNN and Haar Classifiers

Robinson Jiménez-Moreno (1), Paula Useche (2), Javier O. Pinzón-Arenas (3)
(1), (2), (3) Mechatronics Engineering Department, Universidad Militar Nueva Granada, Bogotá D.C., 110111, Colombia
How to cite (IJASEIT):
Jiménez-Moreno, Robinson, et al. “Tool Sorting Algorithm Using Faster R-CNN and Haar Classifiers”. International Journal on Advanced Science, Engineering and Information Technology, vol. 11, no. 6, Dec. 2021, pp. 2445-51, doi:10.18517/ijaseit.11.6.12597.
This paper presents an algorithm for sorting up to five different tools, based on deep learning, specifically a convolutional neural network, which represents the state of the art in pattern recognition, and compares it against a Haar classifier for object recognition tasks. A Faster R-CNN is used to detect and classify tools placed randomly on a table, while a Haar classifier detects an additional tool delivered by the user. The Faster R-CNN recognizes which tools are present on the table and where they are located in physical space. The Haar classifier detects and tracks, in real time, a tool held in the user's hand so that it can be sorted on the table together with the other elements. Both the training of the convolutional network and the design of the Haar classifier are described. The algorithm detects and classifies the tools found on the table, arranges them side by side, and then waits for the user to deliver one of the five tools missing from the table, takes it from the user's hand, and places it at the end of the row of objects. The Faster R-CNN achieved an accuracy of 70.8% and the Haar classifier a recognition rate of 96%, successfully sorting the five tools in a physical environment. A comparison of average processing times shows that the Haar classifier has a lower computational cost.
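As a rough illustration of the two-stage pipeline described in the abstract (this is not the authors' implementation, which is not reproduced on this page), the following Python sketch combines a generic pre-trained Faster R-CNN detector with an OpenCV Haar cascade: the detector locates the tools already on the table, and the cascade scans each frame for the tool presented by the user. The model weights, cascade file name, test image, and score threshold are all placeholders chosen for the example.

# Hypothetical sketch of the pipeline: Faster R-CNN for the tools on the
# table, a Haar cascade for the tool handed over by the user.
# Weights, cascade XML, and file names below are placeholders.
import cv2
import torch
import torchvision

# A COCO-pretrained Faster R-CNN stands in for the tool-specific detector.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

# Haar cascade assumed to be trained for the hand-delivered tool.
cascade = cv2.CascadeClassifier("tool_cascade.xml")

def detect_table_tools(frame_bgr, score_threshold=0.7):
    """Return [x1, y1, x2, y2] boxes for tools detected on the table."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = detector([tensor])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep].int().tolist()

def track_delivered_tool(frame_bgr):
    """Return (x, y, w, h) rectangles for the tool in the user's hand."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if __name__ == "__main__":
    frame = cv2.imread("table_scene.jpg")  # placeholder test image
    print("Tools on table:", detect_table_tools(frame))
    print("Tool in hand:", track_delivered_tool(frame))

In a real sorting loop, the Haar detection would run on every camera frame to follow the tool in the user's hand, while the Faster R-CNN pass would only be repeated when the arrangement on the table changes, which is consistent with the lower computational cost reported for the Haar classifier.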

