Position Data Estimation System Based on Recognized Field Landmark Using Deep Neural Network for ERSOW Soccer Robot

Iwan Kurnianto Wibowo, Mochamad Mobed Bachtiar, Erna Alfi Nurrohmah, Vega Kurnia Garindra Wardhana
Department of Informatics and Computer Engineering, Politeknik Elektronika Negeri Surabaya, Surabaya, 60111, Indonesia
How to cite (IJASEIT) :
Wibowo, Iwan Kurnianto, et al. “Position Data Estimation System Based on Recognized Field Landmark Using Deep Neural Network for ERSOW Soccer Robot”. International Journal on Advanced Science, Engineering and Information Technology, vol. 13, no. 3, June 2023, pp. 961-8, doi:10.18517/ijaseit.13.3.18376.
One of the problems faced by soccer robots is determining the position of the robot itself and of other robots on the field. A simple way to estimate the robot's position is the odometry method. However, odometry accumulates position errors, which degrades the accuracy of a moving robot's absolute position and orientation estimates. This paper presents a robot position estimation system implemented on the ERSOW wheeled soccer robot. The robot determines its position from a unique landmark: the L-shaped line intersections on the soccer field. A deep neural network is used to recognize the L-shaped landmarks. The vision system and deep-learning inference run on the Robot Operating System (ROS) platform. After obtaining the robot's distance to an L-shaped landmark, the robot's orientation and position relative to the field can be accurately determined from the omnidirectional camera's perception. The resulting position estimates can be used to correct the accumulated errors of the odometry method. In an L-shaped landmark recognition test on a dataset of 641 images, the validation accuracy was 0.806. In position tests, the vision-based estimate's largest error was about 2.32 cm in x and about 1.99 cm in y.
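The core geometric idea in the abstract, recovering the robot's field position from the camera-measured distance and bearing to a recognized landmark with known field coordinates, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the landmark coordinates, function names, and the assumption that the robot heading is available (e.g., from odometry or an IMU) are all hypothetical.

```python
import math

# Hypothetical field coordinates (cm) of two L-shaped landmarks; the real
# ERSOW field map is not given in the abstract.
LANDMARKS = {
    "L_corner_1": (450.0, 300.0),
    "L_corner_2": (450.0, -300.0),
}

def estimate_position(landmark_id, distance_cm, bearing_rad, heading_rad):
    """Estimate robot (x, y) on the field from one recognized landmark.

    distance_cm : camera-estimated distance to the landmark
    bearing_rad : landmark bearing in the robot frame (omnidirectional camera)
    heading_rad : robot heading in the field frame (e.g., odometry/IMU)
    """
    lx, ly = LANDMARKS[landmark_id]
    # Direction from the robot toward the landmark, in the field frame
    angle = heading_rad + bearing_rad
    # Robot position = landmark position minus the offset toward it
    rx = lx - distance_cm * math.cos(angle)
    ry = ly - distance_cm * math.sin(angle)
    return rx, ry
```

In a fuller system, an estimate like this would be fused with odometry (for example, in a Kalman filter, as several works on pose estimation do) so that the vision measurement periodically corrects the drift that odometry accumulates.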

