International Journal on Advanced Science, Engineering and Information Technology, Vol. 11 (2021) No. 4, pages: 1373-1378, DOI:10.18517/ijaseit.11.4.12694
Visual Commands for Control of Food Assistance Robot
Javier O. Pinzón-Arenas, Robinson Jimenez-Moreno

Abstract
Assistance robots improve people's quality of life in residential and office tasks, especially for people with physical limitations. For the elderly or for people with upper-limb motor disabilities, an assistance robot for food support is necessary. This development is based on a mixed environment, in which a real and a virtual environment work interactively. A camera located 60 cm in front of the user provides a visual range wide enough to capture the hand gestures used as commands. Pattern recognition based on a deep learning algorithm, using convolutional neural networks, identifies the user's hand gestures. This work presents the training of the network and the results of executing the robot commands. A virtual environment is presented in which a robotic arm with a spoon-like end effector operates under a machine vision system that recognizes eight different types of commands for the robot; a Faster R-CNN network is trained on a database of 640 images, achieving a system performance of over 95%. The average time to execute one cycle, from detecting and identifying the command gesture to moving the robot toward the food and returning in front of the user, is 21 seconds, making the development useful for real-time applications.
Keywords:
Convolutional neural network; Faster R-CNN; assistance robot; virtual environment.
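To make the detection pipeline described in the abstract concrete, the following is a minimal sketch (not the authors' code) of how a Faster R-CNN detector could be fine-tuned for the eight gesture commands and queried frame by frame. It assumes torchvision's Faster R-CNN implementation; the class count, confidence threshold, and helper names are illustrative assumptions, not details from the paper.

    # Illustrative sketch only: fine-tune torchvision's Faster R-CNN for
    # eight hand-gesture command classes and run single-frame inference.
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    NUM_CLASSES = 1 + 8  # background + eight gesture commands (assumed labeling)

    def build_gesture_detector():
        # Start from a COCO-pretrained Faster R-CNN backbone.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        # Swap the box-predictor head for the eight gesture classes.
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
        return model

    @torch.no_grad()
    def detect_command(model, frame, threshold=0.9):
        """Return the label of the highest-scoring gesture detection in one
        camera frame (HxWx3 uint8 RGB array), or None if no detection clears
        the confidence threshold. Threshold value is an assumption."""
        model.eval()
        image = torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0
        output = model([image])[0]  # detections are sorted by score
        if len(output["scores"]) == 0 or output["scores"][0] < threshold:
            return None
        return int(output["labels"][0])

In a command loop like the one the abstract describes, detect_command would be called on each frame from the camera placed 60 cm in front of the user, and the returned label would be mapped to one of the eight robot commands before the arm executes its pick-and-return cycle.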