Cite Article

Visual Commands for Control of Food Assistance Robot


BibTeX

@article{IJASEIT12694,
   author = {Javier O. Pinzón-Arenas and Robinson Jimenez-Moreno},
   title = {Visual Commands for Control of Food Assistance Robot},
   journal = {International Journal on Advanced Science, Engineering and Information Technology},
   volume = {11},
   number = {4},
   year = {2021},
   pages = {1373--1378},
   keywords = {Convolutional neural network; faster R-CNN; assistance robot; virtual environment.},
   abstract = {Assistance robots improve people's quality of life in residential and office tasks, especially for people with physical limitations. In the case of the elderly or people with upper limb motor disabilities, an assistance robot for food support is necessary. This development is based on a mixed environment, a real and a virtual environment working interactively. A camera located in front of the user, at a distance of 60 cm, is used so that it has an excellent visual range to capture the user's hand gestures for the commands. Pattern recognition based on a deep learning algorithm is performed with convolutional neural networks to identify the user's hand gestures. This work presents the network's training and the results of the robot commands' execution. A virtual environment is presented in which a robotic arm with a spoon-like effector is used in a machine vision system that allows eight different types of commands to be recognized for the robot by training a faster R-CNN network, for which a database of 640 images is used, achieving system performance of over 95%. The average time to execute a cycle, from detecting and identifying the command gesture to moving the robot towards the food and returning in front of the user, is 21 seconds, making the development useful for real-time applications.},
   issn = {2088-5334},
   publisher = {INSIGHT - Indonesian Society for Knowledge and Human Development},
   url = {http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=12694},
   doi = {10.18517/ijaseit.11.4.12694}
}

EndNote

%0 Journal Article
%A Pinzón-Arenas, Javier O.
%A Jimenez-Moreno, Robinson
%D 2021
%T Visual Commands for Control of Food Assistance Robot
%! Visual Commands for Control of Food Assistance Robot
%K Convolutional neural network; faster R-CNN; assistance robot; virtual environment.
%X Assistance robots improve people's quality of life in residential and office tasks, especially for people with physical limitations. In the case of the elderly or people with upper limb motor disabilities, an assistance robot for food support is necessary. This development is based on a mixed environment, a real and a virtual environment working interactively. A camera located in front of the user, at a distance of 60 cm, is used so that it has an excellent visual range to capture the user's hand gestures for the commands. Pattern recognition based on a deep learning algorithm is performed with convolutional neural networks to identify the user's hand gestures. This work presents the network's training and the results of the robot commands' execution. A virtual environment is presented in which a robotic arm with a spoon-like effector is used in a machine vision system that allows eight different types of commands to be recognized for the robot by training a faster R-CNN network, for which a database of 640 images is used, achieving system performance of over 95%. The average time to execute a cycle, from detecting and identifying the command gesture to moving the robot towards the food and returning in front of the user, is 21 seconds, making the development useful for real-time applications.
%U http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=12694
%R 10.18517/ijaseit.11.4.12694
%J International Journal on Advanced Science, Engineering and Information Technology
%V 11
%N 4
%@ 2088-5334
%P 1373-1378

IEEE

Javier O. Pinzón-Arenas and Robinson Jimenez-Moreno, "Visual Commands for Control of Food Assistance Robot," International Journal on Advanced Science, Engineering and Information Technology, vol. 11, no. 4, pp. 1373-1378, 2021. [Online]. Available: http://dx.doi.org/10.18517/ijaseit.11.4.12694.

RefMan/ProCite (RIS)

TY  - JOUR
AU  - Pinzón-Arenas, Javier O.
AU  - Jimenez-Moreno, Robinson
PY  - 2021
TI  - Visual Commands for Control of Food Assistance Robot
JF  - International Journal on Advanced Science, Engineering and Information Technology
VL  - 11
IS  - 4
Y2  - 2021
SP  - 1373
EP  - 1378
SN  - 2088-5334
PB  - INSIGHT - Indonesian Society for Knowledge and Human Development
KW  - Convolutional neural network; faster R-CNN; assistance robot; virtual environment.
N2  - Assistance robots improve people's quality of life in residential and office tasks, especially for people with physical limitations. In the case of the elderly or people with upper limb motor disabilities, an assistance robot for food support is necessary. This development is based on a mixed environment, a real and a virtual environment working interactively. A camera located in front of the user, at a distance of 60 cm, is used so that it has an excellent visual range to capture the user's hand gestures for the commands. Pattern recognition based on a deep learning algorithm is performed with convolutional neural networks to identify the user's hand gestures. This work presents the network's training and the results of the robot commands' execution. A virtual environment is presented in which a robotic arm with a spoon-like effector is used in a machine vision system that allows eight different types of commands to be recognized for the robot by training a faster R-CNN network, for which a database of 640 images is used, achieving system performance of over 95%. The average time to execute a cycle, from detecting and identifying the command gesture to moving the robot towards the food and returning in front of the user, is 21 seconds, making the development useful for real-time applications.
UR  - http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=12694
DO  - 10.18517/ijaseit.11.4.12694
ER  - 

RefWorks

RT Journal Article
ID 12694
A1 Pinzón-Arenas, Javier O.
A1 Jimenez-Moreno, Robinson
T1 Visual Commands for Control of Food Assistance Robot
JF International Journal on Advanced Science, Engineering and Information Technology
VO 11
IS 4
YR 2021
SP 1373
OP 1378
SN 2088-5334
PB INSIGHT - Indonesian Society for Knowledge and Human Development
K1 Convolutional neural network; faster R-CNN; assistance robot; virtual environment.
AB Assistance robots improve people's quality of life in residential and office tasks, especially for people with physical limitations. In the case of the elderly or people with upper limb motor disabilities, an assistance robot for food support is necessary. This development is based on a mixed environment, a real and a virtual environment working interactively. A camera located in front of the user, at a distance of 60 cm, is used so that it has an excellent visual range to capture the user's hand gestures for the commands. Pattern recognition based on a deep learning algorithm is performed with convolutional neural networks to identify the user's hand gestures. This work presents the network's training and the results of the robot commands' execution. A virtual environment is presented in which a robotic arm with a spoon-like effector is used in a machine vision system that allows eight different types of commands to be recognized for the robot by training a faster R-CNN network, for which a database of 640 images is used, achieving system performance of over 95%. The average time to execute a cycle, from detecting and identifying the command gesture to moving the robot towards the food and returning in front of the user, is 21 seconds, making the development useful for real-time applications.
LK http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=12694
DO 10.18517/ijaseit.11.4.12694