An Efficient and Robust Mobile Augmented Reality Application

Siok Yee Tan, Haslina Arshad, Azizi Abdullah
Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor Darul Ehsan, Malaysia.
How to cite (IJASEIT):
Tan, Siok Yee, et al. “An Efficient and Robust Mobile Augmented Reality Application”. International Journal on Advanced Science, Engineering and Information Technology, vol. 8, no. 4-2, Sept. 2018, pp. 1672-8, doi:10.18517/ijaseit.8.4-2.6810.
Augmented Reality (AR) technology is widely regarded as having evolved from the foundations of Virtual Reality (VR). The ultimate goal of AR is to provide better management of, and ubiquitous access to, information by seamlessly combining the interactive real world with an interactive computer-generated world in one coherent environment. Research in AR has shifted from traditional desktop platforms to mobile devices such as smartphones. However, image recognition on smartphones faces many restrictions and challenges in terms of efficiency and robustness, the two general performance measures of image recognition. Smartphones have limited processing capability compared with the PC platform, so the development process of a mobile AR application and the choice of image recognition algorithms require careful attention. The development of a mobile AR application involves three processes: detection, description and matching, and the algorithm for each must be selected carefully to create an efficient and robust application. In this work, the algorithms used for detection, description and matching are AGAST, FREAK and Hamming distance, respectively. Computation time and robustness to rotation, scale and brightness changes are evaluated on the Mikolajczyk benchmark dataset. The results show that the mobile AR application is efficient, with a computation time of 29.1 ms, and robust, achieving accuracies of 89.76%, 87.71% and 83.87% under scale, rotation and brightness changes, respectively. Hence, the combination of AGAST, FREAK and Hamming distance is suitable for creating an efficient and robust mobile AR application.
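To make the pipeline concrete, the sketch below wires together the OpenCV implementations of these three stages: AGAST detection, FREAK description, and brute-force Hamming-distance matching. It is a minimal illustration under stated assumptions, not the authors' implementation: the function name, file paths and ratio-test threshold are invented for the example, and FREAK ships in the OpenCV contrib modules (opencv-contrib-python).

    # Minimal sketch of the detection / description / matching pipeline:
    # AGAST keypoints + FREAK binary descriptors + Hamming-distance matching.
    # Assumes OpenCV with contrib modules; paths, function name and the
    # ratio threshold are illustrative, not taken from the paper.
    import cv2

    def match_target(target_path, frame_path, ratio=0.8):
        target = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)
        frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)

        # Detection: AGAST (Adaptive and Generic Accelerated Segment Test).
        detector = cv2.AgastFeatureDetector_create()
        kp_target = detector.detect(target, None)
        kp_frame = detector.detect(frame, None)

        # Description: FREAK (Fast Retina Keypoint) binary descriptors.
        freak = cv2.xfeatures2d.FREAK_create()
        kp_target, des_target = freak.compute(target, kp_target)
        kp_frame, des_frame = freak.compute(frame, kp_frame)

        # Matching: brute-force Hamming distance, the natural metric for
        # binary descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        knn_matches = matcher.knnMatch(des_target, des_frame, k=2)

        # Keep only distinctive matches via a ratio test (an illustrative
        # filtering step, not necessarily the paper's exact scheme).
        good = [m for m, n in (p for p in knn_matches if len(p) == 2)
                if m.distance < ratio * n.distance]
        return good

Because FREAK descriptors are binary strings, the Hamming distance reduces to an XOR and a population count per descriptor pair, which is largely why this combination stays within a real-time budget on smartphone-class hardware.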

References

R. T. Azuma, “A Survey of Augmented Reality,” Presence: Teleoperators and Virtual Environments, vol. 6, no. 4, pp. 355-385, 1997.

D. Nincarean, M. B. Alia, N. D. A. Halim, and M. H. A. Rahman, “Mobile Augmented Reality: The Potential for Education,” Procedia - Soc. Behav. Sci., vol. 103, pp. 657-664, 2013.

M. Pu, N. A. A. Majid, and B. Idrus, “Framework based on Mobile Augmented Reality for Translating Food Menu in Thai Language to Malay Language,” Int. J. Adv. Sci. Eng. Inf. Technol., vol. 7, no. 1, pp. 153-159, 2017.

M. J. Sadik and M. C. Lam, “Stereoscopic Vision Mobile Augmented Reality System Architecture in Assembly Tasks,” J. Eng. Appl. Sci., vol. 12, no. 8, pp. 2098-2105, 2017.

H. Arshad, M. C. Lam, W. K. Obeidy, and S. Y. Tan, “An Efficient Cloud based Image Target Recognition SDK for Mobile Applications,” Int. J. Adv. Sci. Eng. Inf. Technol., vol. 7, no. 2, pp. 496-502, 2017.

N. C. Hashim, N. A. A. Majid, H. Arshad, and W. K. Obeidy, “User Satisfaction for an Augmented Reality Application to Support Productive Vocabulary Using Speech Recognition,” Adv. Multimed., 2018.

L. W. Shang, M. H. Zakaria, and I. Ahmad, “Mobile phone augmented reality postcard,” J. Telecommun. Electron. Comput. Eng., vol. 8, no. 2, pp. 135-139, 2016.

H. Arshad, S. A. Chowdhury, L. M. Chun, B. Parhizkar, and W. K. Obeidy, “A freeze-object interaction technique for handheld augmented reality systems,” Multimed. Tools Appl., 2016.

D. Wagner and D. Schmalstieg, “History and Future of Tracking for Mobile Phone Augmented Reality,” in 2009 International Symposium on Ubiquitous Virtual Reality, 2009, pp. 7-10.

W. K. Obeidy, H. Arshad, S. Y. Tan, and H. Rahman, “Developmental Analysis of a Markerless Hybrid Tracking Technique for Mobile Augmented Reality Systems,” in Advances in Visual Informatics, 4th International Visual Informatics Conference, IVIC 2015, 2015, pp. 99-110.

H. Uchiyama and E. Marchand, “Object Detection and Pose Tracking for Augmented Reality: Recent Approaches,” Found. Comput. Vis., pp. 1-8, 2012.

C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” in Proceedings of the Alvey Vision Conference 1988, 1988, pp. 23.1-23.6.

K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1615-1630, 2005.

E. Rosten and T. Drummond, “Fusing points and lines for high performance tracking,” in Proceedings of the IEEE International Conference on Computer Vision, 2005, vol. II, pp. 1508-1515.

D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” Int. J. Comput. Vis., vol. 60, no. 2, pp. 91-110, 2004.

H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Comput. Vis. Image Underst., vol. 110, no. 3, pp. 346-359, 2008.

J. R. Quinlan, “Induction of Decision Trees,” Mach. Learn., vol. 1, no. 1, pp. 81-106, 1986.

H. Zhang, J. Wohlfeil, and D. Grießbach, “Extension and evaluation of the AGAST feature detector,” ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., vol. III-4, pp. 133-137, 2016.

E. Mair, G. D. Hager, D. Burschka, M. Suppa, and G. Hirzinger, “Adaptive and generic corner detection based on the accelerated segment test,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010, vol. 6312 LNCS, no. PART 2, pp. 183-196.

Y. Ke and R. Sukthankar, “PCA-SIFT: A more distinctive representation for local image descriptors,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), vol. 2, pp. 506-513, 2004.

K. Mikolajczyk, A. Zisserman, and C. Schmid, “Shape recognition with edge-based features,” in Proceedings of the British Machine Vision Conference 2003, 2003, pp. 79.1-79.10.

T. Quack, H. Bay, and L. Van Gool, “Object Recognition for the Internet of Things,” First Int. Conf. Internet Things (IoT 2008), vol. 4952, pp. 230-246, 2008.

L. Naimark and E. Foxlin, “Circular data matrix fiducial system and robust image processing for a wearable vision-inertial self-tracker,” in Proceedings - International Symposium on Mixed and Augmented Reality, ISMAR 2002, 2002, pp. 27-36.

M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: Binary robust independent elementary features,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010, vol. 6314 LNCS, no. PART 4, pp. 778-792.

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” in Proceedings of the IEEE International Conference on Computer Vision, 2011, pp. 2564-2571.

S. Leutenegger, M. Chli, and R. Y. Siegwart, “BRISK: Binary Robust invariant scalable keypoints,” in Proceedings of the IEEE International Conference on Computer Vision, 2011, pp. 2548-2555.

A. Alahi, R. Ortiz, and P. Vandergheynst, “FREAK: Fast retina keypoint,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2012, pp. 510-517.

S. Y. Tan, H. Arshad, and A. Abdullah, “Evaluation on Binary Descriptor in Markerless Augmented Reality,” in The 3rd National Doctoral Seminar on Artificial Intelligence Technology, 2014, pp. 1-6.

A. Ufkes and M. Fiala, “A markerless augmented reality system for mobile devices,” in Proceedings - 2013 International Conference on Computer and Robot Vision, CRV 2013, 2013, pp. 226-233.

P. E. Danielsson, “Euclidean distance mapping,” Comput. Graph. Image Process., vol. 14, no. 3, pp. 227-248, 1980.

T. Tian, F. Yang, K. Zheng, and Q. Gao, “A Fast Local Image Descriptor Based on Patch Quantization,” in International Conference on Human Centered Computing, 2017, pp. 64-75.

V. Balntas, K. Lenc, A. Vedaldi, and K. Mikolajczyk, “HPatches: A benchmark and evaluation of handcrafted and learned local descriptors,” in Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, pp. 3852-3861.

D. J. Matuszewski, A. Hast, C. Wählby, and I.-M. Sintorn, “A short feature vector for image matching: The Log-Polar Magnitude feature descriptor,” PLoS One, vol. 12, no. 11, 2017.

W. K. Obeidy, “A Markerless Hybrid Tracking Technique To Improve The Efficiency And Robustness Of Mobile Augmented Reality,” Universiti Kebangsaan Malaysia, 2014.


This work is licensed under a Creative Commons Attribution 4.0 International License.

Authors who publish with this journal agree to the following terms:

    1. Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
    2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
    3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).