Automated Test Cases and Test Data Generation for Dynamic Structural Testing in Automatic Programming Assessment Using MC/DC

Rohaida Romli, Shahadath Sarker, Mazni Omar, Musyrifah Mahmod
School of Computing, College of Arts and Sciences, Universiti Utara Malaysia, Kedah, Malaysia
How to cite (IJASEIT):
Romli, Rohaida, et al. “Automated Test Cases and Test Data Generation for Dynamic Structural Testing in Automatic Programming Assessment Using MC/DC”. International Journal on Advanced Science, Engineering and Information Technology, vol. 10, no. 1, Feb. 2020, pp. 120-127, doi:10.18517/ijaseit.10.1.10166.
Automatic Programming Assessment (APA) is a method for assisting educators in automatically assessing and grading students' programming exercises and assignments. To carry out dynamic testing in APA, an adequate set of test data must be supplied through a systematic test data generation process. Although software testing research has proposed many effective methods for automated test data generation, recent APA studies have rarely adopted them. The few studies that have addressed this gap still pay insufficient attention to generating test sets and test data that cover dynamic-structural testing thoroughly. We therefore propose DyStruc-TDG, a method that applies Modified Condition/Decision Coverage (MC/DC) criteria to support more thorough automated test data generation for dynamic-structural testing in APA. In this paper, we describe how DyStruc-TDG derives and generates test cases and test data, and we verify the method against the reliability criterion (positive testing) of test data adequacy in programming assessments. DyStruc-TDG helps educators of introductory programming courses derive and generate test cases and test data through APA without requiring expertise in designing test cases for structural testing. It can therefore reduce educators' workload substantially, as manual assessment is prone to errors and to inconsistent marking and grading.
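To make the coverage criterion concrete: MC/DC requires that each atomic condition in a decision be shown to independently affect the decision's outcome, demonstrated by a pair of test cases that differ only in that condition yet produce different outcomes. The Python sketch below is not the DyStruc-TDG implementation described in the paper; it is a minimal, hypothetical illustration that brute-forces such an independence pair for each condition of an example decision, (A and B) or C.

    from itertools import product

    # Hypothetical decision under test: (A and B) or C.
    def decision(a, b, c):
        return (a and b) or c

    CONDITIONS = ["A", "B", "C"]

    def mcdc_pairs(dec, n=len(CONDITIONS)):
        # For each condition, search all input combinations for an
        # "independence pair": two test cases identical except in that one
        # condition, whose decision outcomes differ. Such a pair shows that
        # the condition independently affects the decision, which is the
        # core MC/DC requirement.
        pairs = {}
        for i, name in enumerate(CONDITIONS):
            for row in product([False, True], repeat=n):
                flipped = row[:i] + (not row[i],) + row[i + 1:]
                if dec(*row) != dec(*flipped):
                    pairs[name] = (row, flipped)
                    break
        return pairs

    for name, (t1, t2) in mcdc_pairs(decision).items():
        print(f"{name}: {t1} -> {decision(*t1)}, {t2} -> {decision(*t2)}")

Merging such pairs yields an MC/DC-adequate test set that, when chosen minimally, needs only N + 1 test cases for N conditions (four here, rather than all eight input combinations); the greedy search above does not minimise, it only demonstrates the independence requirement that the generated test data must satisfy.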

References

R. Romli, S. Sulaiman, K. Z. Zamli, “Automatic programming assessment and test data generation: a review on its approaches”, Proceedings of the International Symposium on Information Technology (ITSim), vol. 3, 2010, pp. 1186-1192.

D. Jackson, “A Software System for Grading Student Computer Programs”, Computers and Education, 27 (3-4), pp. 171-180, 1996.

R. Saikkonen, L. Malmi, A. Korhonen, “Fully Automatic Assessment of Programming Exercises”, ACM SIGCSE Bulletin, 33 (3), 2001, pp. 133-136.

D. Jackson, M. Ushe, “Grading student programs using ASSYST”, Proceedings of the 28th SIGCSE Technical Symposium on Computer Science Education, San Jose, CA, 1997, pp. 335-339.

M. Luck, M. S. Joy, “A secure on-line submission system”, Software - Practice and Experience, 29 (8), pp. 721-740, 1999.

L. Malmi, V. Karavirta, A. Korhonen, J. Nikander, O. Seppälä, P. Silvasti, “Visual Algorithm Simulation Exercise System with Automatic Assessment: TRAKLA2”, Informatics in Education, 3 (2), pp. 267-288, 2004.

M. Choy, U. Nazir, C. K. Poon, Y. Y. Yu, “Experiences in Using an Automated System for Improving Students' Learning of Computer Programming”, Advances in Web-Based Learning - ICWL 2005, Lecture Notes in Computer Science, Vol. 3583, 2005, pp. 267-272.

T. Tang, R. Smith, J. Warren, S. Rixner, “Data-Driven Test Case Generation for Automated Programming Assessment”, Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE '16), 2016, pp. 260-265.

H. Fangohr, N. O'Brien, A. Prabhakar, A. Kashyap, “Teaching Python Programming with Automatic Assessment and Feedback Provision”, arXiv:1509.03556 [cs.CY], 2015, pp. 1-26.

S. Monpratarnchai, S. Fujiwara, A. Katayama, T. Uehara, “Automated Testing for Java Programs using JPF-based Test Case Generation”, ACM SIGSOFT Software Engineering Notes, 39 (1), 2014, pp. 1-5.

L. A. Clarke, “A system to generate test data and symbolically execute programs”, IEEE Transactions on Software Engineering, SE-2 (3), pp. 215-222, 1976.

N. Gupta, A. P. Mathur, M. L. Soffa, “Automated Test Data Generation Using an Iterative Relaxation Method”, ACM SIGSOFT Software Engineering Notes, 23 (6), pp. 231-245, 1998.

J. Offutt, S. Liu, A. Abdurazik, P. Ammann, “Generating Test Data from State-Based Specifications”, Software Testing, Verification and Reliability, Vol. 13, pp. 25-53, 2003.

K. Z. Zamli, A. M. Isa, M. F. J. Klaib, S. N. Azizan, “Tool for Automated Test Data Generation (and Execution) Based on Combinatorial Approach”, International Journal of Software Engineering and Its Applications, 1 (1), pp. 19-36, 2007.

W. Zidoune, T. Benouhiba, “Targeted adequacy criteria for search-based test data generation”, International Conference on Information Technology and E-Services, 2012, pp. 1-6.

R. P. Pargas, M. J. Harrold, R. Peck, “Test-Data Generation Using Genetic Algorithms”, Journal of Software Testing, Verification and Reliability, 9 (4), pp. 263-282, 1999.

P. Ihantola, “Test Data Generation for Programming Exercises with Symbolic Execution in Java PathFinder”, Proceedings of the 6th Baltic Sea Conference on Computing Education Research (Koli Calling 2006), 2006, pp. 87-94.

N. Tillmann, J. de Halleux, T. Xie, S. Gulwani, J. Bishop, “Teaching and Learning Programming and Software Engineering via Interactive Gaming”, Proceedings of the 2013 International Conference on Software Engineering (ICSE '13), San Francisco, CA, USA, 2013, pp. 1117-1126.

R. Romli, “Test Data Generation Framework for Automatic Programming Assessment”, PhD Thesis, Universiti Sains Malaysia, Malaysia, 2014.

K. J. Hayhurst, D. S. Veerhusen, J. J. Chilenski, L. K. Rierson, “A practical tutorial on modified condition/decision coverage”, NASA STI Report Series, 2001.

H. Zhu, P. A. V. Hall, J. H. R. May, “Software Unit Test Coverage and Adequacy”, ACM Computing Surveys, 29 (4), pp. 365-427, 1997.

H. Zhu, “Axiomatic Assessment of Control Flow-based Software Test Adequacy Criteria”, Software Engineering Journal, 10 (5), pp. 194-204, 1995.

K. Ghani, J. A. Clark, “Automatic Test Data Generation for Multiple Condition and MCDC Coverage”, Proceedings of the 2009 Fourth International Conference on Software Engineering Advances, 2009, pp. 152-157.

C. Douce, D. Livingstone, J. Orwell, “Automatic test-based assessment of programming: A review”, Journal on Educational Resources in Computing (JERIC), 5 (3), Article No. 4, 2005.

P. Ihantola, T. Ahoniemi, V. Karavirta, O. Seppälä, “Review of recent systems for automatic assessment of programming assignments”, Proceedings of the 10th Koli Calling International Conference on Computing Education Research, 2010, pp. 86-93.

Y. Liang, Q. Liu, J. Xu, D. Wang, “The recent development of automated programming assessment”, Proceedings of the International Conference on Computational Intelligence and Software Engineering (CiSE 2009), 2009, pp. 1-5.

K. A. Rahman, M. J. Nordin, “A review on the static analysis approach in the automated programming assessment systems”, Proceedings of the National Conference on Programming, Vol. 7, 2007.

IPL Information Processing Ltd., Designing Unit Test Cases, 1997. Available: http://www.ipl.com/pdf/p0829.pdf. Retrieved on: 10 Feb. 2009.

J. Watkins, S. Mills, Testing IT: An Off-the-Shelf Software Testing Process, 2nd Edition, 2011, Cambridge University Press, NY, USA.

S. Rayadurgam, M. P. E. Heimdahl, “Generating MC/DC Adequate Test Sequences Through Model Checking”, Proceedings of the 28th Annual IEEE/NASA Software Engineering Workshop (SEW-03), Greenbelt, Maryland, 2003, pp. 1-5.

M. Pezzè, M. Young, Software Testing and Analysis: Process, Principles, and Techniques, 2008, John Wiley & Sons, Inc., USA.

J. R. Fraenkel, N. E. Wallen, How to Design and Evaluate Research in Education, 4th Edition, 2000, McGraw-Hill Companies, Inc., USA.
