Test Adequacy Assessment Using Test-Defect Coverage Analytic Model

Sharifah Mashita Syed-Mohamad, Nur Hafizah Haron, Tom McBride


Software testing is an essential activity in the software development process and is widely used as a means of achieving software reliability and quality. The emergence of incremental development in its various forms has required a different approach to determining the readiness of software for release. This approach needs to determine how reliable the software is likely to be based on planned tests, rather than on the defect growth and decline typically shown in reliability growth models. Combining information from a number of sources into an easily understood dashboard is expected to provide both qualitative and quantitative analyses of test and defect coverage properties. Hence, the Test-Defect Coverage Analytic Model (TDCAM) is proposed, which combines test and defect coverage information in a dashboard to help decide whether enough tests have been planned. A case study was conducted to demonstrate the use of the proposed model. The visual representations and results gained from the case study show the benefits of TDCAM in assisting practitioners in making informed test-adequacy decisions.


Defect Coverage; Iterative and Incremental Development; Software Analytics; Software Testing






This work is licensed under a Creative Commons Attribution 3.0 License.

ISSN: 2180-1843

eISSN: 2289-8131