Cite Article

Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy

Choose citation format

BibTeX

@article{IJASEIT10178,
   author = {Muhammad Arif Shah and Dayang N. A. Jawawi and Mohd Adham Isa and Muhammad Younas and Ahmad Mustafa},
   title = {Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy},
   journal = {International Journal on Advanced Science, Engineering and Information Technology},
   volume = {10},
   number = {2},
   year = {2020},
   pages = {629--634},
   keywords = {estimation by analogy; performance metrics; MMRE; PRED; software development effort.},
   abstract = {Soft computing-based estimation by analogy is an attractive research domain for the software engineering community, and a considerable number of models have been proposed in this area. Researchers therefore need to compare these models in order to identify the best one for software development effort estimation. This study shows that most prior work uses the mean magnitude of relative error (MMRE) and the percentage of prediction (PRED) to compare estimation models, even though well-known authors have repeatedly criticized such accuracy statistics. MMRE in particular has been shown to be an unbalanced, biased, and therefore inappropriate measure for identifying the best among competing estimation models. Nevertheless, researchers continue to include MMRE and PRED in their evaluation criteria, usually justifying the choice only as being "widely used," which is not a valid reason. This study finds that researchers keep adopting these measures because no practical replacement for MMRE and PRED has been provided so far. In this paper, the approach of partitioning a large dataset into subsamples was evaluated using an estimation by analogy (EBA) model. One small and one large dataset, Desharnais and ISBSG Release 11, were considered; ISBSG is large relative to Desharnais and was partitioned into subsamples. The results suggest that when a large dataset is partitioned, MMRE produces the same, or nearly the same, results as it does for the small dataset. This indicates that MMRE can be trusted as a performance metric if large datasets are partitioned into subsamples.},
   issn = {2088-5334},
   publisher = {INSIGHT - Indonesian Society for Knowledge and Human Development},
   url = {http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=10178},
   doi = {10.18517/ijaseit.10.2.10178}
}
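For readers unfamiliar with the accuracy statistics named in the abstract, the sketch below shows, under stated assumptions, how MMRE and PRED are typically computed and how a large dataset can be split into subsamples before evaluation. The function names, the PRED threshold of 0.25, the choice of five subsamples, and the synthetic effort values are illustrative assumptions; this is not the authors' implementation or the EBA model from the paper.

# Minimal sketch of MMRE, PRED, and subsample-partitioned evaluation.
# Assumptions: actual efforts are strictly positive; data here is synthetic.
import random
from typing import List, Sequence


def mre(actual: float, predicted: float) -> float:
    """Magnitude of relative error for a single project."""
    return abs(actual - predicted) / actual


def mmre(actuals: Sequence[float], predictions: Sequence[float]) -> float:
    """Mean magnitude of relative error over all projects."""
    errors = [mre(a, p) for a, p in zip(actuals, predictions)]
    return sum(errors) / len(errors)


def pred(actuals: Sequence[float], predictions: Sequence[float], level: float = 0.25) -> float:
    """PRED(l): fraction of projects whose MRE is at most `level` (e.g. 0.25)."""
    hits = sum(1 for a, p in zip(actuals, predictions) if mre(a, p) <= level)
    return hits / len(actuals)


def partition(indices: List[int], k: int, seed: int = 0) -> List[List[int]]:
    """Split a large dataset's row indices into k roughly equal subsamples."""
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    return [shuffled[i::k] for i in range(k)]


def mmre_per_subsample(actuals: Sequence[float], predictions: Sequence[float], k: int = 5) -> List[float]:
    """Evaluate MMRE separately on each subsample of a large dataset."""
    results = []
    for part in partition(list(range(len(actuals))), k):
        a = [actuals[i] for i in part]
        p = [predictions[i] for i in part]
        results.append(mmre(a, p))
    return results


if __name__ == "__main__":
    # Synthetic effort values standing in for a large dataset such as ISBSG.
    rng = random.Random(42)
    actual_effort = [rng.uniform(100, 5000) for _ in range(500)]
    estimated_effort = [a * rng.uniform(0.7, 1.3) for a in actual_effort]

    print("MMRE (whole dataset):", round(mmre(actual_effort, estimated_effort), 3))
    print("PRED(25):", round(pred(actual_effort, estimated_effort), 3))
    print("MMRE per subsample:", [round(m, 3) for m in mmre_per_subsample(actual_effort, estimated_effort)])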

EndNote

%A Shah, Muhammad Arif
%A Jawawi, Dayang N. A.
%A Isa, Mohd Adham
%A Younas, Muhammad
%A Mustafa, Ahmad
%D 2020
%T Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy
%! Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy
%K estimation by analogy; performance metrics; MMRE; PRED; software development effort.
%U http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=10178
%R doi:10.18517/ijaseit.10.2.10178
%J International Journal on Advanced Science, Engineering and Information Technology
%V 10
%N 2
%@ 2088-5334

IEEE

Muhammad Arif Shah, Dayang N. A. Jawawi, Mohd Adham Isa, Muhammad Younas, and Ahmad Mustafa, "Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy," International Journal on Advanced Science, Engineering and Information Technology, vol. 10, no. 2, pp. 629-634, 2020. [Online]. Available: http://dx.doi.org/10.18517/ijaseit.10.2.10178.

RefMan/ProCite (RIS)

TY  - JOUR
AU  - Shah, Muhammad Arif
AU  - Jawawi, Dayang N. A.
AU  - Isa, Mohd Adham
AU  - Younas, Muhammad
AU  - Mustafa, Ahmad
PY  - 2020
TI  - Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy
JF  - International Journal on Advanced Science, Engineering and Information Technology
VL  - 10
IS  - 2
Y2  - 2020
SP  - 629
EP  - 634
SN  - 2088-5334
PB  - INSIGHT - Indonesian Society for Knowledge and Human Development
KW  - estimation by analogy; performance metrics; MMRE; PRED; software development effort.
UR  - http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=10178
DO  - 10.18517/ijaseit.10.2.10178
ER  - 

RefWorks

RT Journal Article
ID 10178
A1 Shah, Muhammad Arif
A1 Jawawi, Dayang N. A.
A1 Isa, Mohd Adham
A1 Younas, Muhammad
A1 Mustafa, Ahmad
T1 Adopting the Appropriate Performance Measures for Soft Computing-based Estimation by Analogy
JF International Journal on Advanced Science, Engineering and Information Technology
VO 10
IS 2
YR 2020
SP 629
OP 634
SN 2088-5334
PB INSIGHT - Indonesian Society for Knowledge and Human Development
K1 estimation by analogy; performance metrics; MMRE; PRED; software development effort.
LK http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=10178
DO 10.18517/ijaseit.10.2.10178