Conditional Max-preserving Normalization: An Innovative Approach to Combining Diverse Classification Models
How to cite (IJASEIT) :
This work is licensed under a Creative Commons Attribution 4.0 International License.