GRU and XGBoost Performance with Hyperparameter Tuning Using GridSearchCV and Bayesian Optimization on an IoT-Based Weather Prediction System

Hendri Darmawan (1), Mike Yuliana (2), Moch. Zen Samsono Hadi (3)
(1) Department of Postgraduate, Politeknik Elektronika Negeri Surabaya (PENS), Surabaya, 60111, Indonesia
(2) Department of Postgraduate, Politeknik Elektronika Negeri Surabaya (PENS), Surabaya, 60111, Indonesia
(3) Department of Postgraduate, Politeknik Elektronika Negeri Surabaya (PENS), Surabaya, 60111, Indonesia
How to cite (IJASEIT):
Darmawan, Hendri, et al. “GRU and XGBoost Performance With Hyperparameter Tuning Using GridSearchCV and Bayesian Optimization on an IoT-Based Weather Prediction System”. International Journal on Advanced Science, Engineering and Information Technology, vol. 13, no. 3, June 2023, pp. 851-62, doi:10.18517/ijaseit.13.3.18377.
Weather is essential to human life, yet its variability makes it difficult to forecast. We evaluated and compared the accuracy of two machine learning algorithms, GRU and XGBoost, in predicting weather patterns, tuning the GRU hyperparameters with GridSearchCV and the XGBoost hyperparameters with Bayesian optimization. We used regression to predict weather sensor data and classification to predict rainfall over the following four days. We then deployed the best-performing models to a cloud server and connected them to a local IoT device with weather sensors in Sedati, Sidoarjo Regency, Indonesia. We conducted tests using data from BMKG Juanda Sidoarjo and data from the local IoT device. In the first stage, the XGBoost regression model outperformed the GRU model, with an average RMSE of 1.2728125 versus 1.551666667 for GRU regression. In the second stage, however, GRU regression performed better, with an average RMSE of 2.23 against 2.28 for XGBoost regression. In the classification tests, the GRU model achieved a higher F1 score of 0.88 in the first stage, compared with 0.86 for the XGBoost classifier. Both models reached the same accuracy of 0.75 when tested with IoT data; however, the GRU classification model was preferable because it better captured the context of the prediction, assigning a lower likelihood of rain when it was not raining.
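The hyperparameter search described above can be illustrated with a minimal, self-contained sketch. This is not the paper's actual pipeline (the study uses scikit-learn's GridSearchCV for a Keras GRU and Bayesian optimization for XGBoost); instead it shows, in plain Python, what an exhaustive grid search minimizing RMSE amounts to. The `train_fn` model and the `weight` hyperparameter are purely hypothetical stand-ins.

```python
import itertools
import math

def rmse(y_true, y_pred):
    """Root mean square error, the regression metric used in the paper."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def grid_search(train_fn, param_grid, X, y):
    """Try every combination of hyperparameters and keep the one whose
    fitted model yields the lowest RMSE. This is the exhaustive strategy
    GridSearchCV automates (cross-validation omitted for brevity)."""
    best_params, best_score = None, float("inf")
    for combo in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        predict = train_fn(X, y, **params)
        score = rmse(y, [predict(x) for x in X])
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical stand-in model: prediction = weight * x, with `weight`
# treated as the hyperparameter being tuned (purely illustrative).
def train_fn(X, y, weight):
    return lambda x: weight * x

X = [1, 2, 3, 4]
y = [2.1, 3.9, 6.2, 7.8]
best_params, best_score = grid_search(
    train_fn, {"weight": [1.0, 1.5, 2.0, 2.5]}, X, y
)
# weight = 2.0 gives the lowest RMSE on this toy data
```

Bayesian optimization differs from this exhaustive loop only in how the next candidate is chosen: rather than enumerating every combination, it fits a surrogate model to the scores seen so far and proposes the most promising point next, which is why it scales better to large search spaces such as XGBoost's.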

