Improving Convective Cloud Classification with Deep Learning: The CC-Unet Model

Humuntal Rumapea (1), Mohammad Zarlis (2), Syahril Efendy (3), Poltak Sihombing (4)
(1) Department of Computer Science, Universitas Sumatera Utara, Medan, 20155, Indonesia
(2) Department of System Information Management, Bina Nusantara University, Jakarta, 11480, Indonesia
(3) Department of Computer Science, Universitas Sumatera Utara, Medan, 20155, Indonesia
(4) Department of Computer Science, Universitas Sumatera Utara, Medan, 20155, Indonesia
How to cite (IJASEIT):
Rumapea, Humuntal, et al. “Improving Convective Cloud Classification With Deep Learning: The CC-Unet Model”. International Journal on Advanced Science, Engineering and Information Technology, vol. 14, no. 1, Feb. 2024, pp. 28-36, doi:10.18517/ijaseit.14.1.18658.
Analyzing and mitigating natural disasters is a challenging task, and computer science, specifically artificial intelligence (AI), provides the tools and analytical models needed to manage this complexity. Convective clouds, which are closely related to rainfall and can lead to large-scale, prolonged hydrometeorological disasters, are a crucial component to consider. To improve the classification of these clouds, a predictive-analytical deep learning model, called the CC-Unet model, was developed. The model is based on a U-Net architecture and is trained on a dataset of convective cloud images. The researchers used satellite imagery from the Himawari-8 satellite collected in May and October 2021; the images were pre-processed and verified against observational data. The model was evaluated using a random train-test split, and the CC-Unet model achieved an accuracy of 97.29%, higher than the 94.17% of the baseline U-Net model. A significance test using the Wilcoxon method showed that the CC-Unet model's performance differed significantly from that of the U-Net model. Comparing the ground-truth image with the predicted image yielded a low root mean square error of 0.0218, indicating a high level of similarity between the two. Overall, this research demonstrates the potential of AI and deep learning for classifying convective clouds to aid natural disaster management.
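The evaluation steps described in the abstract (paired model comparison via the Wilcoxon signed-rank test, and RMSE between ground-truth and predicted masks) can be sketched as follows. This is a minimal illustration using synthetic data, not the paper's dataset or models; the accuracy arrays and mask shapes below are hypothetical placeholders.

```python
# Illustrative sketch of the abstract's evaluation steps, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-run accuracies for the two models (placeholder values,
# chosen only to echo the reported ~97.29% vs ~94.17% gap).
acc_ccunet = np.array([0.972, 0.974, 0.971, 0.975, 0.973])
acc_unet   = np.array([0.941, 0.943, 0.940, 0.944, 0.942])

# Wilcoxon signed-rank test on the paired accuracy scores: a small p-value
# suggests the two models' results differ significantly.
stat, p_value = stats.wilcoxon(acc_ccunet, acc_unet)

def rmse(y_true, y_pred):
    """Root mean square error between a ground-truth and a predicted mask."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Synthetic stand-ins for a ground-truth mask and a close prediction.
gt = rng.random((64, 64))
pred = gt + rng.normal(0.0, 0.02, size=gt.shape)

print(f"Wilcoxon p-value: {p_value:.4f}, RMSE: {rmse(gt, pred):.4f}")
```

A low RMSE (the paper reports 0.0218) indicates the predicted image closely matches the ground truth, while the Wilcoxon test handles the paired, non-normal accuracy samples without assuming a particular distribution.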

This work is licensed under a Creative Commons Attribution 4.0 International License.