Multi Focus Image Fusion with Region-Center Based Kernel

Ismail Ismail (1), Kamarul Hawari (2)
(1) Electrical Department, Politeknik Negeri Padang, Limau Manis, Padang, 25126, Indonesia
(2) Electrical and Electronics Engineering Department, Universiti Malaysia Pahang, Lebuh Raya Tun Razak, Pahang, 26300, Malaysia
How to cite (IJASEIT):
Ismail, Ismail, and Kamarul Hawari. “Multi Focus Image Fusion With Region-Center Based Kernel”. International Journal on Advanced Science, Engineering and Information Technology, vol. 11, no. 1, Feb. 2021, pp. 57-63, doi:10.18517/ijaseit.11.1.11149.
Cameras are ubiquitous nowadays. They provide highly accurate visual information and help humans carry out specific tasks correctly, and they have become important tools for accurate computation in fields such as medical diagnostics, robotics, and remote sensing. On the other hand, a camera cannot capture the detailed information of an entire scene in a single image: the lens's limited depth of field leaves regions beyond the focused object out of focus, so many images are needed to obtain the focus information of the whole scene. To address this limitation, researchers developed the multi-focus image fusion process. This process selects the detailed information from a sequence of images and fuses it into one focused image, from which users, whether human or machine, can read the focus information more easily. Researchers have since developed multi-focus image fusion methods with various advanced procedures and algorithms, and the application of multi-focus image fusion to new fields has multiplied over the last two decades, making it possible to build fused images accurately and efficiently. The proposed method is a new approach to multi-focus image fusion that works with a region-center based kernel. The kernel processes the input images to predict the detailed information of the scene. The method is robust against unexpected effects and noise sensitivity, and it generates a fused image with high accuracy. Finally, the assessment is done based on mutual information and structural similarity.
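The general pipeline described above, measuring per-pixel focus with a local kernel and selecting, for each pixel, the source image whose neighborhood is sharper, can be sketched as follows. This is a generic illustration, not the paper's region-center based kernel: the focus measure here is a stand-in assumption (local variance in a k x k window), and the function names `local_focus_measure` and `fuse` are hypothetical.

```python
import numpy as np

def local_focus_measure(img, k=7):
    """Local variance in a k x k window as a simple focus measure.

    Assumption: local variance is used as a stand-in for the paper's
    region-center based kernel, whose exact form is not reproduced here.
    """
    img = img.astype(np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            # Variance of the k x k neighborhood centered at (i, j);
            # in-focus (textured) regions score higher than blurred ones.
            out[i, j] = padded[i:i + k, j:j + k].var()
    return out

def fuse(img_a, img_b, k=7):
    """Fuse two registered source images by picking, per pixel,
    the source whose neighborhood is more in focus."""
    fa = local_focus_measure(img_a, k)
    fb = local_focus_measure(img_b, k)
    return np.where(fa >= fb, img_a, img_b)
```

With two registered source images, each sharp in a different region, `fuse` keeps the textured (high-variance) half of each input, which is the behavior any per-pixel focus-measure fusion scheme should exhibit before more refined region correction is applied.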

