An Elastic Frame Rate Up-Conversion for Sequential Omnidirectional Images
How to cite (IJASEIT):
J. Zhou, Y. Fu, Y. Yang, and A. T. S. Ho, “Distributed video coding using interval overlapped arithmetic coding,” Signal Process. Image Commun., vol. 76, pp. 118-124, 2019, doi: 10.1016/j.image.2019.03.016.
R. Yang, M. Xu, T. Liu, Z. Wang, and Z. Guan, “Enhancing Quality for HEVC Compressed Videos,” IEEE Trans. Circuits Syst. Video Technol., 2019, doi: 10.1109/TCSVT.2018.2867568.
D. Checa and A. Bustillo, “A review of immersive virtual reality serious games to enhance learning and training,” Multimed. Tools Appl., 2020, doi: 10.1007/s11042-019-08348-9.
A. Habibian, T. Van Rozendaal, J. Tomczak, and T. Cohen, “Video compression with rate-distortion autoencoders,” 2019, doi: 10.1109/ICCV.2019.00713.
W. Bao, X. Zhang, L. Chen, L. Ding, and Z. Gao, “High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion,” IEEE Trans. Image Process., 2018, doi: 10.1109/TIP.2018.2825100.
M. Zhang, W. Zhou, H. Wei, X. Zhou, and Z. Duan, “Frame level rate control algorithm based on GOP level quality dependency for low-delay hierarchical video coding,” Signal Process. Image Commun., vol. 88, p. 115964, 2020, doi: 10.1016/j.image.2020.115964.
C.-H. Yeh, J.-R. Lin, M.-J. Chen, C.-H. Yeh, C.-A. Lee, and K.-H. Tai, “Fast prediction for quality scalability of High Efficiency Video Coding Scalable Extension,” J. Vis. Commun. Image Represent., vol. 58, pp. 462-476, 2019, doi: 10.1016/j.jvcir.2018.12.021.
W. Shen, W. Bao, G. Zhai, L. Chen, X. Min, and Z. Gao, “Blurry Video Frame Interpolation,” 2020, doi: 10.1109/CVPR42600.2020.00516.
G. G. Lee, C. F. Chen, C. J. Hsiao, and J. C. Wu, “Bi-directional trajectory tracking with variable block-size motion estimation for frame rate up-convertor,” IEEE J. Emerg. Sel. Top. Circuits Syst., 2014, doi: 10.1109/JETCAS.2014.2298923.
A. Jiménez-Moreno, E. Martínez-Enríquez, and F. Díaz-de-María, “Bayesian adaptive algorithm for fast coding unit decision in the High Efficiency Video Coding (HEVC) standard,” Signal Process. Image Commun., vol. 56, pp. 1-11, 2017, doi: 10.1016/j.image.2017.04.004.
H. Liu, R. Xiong, D. Zhao, S. Ma, and W. Gao, “Multiple hypotheses Bayesian frame rate up-conversion by adaptive fusion of motion-compensated interpolations,” IEEE Trans. Circuits Syst. Video Technol., 2012, doi: 10.1109/TCSVT.2012.2197081.
Y. Yang, L. Shen, H. Yang, and P. An, “A content-based rate control algorithm for screen content video coding,” J. Vis. Commun. Image Represent., vol. 60, pp. 328-338, 2019, doi: 10.1016/j.jvcir.2019.02.031.
Y. Chen, R. Hu, J. Xiao, and Z. Wang, “Multisource surveillance video coding with synthetic reference frame,” J. Vis. Commun. Image Represent., vol. 65, p. 102685, 2019, doi: 10.1016/j.jvcir.2019.102685.
S. J. Yoon, H. H. Kim, and M. Kim, “Hierarchical Extended Bilateral Motion Estimation-Based Frame Rate Upconversion Using Learning-Based Linear Mapping,” IEEE Trans. Image Process., 2018, doi: 10.1109/TIP.2018.2861567.
P. A. Brousseau and S. Roy, “Calibration of axial fisheye cameras through generic virtual central models,” 2019, doi: 10.1109/ICCV.2019.00414.
W. Gao and S. Shen, “Dual-fisheye omnidirectional stereo,” 2017, doi: 10.1109/IROS.2017.8206587.
S. Ji, Z. Qin, J. Shan, and M. Lu, “Panoramic SLAM from a multiple fisheye camera rig,” ISPRS J. Photogramm. Remote Sens., 2020, doi: 10.1016/j.isprsjprs.2019.11.014.
P. Liu, L. Heng, T. Sattler, A. Geiger, and M. Pollefeys, “Direct visual odometry for a fisheye-stereo camera,” 2017, doi: 10.1109/IROS.2017.8205988.
L. F. Posada, A. Velasquez-Lopez, F. Hoffmann, and T. Bertram, “Semantic mapping with omnidirectional vision,” 2018, doi: 10.1109/ICRA.2018.8461165.
C. Won, J. Ryu, and J. Lim, “SweepNet: Wide-baseline omnidirectional depth estimation,” 2019, doi: 10.1109/ICRA.2019.8793823.
A. S. Satyawan, J. Hara, and H. Watanabe, “Automatic self-improvement scheme in optical flow-based motion estimation for sequential fisheye images,” ITE Trans. Media Technol. Appl., 2019, doi: 10.3169/mta.7.20.
S. Baker and I. Matthews, “Lucas-Kanade 20 years on: A unifying framework,” Int. J. Comput. Vis., 2004, doi: 10.1023/B:VISI.0000011205.11775.fd.
MathWorks, “MATLAB 2016, Tutorial.” www.mathworks.com/products/matlab.html (accessed Aug. 01, 2018).
Blender, “Blender 2.81, Tutorial.” www.blender.org (accessed Mar. 01, 2018).
N. Asuni and A. Giachetti, “Testimages: A large-scale archive for testing visual devices and basic image processing algorithms,” 2014, doi: 10.2312/stag.20141242.
Ricoh, “Ricoh Theta S, User Manual.” theta360.com/en/about/theta/s.html (accessed Jan. 01, 2017).