Analysis and Evaluation of PointNet for Indoor Office Point Cloud Semantic Segmentation
References
F. Mortari, S. Zlatanova, L. Liu, G. Sithole, and J. Zhao, "Space Subdivision for Indoor Applications," Dec. 2014, doi: 10.13140/2.1.2914.2081.
S. Zlatanova, U. Isikdag, and M. S. Fine, “3D Indoor Models and Their Applications,” Encycl. GIS, pp. 1-12, 2015, doi: 10.1007/978-3-319-23519-6.
Z. Xiong and T. Wang, “Research on BIM Reconstruction Method Using Semantic Segmentation Point Cloud Data Based on PointNet,” IOP Conf. Ser. Earth Environ. Sci., vol. 719, no. 2, 2021, doi: 10.1088/1755-1315/719/2/022042.
S. A. Bello, S. Yu, C. Wang, J. M. Adam, and J. Li, “Review: Deep learning on 3D point clouds,” Remote Sens., vol. 12, no. 11, 2020, doi: 10.3390/rs12111729.
S. Liu, M. Zhang, P. Kadam, and C.-C. J. Kuo, 3D Point Cloud Analysis: Traditional, Deep Learning, and Explainable Machine Learning Methods. Switzerland: Springer International Publishing, 2021.
Z. Ning, L. Tang, S. Qi, and Y. Liu, "Deep Learning on 3D Point Cloud for Semantic Segmentation," Smart Innov. Syst. Technol., vol. 250, pp. 275-282, 2022, doi: 10.1007/978-981-16-4039-1_27.
M. E. Atik, Z. Duran, and D. Z. Seker, “Machine learning-based supervised classification of point clouds using multiscale geometric features,” ISPRS Int. J. Geo-Information, vol. 10, no. 3, 2021, doi: 10.3390/ijgi10030187.
Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu, and M. Bennamoun, "Deep Learning for 3D Point Clouds: A Survey," IEEE Trans. Pattern Anal. Mach. Intell., 2020, doi: 10.1109/TPAMI.2020.3005434.
Y. Xie, J. Tian, and X. X. Zhu, "Linking Points With Labels in 3D: A Review of Point Cloud Semantic Segmentation," arXiv preprint, Aug. 2019.
G. Rocha, L. Mateus, J. Fernández, and V. Ferreira, "A scan-to-BIM methodology applied to heritage buildings," Heritage, vol. 3, no. 1, pp. 47-65, 2020, doi: 10.3390/heritage3010004.
H. Macher, T. Landes, and P. Grussenmeyer, “From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings,” Appl. Sci., vol. 7, no. 10, Oct. 2017, doi: 10.3390/app7101030.
Y. Perez-Perez, M. Golparvar-Fard, and K. El-Rayes, "Segmentation of point clouds via joint semantic and geometric features for 3D modeling of the built environment," Autom. Constr., vol. 125, p. 103584, 2021, doi: 10.1016/j.autcon.2021.103584.
A. Nguyen and B. Le, "3D Point Cloud Segmentation: A Survey," in 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), 2013, pp. 225-230. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6758588
R. O. Duda and P. E. Hart, “Use of the Hough Transformation to Detect Lines and Curves in Pictures,” Commun. ACM, vol. 15, no. 1, pp. 11-15, 1972, doi: 10.1145/361237.361242.
F. Poux, C. Mattes, and L. Kobbelt, “Unsupervised Segmentation of Indoor 3D Point Cloud: Application to Object-Based Classification,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. - ISPRS Arch., vol. 44, no. 4/W1, pp. 111-118, 2020, doi: 10.5194/isprs-archives-XLIV-4-W1-2020-111-2020.
L. S. Runceanu, S. Becker, N. Haala, and D. Fritsch, “Indoor Point Cloud Segmentation for Automatic Object Interpretation,” pp. 147-159, 2017, [Online]. Available: https://www.dgpf.de/src/tagung/jt2017/proceedings/proceedings/papers/15_DGPF2017_Runceanu_et_al.pdf
T. Watanabe, “A Fuzzy RANSAC Algorithm Based on Reinforcement Learning Concept,” in IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2013, pp. 1-6.
M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," Commun. ACM, vol. 24, no. 6, pp. 381-395, 1981, doi: 10.1145/358669.358692.
R. Schnabel, R. Wahl, and R. Klein, "Efficient RANSAC for Point-Cloud Shape Detection," Comput. Graph. Forum, vol. 26, no. 2, pp. 214-226, 2007.
L. Li, F. Yang, H. Zhu, D. Li, Y. Li, and L. Tang, “An improved RANSAC for 3D point cloud plane segmentation based on normal distribution transformation cells,” Remote Sens., vol. 9, no. 5, 2017, doi: 10.3390/rs9050433.
A. Adam, E. Chatzilari, S. Nikolopoulos, and I. Kompatsiaris, “H-RANSAC: A Hybrid Point Cloud Segmentation Combining 2D and 3D Data,” ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., vol. 4, no. 2, pp. 1-8, 2018, doi: 10.5194/isprs-annals-IV-2-1-2018.
C. R. Qi, H. Su, K. Mo, and L. J. Guibas, "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017.
Z. Wu et al., "3D ShapeNets: A deep representation for volumetric shapes," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 1912-1920, 2015, doi: 10.1109/CVPR.2015.7298801.
D. Maturana and S. Scherer, "VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition," IEEE Int. Conf. Intell. Robot. Syst., pp. 922-928, 2015.
H. Y. Meng, L. Gao, Y. K. Lai, and D. Manocha, "VV-Net: Voxel VAE Net with Group Convolutions for Point Cloud Segmentation," Proc. IEEE Int. Conf. Comput. Vis., pp. 8499-8507, 2019, doi: 10.1109/ICCV.2019.00859.
L. Tchapmi, C. Choy, I. Armeni, J. Gwak, and S. Savarese, "SEGCloud: Semantic segmentation of 3D point clouds," in Proc. 2017 Int. Conf. 3D Vision (3DV), 2018, pp. 537-547, doi: 10.1109/3DV.2017.00067.
D. Rethage, J. Wald, J. Sturm, N. Navab, and F. Tombari, “Fully-Convolutional Point Networks for Large-Scale Point Clouds,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 11208 LNCS, pp. 625-640, 2018, doi: 10.1007/978-3-030-01225-0_37.
A. Dai, D. Ritchie, M. Bokeloh, S. Reed, J. Sturm, and M. Nießner, "ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 4578-4587, 2018, doi: 10.1109/CVPR.2018.00481.
K. Babacan, L. Chen, and G. Sohn, “Semantic Segmentation Of Indoor Point Clouds Using Convolutional Neural Network,” ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., vol. 4, no. 4W4, pp. 101-108, 2017, doi: 10.5194/isprs-annals-IV-4-W4-101-2017.
Y. Rao, M. Zhang, Z. Cheng, J. Xue, J. Pu, and Z. Wang, “Semantic Point Cloud Segmentation Using Fast Deep Neural Network and DCRF,” Sensors, vol. 21, no. 8, pp. 1-16, 2021, doi: 10.3390/s21082731.
H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller, "Multi-view convolutional neural networks for 3D shape recognition," Proc. IEEE Int. Conf. Comput. Vis., pp. 945-953, 2015, doi: 10.1109/ICCV.2015.114.
A. Boulch, J. Guerry, B. Le Saux, and N. Audebert, “SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks,” Comput. Graph., vol. 71, pp. 189-198, 2018, doi: 10.1016/j.cag.2017.11.010.
B. Wu, A. Wan, X. Yue, and K. Keutzer, “SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud,” in Proceedings - IEEE International Conference on Robotics and Automation, 2018, pp. 1887-1893. doi: 10.1109/ICRA.2018.8462926.
A. Milioto, I. Vizzo, J. Behley, and C. Stachniss, "RangeNet++: Fast and Accurate LiDAR Semantic Segmentation," IEEE Int. Conf. Intell. Robot. Syst., pp. 4213-4220, 2019, doi: 10.1109/IROS40897.2019.8967762.
C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas, "Volumetric and multi-view CNNs for object classification on 3D data," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 5648-5656, 2016, doi: 10.1109/CVPR.2016.609.
A. Dai and M. Nießner, "3DMV: Joint 3D-multi-view prediction for 3D semantic scene segmentation," Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 11214 LNCS, pp. 458-474, 2018, doi: 10.1007/978-3-030-01249-6_28.
B. Zhang, S. Huang, W. Shen, and Z. Wei, "Explaining the PointNet: What has been learned inside the PointNet?," IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work., pp. 71-74, 2019.
A. Zaganidis, L. Sun, T. Duckett, and G. Cielniak, “Integrating Deep Semantic Segmentation into 3-D Point Cloud Registration,” IEEE Robot. Autom. Lett., vol. 3, no. 4, pp. 2942-2949, 2018, doi: 10.1109/LRA.2018.2848308.
H. Zhao, L. Jiang, J. Jia, P. Torr, and V. Koltun, “Point Transformer,” Proc. IEEE Int. Conf. Comput. Vis., pp. 16239-16248, 2021, doi: 10.1109/ICCV48922.2021.01595.
A. Komarichev, Z. Zhong, and J. Hua, "A-CNN: Annularly convolutional neural networks on point clouds," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 7413-7422, 2019, doi: 10.1109/CVPR.2019.00760.
D. Girardeau-Montaut, "CloudCompare Wiki," 2015. https://www.cloudcompare.org/doc/wiki/index.php/Main_Page (accessed Jan. 01, 2023).
M. Weinmann, B. Jutzi, C. Mallet, and M. Weinmann, “Geometric Features And Their Relevance For 3D Point Cloud Classification,” ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., vol. 4, no. 1W1, pp. 157-164, 2017, doi: 10.5194/isprs-annals-IV-1-W1-157-2017.
X. Yan, "Pointnet/Pointnet++ Pytorch," 2019. https://github.com/yanx27/Pointnet_Pointnet2_pytorch (accessed Jan. 05, 2023).
M. A. Rahman and Y. Wang, “Optimizing intersection-over-union in deep neural networks for image segmentation,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 10072 LNCS, pp. 234-244, 2016, doi: 10.1007/978-3-319-50835-1_22.
C. Wijaya, “Point Cloud Semantic Segmentation for Indoor Modeling Using PointNet,” Unpublished Master Thesis of Geomatics Engineering, Department of Geodetic Engineering, Universitas Gadjah Mada, 2023.
L. Landrieu and M. Simonovsky, "Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 4558-4567, 2018, doi: 10.1109/CVPR.2018.00479.
M. Weinmann, A. Schmidt, C. Mallet, S. Hinz, F. Rottensteiner, and B. Jutzi, "Contextual classification of point cloud data by exploiting individual 3D neighbourhoods," ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., vol. 2, no. 3W4, pp. 271-278, 2015, doi: 10.5194/isprsannals-II-3-W4-271-2015.
J. Zhang, X. Lin, and X. Ning, “SVM-Based classification of segmented airborne LiDAR point clouds in urban areas,” Remote Sens., vol. 5, no. 8, pp. 3749-3775, 2013, doi: 10.3390/rs5083749.
A. Boulch, B. Le Saux, and N. Audebert, "Unstructured point cloud semantic labeling using deep segmentation networks," Eurographics Work. 3D Object Retrieval, EG 3DOR, pp. 17-24, 2017, doi: 10.2312/3dor.20171047.
H. S. Koppula, A. Anand, T. Joachims, and A. Saxena, “Semantic labeling of 3D point clouds for indoor scenes,” Adv. Neural Inf. Process. Syst. 24 25th Annu. Conf. Neural Inf. Process. Syst. 2011, NIPS 2011, pp. 1-9, 2011.
B. Haznedar, R. Bayraktar, A. E. Ozturk, and Y. Arayici, “Implementing PointNet for point cloud segmentation in the heritage context,” Herit. Sci., vol. 11, no. 1, pp. 1-18, 2023, doi: 10.1186/s40494-022-00844-w.
A. Nurunnabi, F. N. Teferle, J. Li, R. C. Lindenbergh, and S. Parvaz, "Investigation of PointNet for Semantic Segmentation of Large-Scale Outdoor Point Clouds," in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, 2021, vol. 46, no. 4/W5-2021, pp. 397-404, doi: 10.5194/isprs-Archives-XLVI-4-W5-2021-397-2021.

This work is licensed under a Creative Commons Attribution 4.0 International License.