International Journal on Advanced Science, Engineering and Information Technology, Vol. 13 (2023) No. 4, pages: 1510-1517, DOI:10.18517/ijaseit.13.4.19019

Coefficient Prediction for Physically-based Cloth Simulation Using Deep Learning

Makara Mao, Hongly Va, Min Hong


Physically-based cloth simulation models cloth as a collection of particles, or nodes, connected by various types of constraints. These particles interact with one another and with the environment, for example under gravity or during collisions, so that the cloth's behavior can be simulated accurately. One essential component of such simulations is the set of material parameters, or coefficients, that dictate the cloth's physical properties, such as stiffness and damping. Deep learning-based coefficient prediction applies machine learning techniques, specifically deep neural networks, to predict these material parameters from the cloth's geometric and physical properties. The model is trained on a dataset of simulated cloth instances whose material parameters are known; its input is a set of geometric and physical properties of the cloth, such as dimensions, orientation, and velocity, and its output is the set of material parameters that best reproduce the cloth's behavior under those conditions. This paper proposes a deep learning method that frames coefficient prediction as a multi-label video classification task. The training data is generated from a physics-based, mass-spring simulator, and the method is evaluated on several cloth scenarios: fabric falling freely, fabric colliding with an object, and fabric affected by airflow. The results show that the transformer model achieves substantially higher accuracy than the other models evaluated (LSTM and GRU). This study provides a promising approach for predicting the coefficients of virtual cloth in physically-based simulations.
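To make the role of the predicted coefficients concrete, the following is a minimal sketch (not the paper's implementation) of one explicit-Euler step of a mass-spring system, in which the stiffness `k_s` and damping `k_d` are exactly the kind of coefficients the proposed model learns to predict from the cloth's motion. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def step_mass_spring(pos, vel, edges, rest_len, k_s, k_d,
                     mass=1.0, dt=0.01, gravity=(0.0, -9.81, 0.0)):
    """One explicit-Euler step of a mass-spring cloth system.

    pos, vel  : (N, 3) arrays of particle positions and velocities
    edges     : list of (i, j) index pairs for springs
    rest_len  : rest length of each spring
    k_s, k_d  : spring stiffness and damping coefficients
    """
    # Start from the external gravity force on every particle.
    forces = np.tile(np.asarray(gravity) * mass, (len(pos), 1))
    for (i, j), length0 in zip(edges, rest_len):
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d)
        if dist < 1e-12:            # avoid dividing by zero length
            continue
        direction = d / dist
        rel_vel = vel[j] - vel[i]
        # Hooke spring force plus damping projected onto the spring axis.
        f = (k_s * (dist - length0) + k_d * np.dot(rel_vel, direction)) * direction
        forces[i] += f
        forces[j] -= f
    vel = vel + dt * forces / mass
    pos = pos + dt * vel
    return pos, vel
```

For example, two particles joined by a stretched spring (with gravity disabled) are pulled toward each other after a step; varying `k_s` and `k_d` changes how quickly the spring recovers, which is the behavioral signature a video classifier can learn to associate with the coefficient values.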


Keywords: deep learning; computer vision; cloth simulation; transformer; LSTM; GRU
