DeepCloth Homepage

IEEE TPAMI


THuman3.0 Dataset: a high-resolution 3D human dataset containing multiple scans for each human-garment combination, suitable for research on dynamic 3D human modeling

DeepCloth: Neural Garment Representation for
Shape and Style Editing

Zhaoqi Su1, Tao Yu1, Yangang Wang2, Yebin Liu1

1Tsinghua University, Beijing, China
2Southeast University, Nanjing, China


Figure 1: Demonstration of garment shape inference, animation, and 3D editing with our DeepCloth method. From left to right: the input 3D scan; garment shape inference and reconstruction using our neural representation; garment animation under a new pose; and two garment animations with garment style and shape editing.


Abstract

Garment representation, editing and animation are challenging topics in the area of computer vision and graphics. It remains difficult for existing garment representations to achieve smooth and plausible transitions between different shapes and topologies. In this work, we introduce DeepCloth, a unified framework for garment representation, reconstruction, animation and editing. Our framework contains three components. First, we represent garment geometry with a "topology-aware UV-position map", which allows for a unified description of garments with different shapes and topologies by augmenting the UV-position map with an additional topology-aware UV-mask. Second, to enable garment reconstruction and editing, we contribute a method to embed the UV-based representations into a continuous feature space, so that garment shapes can be reconstructed by optimization in the latent space and edited by controlling the latent code. Finally, we propose a garment animation method that unifies our neural garment representation with body shape and pose, achieving plausible animation results by leveraging the dynamic information encoded in our shape and style representation, even under drastic garment editing operations. In summary, DeepCloth moves a step forward toward a more flexible and general 3D garment digitization framework. Experiments demonstrate that our method achieves state-of-the-art garment representation performance compared with previous methods.
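The paper defines the representation precisely in the full text; as a rough, hypothetical illustration of the core idea, the sketch below (plain NumPy, with made-up toy values) shows how a UV-position map paired with a binary UV-mask can describe garments of different topologies: the 3D positions live in a fixed-size map, and the mask alone decides which texels belong to the garment surface.

```python
import numpy as np

def uv_map_to_points(uv_position_map, uv_mask):
    """Recover the 3D garment point set from a UV-position map.

    uv_position_map: (H, W, 3) array storing one 3D surface position per texel.
    uv_mask: (H, W) binary array; 1 marks texels covered by the garment, so
             different garment topologies are expressed purely through the
             mask while the map resolution stays fixed.
    """
    return uv_position_map[uv_mask.astype(bool)]  # (N, 3) valid surface points

# Toy 4x4 example (values are illustrative, not from the paper):
# a "garment" occupying only the top-left 2x2 block of texels.
H = W = 4
positions = np.zeros((H, W, 3))
positions[0, 0] = [0.0, 1.0, 0.0]
positions[0, 1] = [0.1, 1.0, 0.0]
positions[1, 0] = [0.0, 0.9, 0.0]
positions[1, 1] = [0.1, 0.9, 0.0]
mask = np.zeros((H, W))
mask[:2, :2] = 1

points = uv_map_to_points(positions, mask)
print(points.shape)  # (4, 3): only the masked texels survive
```

Because the map and mask have a fixed shape regardless of garment topology, both can be produced by a standard image-generation network, which is what makes smooth transitions between shapes and topologies tractable in a learned latent space.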

Figure 2: Basic structure of our garment shape parametrization, animation, and 3D inference modules.


Results


Technical Paper


Video Results


Citation

Zhaoqi Su, Tao Yu, Yangang Wang, and Yebin Liu. Deepcloth: Neural garment representation for shape and style editing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(2):1581–1593, 2023.


@article{deepcloth_su2022,
  author={Su, Zhaoqi and Yu, Tao and Wang, Yangang and Liu, Yebin},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={DeepCloth: Neural Garment Representation for Shape and Style Editing},
  year={2023},
  volume={45},
  number={2},
  pages={1581--1593},
  doi={10.1109/TPAMI.2022.3168569}
}