A library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]

KineticTransfer-GAN

Kinetic Transfer GAN ADRs

This library provides fundamental and advanced functions for working with 3D character animation in deep learning, built on PyTorch.

This library is maintained by K. S. Ernest (iFire) Lee.

Prerequisites

  • Windows 10, Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA + cuDNN
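
A quick way to verify the Python/PyTorch/CUDA setup (a minimal sketch; it assumes PyTorch has already been installed, as described in the Dataset section below):

```python
# Environment check: confirms the Python version and whether PyTorch can see
# a CUDA device. The library also runs on CPU, just more slowly.
import sys
import torch

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```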

Retarget one character's animation onto another character's skeleton.

Each animation must belong to a known skeleton topology, and each skeleton topology should have 9,000 frames of animation. The required frames per topology may be reducible, but these are the reported numbers.

Motion Retargeting

The test dataset is included in the Motions directory within datasets.

To generate the demo examples with the pretrained model, run

python3 demo --save_dir=pretrained

The results will be saved in examples.

To reproduce the quantitative results with the pretrained model, run

python3 test

Dataset

  • Install Miniconda (on Windows, via Scoop): scoop install miniconda3

  • Create the conda environment: conda create --name kinetic

  • Initialize conda for PowerShell: run conda init powershell at the base directory.

  • Activate the environment: run conda activate kinetic at the base directory.

  • Restart PowerShell.

  • Install PyTorch with CUDA support and the remaining dependencies: conda install -y tqdm scipy tensorboard pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch -c conda-forge

  • Run blender -b -P datasets/fbx2bvh.py or blender -b -P datasets/gltf2bvh.py to convert animation files to BVH files (a minimal sketch of such a conversion script appears after this list). If your dataset already consists of BVH files, skip this step.

  • REQUIRED: Run python datasets/preprocess.py to simplify the skeleton by removing less interesting joints (e.g. fingers) and to convert BVH files into npy files. If you use your own data, you'll need to define the simplified structure in datasets/bvh_parser.py; this information is currently hard-coded. See the comments in the source file for more details. There are four steps to make your own dataset work.

  • The training and testing characters are hard-coded in datasets/__init__.py; modify it if you want to use your own dataset.
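
The repo's own converters are datasets/fbx2bvh.py and datasets/gltf2bvh.py. For orientation, here is a minimal, hypothetical sketch of what a Blender batch FBX-to-BVH conversion can look like; the directory names and frame-range handling below are assumptions for illustration, not the repo's actual script:

```python
# Hypothetical sketch (not datasets/fbx2bvh.py): batch-convert FBX files to BVH.
# Run headless with:  blender -b -P convert_fbx_to_bvh.py
import glob
import os

import bpy

SRC_DIR = "./datasets/fbx"  # assumed input directory
DST_DIR = "./datasets/bvh"  # assumed output directory

os.makedirs(DST_DIR, exist_ok=True)

for fbx_path in glob.glob(os.path.join(SRC_DIR, "*.fbx")):
    # Start from an empty scene so data from previous files does not accumulate.
    bpy.ops.wm.read_factory_settings(use_empty=True)
    bpy.ops.import_scene.fbx(filepath=fbx_path)

    # The BVH exporter operates on the active armature.
    armatures = [obj for obj in bpy.data.objects if obj.type == "ARMATURE"]
    if not armatures:
        continue
    armature = armatures[0]
    bpy.context.view_layer.objects.active = armature

    # Export the imported action over its full frame range.
    action = armature.animation_data.action if armature.animation_data else None
    if action:
        frame_start, frame_end = int(action.frame_range[0]), int(action.frame_range[1])
    else:
        frame_start, frame_end = 1, 250  # fall back to Blender's default scene range
    out_name = os.path.splitext(os.path.basename(fbx_path))[0] + ".bvh"
    bpy.ops.export_anim.bvh(
        filepath=os.path.join(DST_DIR, out_name),
        frame_start=frame_start,
        frame_end=frame_end,
    )
```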

Train

After preparing the dataset, simply run

python train

It trains the model with the default hyper-parameters and saves the trained model in the training directory. More options are available in option_parser.py. You can monitor training progress with TensorBoard by running

tensorboard --logdir=./training/logs/
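
For reference, this is roughly how scalars end up under ./training/logs/; the sketch assumes the training code uses torch.utils.tensorboard.SummaryWriter, and the tag names and run directory below are illustrative rather than the exact ones train writes:

```python
# Illustrative only: writing a scalar that `tensorboard --logdir=./training/logs/`
# can display. The loss value here is a placeholder, not a real training loss.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="./training/logs/example_run")
for step in range(100):
    placeholder_loss = 1.0 / (step + 1)
    writer.add_scalar("loss/train", placeholder_loss, step)
writer.close()
```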

Train from scratch

To retrain the models from scratch, prepare the data as described in the Dataset section and then follow the Train section above.

Acknowledgments

The code in the utils directory is mostly taken from Holden et al. [2016].

In addition, part of the MoCap dataset was taken from Adobe Mixamo and from the work of Xia et al.; however, these MoCap datasets have been removed from the git HEAD.

The main deep editing operation provided here, motion retargeting, is based on work published at SIGGRAPH 2020 by Kfir Aberman, Peizhuo Li and Yijia Weng: Skeleton-Aware Networks for Deep Motion Retargeting (Project | Paper | Video)

@article{aberman2020skeleton,
  author = {Aberman, Kfir and Li, Peizhuo and Sorkine-Hornung, Olga and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Skeleton-Aware Networks for Deep Motion Retargeting},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {62},
  year = {2020},
  publisher = {ACM}
}