A library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]



This library provides fundamental and advanced functions for working with 3D character animation in deep learning with PyTorch.

This library is maintained by K. S. Ernest (iFire) Lee.


  • Windows 10, Linux or macOS
  • Python 3
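As a quick sanity check of the prerequisites above, the following sketch (not part of this library) verifies that Python 3 is in use and reports whether PyTorch is importable:

```python
import importlib.util
import sys

# Illustrative prerequisite check: Python 3 plus an installed PyTorch.
assert sys.version_info.major == 3, "Python 3 is required"

# find_spec avoids importing torch just to test for its presence.
has_torch = importlib.util.find_spec("torch") is not None
print("PyTorch installed:", has_torch)
```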

Retarget an animation from one skeleton to another.

Each animation must belong to a specific skeleton topology, and each skeleton topology should have 9,000 frames of animation. The required frames per topology may possibly be reduced, but these are the reported numbers.
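To check whether a dataset meets the per-topology frame budget, the frame count of each BVH file can be read from the `Frames:` line of its MOTION section. The helper below is a hypothetical sketch, not part of this library:

```python
import re

def bvh_frame_count(text: str) -> int:
    """Read the frame count from the 'Frames:' line of a BVH MOTION section."""
    match = re.search(r"^Frames:\s*(\d+)", text, flags=re.MULTILINE)
    if match is None:
        raise ValueError("no 'Frames:' line found; not a valid BVH MOTION section")
    return int(match.group(1))

# Minimal BVH MOTION fragment for demonstration.
sample = "MOTION\nFrames: 3000\nFrame Time: 0.033333\n"
total = bvh_frame_count(sample)  # -> 3000
```

Summing this over all clips belonging to one skeleton topology tells you how close that topology is to the 9,000-frame target.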

Motion Retargeting

The test dataset is included in the Motions directory within datasets.

To generate the demo examples with the pretrained model, run

python3 demo --save_dir=pretrained

The results will be saved in the examples directory.

To reproduce the quantitative results with the pretrained model, run

python3 test


  • Install miniconda: scoop install miniconda3

  • Install CUDA and the Python dependencies: conda install -y tqdm scipy tensorboard pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch -c conda-forge

  • Enter the datasets directory and run blender -b -P with the provided conversion script to convert FBX files to BVH files. If you already have BVH files as your dataset, skip this step.

  • Run python datasets/ to simplify the skeleton by removing some less interesting joints (e.g., fingers) and to convert BVH files into npy files. If you use your own data, you'll need to define the simplified skeleton structure in datasets/ ; this information is currently hard-coded. See the comments in the source file for more details. There are four steps to make your own dataset work.

  • The training and testing characters are hard-coded in datasets/ . You'll need to modify them if you want to use your own dataset.
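The last two bullets both refer to configuration that lives in the dataset scripts rather than in options. The sketch below illustrates the shape of that configuration; every name in it is made up for illustration and does not match the library's actual hard-coded values:

```python
# Illustrative only: the real joint lists and character splits are hard-coded
# in the dataset scripts; these names are hypothetical.
FINGER_PREFIXES = ("LeftHand", "RightHand")  # prune finger joints under the hands

def simplify_skeleton(joint_names):
    """Drop less interesting joints (e.g., fingers) from a joint-name list,
    keeping the hand joints themselves."""
    return [j for j in joint_names
            if not any(j.startswith(p) and j != p for p in FINGER_PREFIXES)]

# Hypothetical train/test character split, analogous to what the scripts hard-code.
TRAIN_CHARACTERS = ["CharacterA", "CharacterB"]
TEST_CHARACTERS = ["CharacterC"]

joints = ["Hips", "Spine", "LeftHand", "LeftHandIndex1",
          "RightHand", "RightHandThumb2"]
kept = simplify_skeleton(joints)  # finger joints removed, hands kept
```

Adapting the library to your own data amounts to replacing these lists with the joints and characters of your skeletons.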


After preparing the dataset, simply run

python3 train

It will use default hyper-parameters to train the model and save the trained model in the training directory. More options are available in . You can use TensorBoard to monitor training progress by running

tensorboard --logdir=./training/logs/

Train from scratch

We provide instructions for retraining our models.


The code in the utils directory is mostly taken from Holden et al. [2016].

In addition, part of the MoCap dataset is taken from Adobe Mixamo and from the work of Xia et al.; however, these MoCap datasets have been removed from the git HEAD.

The main deep editing operation provided here, motion retargeting, is based on work published at SIGGRAPH 2020 by Kfir Aberman, Peizhuo Li and Yijia Weng: Skeleton-Aware Networks for Deep Motion Retargeting. Project | Paper | Video

@article{aberman2020skeleton,
  author = {Aberman, Kfir and Li, Peizhuo and Sorkine-Hornung, Olga and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Skeleton-Aware Networks for Deep Motion Retargeting},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {62},
  year = {2020},
  publisher = {ACM}
}