A library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]
This library provides fundamental and advanced functions for working with 3D character animation in deep learning, built on PyTorch.
This library is maintained by K. S. Ernest (iFire) Lee.
Retarget the motion of one character onto another character's skeleton.
Each animation belongs to a particular skeleton topology, and each skeleton topology should have about 9,000 frames of animation. The number of frames per topology may be reduced, but 9,000 is the figure used for the reported results.
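If you are assembling your own data, a quick way to check how many frames a topology has is to read the `Frames:` entry from the MOTION section of every BVH file. This is only a convenience sketch; the dataset path below is a placeholder, not a path this repository defines.

```python
# Convenience sketch: total the motion frames in a directory of BVH files.
# The dataset directory below is a placeholder; point it at your own BVH data.
from pathlib import Path

def bvh_frame_count(path: Path) -> int:
    """Return the 'Frames:' value from the MOTION section of a BVH file."""
    with path.open() as f:
        for line in f:
            if line.strip().startswith("Frames:"):
                return int(line.split(":")[1])
    raise ValueError(f"no 'Frames:' entry found in {path}")

if __name__ == "__main__":
    dataset_dir = Path("datasets/my_topology")  # placeholder path
    total = 0
    for bvh in sorted(dataset_dir.rglob("*.bvh")):
        frames = bvh_frame_count(bvh)
        total += frames
        print(f"{bvh}: {frames} frames")
    print(f"total: {total} frames (about 9,000 per topology is the reported setup)")
```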
The test dataset is included in the `Motions` directory within `datasets`.
To generate the demo examples with the pretrained model, run `python3 demo --save_dir=pretrained`. The results will be saved in the `examples` directory.
To reproduce the quantitative results with the pretrained model, run `python3 test`.
1. Install Miniconda: `scoop install miniconda3`
2. Create the conda environment: `conda create --name kinetic`
3. Initialize conda for PowerShell from the base directory: `conda init powershell`
4. Restart PowerShell.
5. Activate the environment from the base directory: `conda activate kinetic`
6. Install the dependencies, including PyTorch with CUDA support: `conda install -y tqdm scipy tensorboard pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch -c conda-forge`
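As a quick sanity check (not part of this repository), you can confirm from the new environment that the GPU build of PyTorch was picked up:

```python
# Sanity check: confirm the conda environment sees the CUDA-enabled PyTorch build.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```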
Run `blender -b -P datasets/fbx2bvh.py` or `blender -b -P datasets/gltf2bvh.py` to convert animation files to BVH files. If your dataset already consists of BVH files, skip this step.
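For reference, the conversion runs inside Blender's bundled Python via the `bpy` API. The sketch below shows the general idea only; the paths are placeholders, and the actual `datasets/fbx2bvh.py` may batch files and handle frame ranges differently.

```python
# Sketch of an FBX -> BVH conversion run with `blender -b -P this_script.py`.
# Paths are placeholders; the repository's datasets/fbx2bvh.py may differ in detail.
import bpy

src = "datasets/fbx/example.fbx"  # placeholder input
dst = "datasets/bvh/example.bvh"  # placeholder output

# Start from an empty scene so only the imported armature is exported.
bpy.ops.wm.read_factory_settings(use_empty=True)

# Import the FBX animation.
bpy.ops.import_scene.fbx(filepath=src)

# BVH export operates on the active object, which must be an armature.
armature = next(obj for obj in bpy.context.scene.objects if obj.type == "ARMATURE")
bpy.context.view_layer.objects.active = armature

bpy.ops.export_anim.bvh(filepath=dst)
```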
REQUIRED: Run `python datasets/preprocess.py` to simplify the skeleton by removing some less interesting joints (e.g. fingers) and to convert the BVH files into npy files. If you use your own data, you will need to define the simplified structure in `datasets/bvh_parser.py`. This information is currently hard-coded; see the comments in the source file for more details. There are four steps to make your own dataset work.
The training and testing characters are hard-coded in `datasets/__init__.py`. You will need to modify it if you want to use your own dataset.
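For orientation only: the simplified structure boils down to a list of the joints to keep. The names below are purely illustrative (typical Mixamo-style joint names), not the actual contents of `datasets/bvh_parser.py`; the real list must match the joint names in your BVH files.

```python
# Purely illustrative example of a simplified joint list; not the repository's actual data.
# Minor joints such as fingers and toes are dropped; major limb and spine joints are kept.
simplified_joints = [
    "Hips",
    "Spine", "Spine1", "Neck", "Head",
    "LeftUpLeg", "LeftLeg", "LeftFoot",
    "RightUpLeg", "RightLeg", "RightFoot",
    "LeftShoulder", "LeftArm", "LeftForeArm", "LeftHand",
    "RightShoulder", "RightArm", "RightForeArm", "RightHand",
]
```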
After preparing the dataset, simply run `python train`. It will use the default hyper-parameters to train the model and will save the trained model in the `training` directory. More options are available in `option_parser.py`. You can monitor the training progress with TensorBoard by running `tensorboard --logdir=./training/logs/`.
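If you would rather inspect progress from a script than from the browser, the event files that TensorBoard reads can also be loaded with its Python API. The snippet below is only a sketch: the scalar tag names depend on the model, and if the logs are written into per-run subdirectories you will need to point the accumulator at one of them.

```python
# Sketch: read training scalars directly from the TensorBoard event files.
# Point the accumulator at ./training/logs/ (or a run subdirectory inside it).
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("./training/logs/")
ea.Reload()

scalar_tags = ea.Tags()["scalars"]
print("available scalar tags:", scalar_tags)
for tag in scalar_tags:
    events = ea.Scalars(tag)
    if events:
        print(f"{tag}: last value {events[-1].value:.4f} at step {events[-1].step}")
```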
The steps above are everything needed to retrain the models.
The code in the utils directory is mostly taken from Holden et al. [2016].
In addition, part of the MoCap dataset is taken from Adobe Mixamo and from the work of Xia et al.; however, these MoCap datasets have been removed from the git HEAD.
The main deep editing operation provided here, motion retargeting, is based on work published at SIGGRAPH 2020 by Kfir Aberman, Peizhuo Li and Yijia Weng: Skeleton-Aware Networks for Deep Motion Retargeting (Project | Paper | Video).
@article{aberman2020skeleton,
author = {Aberman, Kfir and Li, Peizhuo and Sorkine-Hornung, Olga and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
title = {Skeleton-Aware Networks for Deep Motion Retargeting},
journal = {ACM Transactions on Graphics (TOG)},
volume = {39},
number = {4},
pages = {62},
year = {2020},
publisher = {ACM}
}