Introduction

Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks. It provides reference implementations of various sequence-to-sequence models, including the convolutional and Transformer architectures used by the pre-trained models listed below.

Fairseq features:

  • multi-GPU (distributed) training on one machine or across multiple machines
  • fast beam search generation on both CPU and GPU
  • large mini-batch training even on a single GPU via delayed updates (see the command sketch after this list)
  • fast half-precision floating point (FP16) training
  • extensible: easily register new models, criterions, and tasks
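
As a concrete illustration of the delayed-updates and FP16 features above, a hypothetical single-GPU training command might look like the following. The data directory, architecture and hyperparameter values are placeholders rather than recommended settings, and the exact flag set may vary across fairseq versions.

# Hypothetical settings; --update-freq 16 simulates a 16x larger batch by delaying parameter updates
$ python train.py data-bin/wmt14.en-fr \
  --arch fconv_wmt_en_fr \
  --fp16 --update-freq 16 \
  --max-tokens 4000 \
  --save-dir checkpoints/fconv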

We also provide pre-trained models for several benchmark translation and language modeling datasets.


Requirements and Installation

Currently fairseq requires PyTorch version >= 0.4.0. Please follow the instructions here: https://github.com/pytorch/pytorch#installation.
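
A quick way to confirm that the installed PyTorch version meets this requirement (a minimal sanity check, not part of the official instructions):

$ python -c "import torch; print(torch.__version__)"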

If you use Docker, make sure to increase the shared memory size either with --ipc=host or --shm-size as command line options to nvidia-docker run.
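
For example, a container could be started with a larger shared memory segment as follows; the image name here is only illustrative:

$ nvidia-docker run --ipc=host -it --rm pytorch/pytorch bash
# or set an explicit shared-memory size instead:
$ nvidia-docker run --shm-size=8g -it --rm pytorch/pytorch bash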

After PyTorch is installed, you can install fairseq with:

pip install -r requirements.txt
python setup.py build develop
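
If the build succeeded, importing the package should work; this is a minimal check rather than part of the official installation steps:

$ python -c "import fairseq; print('fairseq imported successfully')"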

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
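
As a rough sketch of that workflow, a typical run binarizes data with preprocess.py, trains with train.py, and then generates translations as in the Usage section below. The language pair, file prefixes and hyperparameters here are placeholders, and the exact flags depend on the installed fairseq version:

# Binarize a parallel corpus (placeholder paths):
$ python preprocess.py --source-lang en --target-lang fr \
  --trainpref data/train --validpref data/valid --testpref data/test \
  --destdir data-bin/my-corpus

# Train a model on the binarized data:
$ python train.py data-bin/my-corpus --arch fconv_wmt_en_fr \
  --max-tokens 4000 --save-dir checkpoints/my-model

# Translate the test set with the trained model:
$ python generate.py data-bin/my-corpus \
  --path checkpoints/my-model/checkpoint_best.pt \
  --beam 5 --batch-size 128 --remove-bpe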

Pre-trained Models

We provide the following pre-trained models and pre-processed, binarized test sets:

Translation

| Description | Dataset | Model | Test set(s) |
| --- | --- | --- | --- |
| Convolutional (Gehring et al., 2017) | WMT14 English-French | download (.tar.bz2) | newstest2014: download (.tar.bz2); newstest2012/2013: download (.tar.bz2) |
| Convolutional (Gehring et al., 2017) | WMT14 English-German | download (.tar.bz2) | newstest2014: download (.tar.bz2) |
| Convolutional (Gehring et al., 2017) | WMT17 English-German | download (.tar.bz2) | newstest2014: download (.tar.bz2) |
| Transformer (Ott et al., 2018) | WMT14 English-French | download (.tar.bz2) | newstest2014 (shared vocab): download (.tar.bz2) |
| Transformer (Ott et al., 2018) | WMT16 English-German | download (.tar.bz2) | newstest2014 (shared vocab): download (.tar.bz2) |
| Transformer (Edunov et al., 2018; WMT'18 winner) | WMT'18 English-German | download (.tar.bz2) | See NOTE in the archive |

Language models

| Description | Dataset | Model | Test set(s) |
| --- | --- | --- | --- |
| Convolutional (Dauphin et al., 2017) | Google Billion Words | download (.tar.bz2) | download (.tar.bz2) |
| Convolutional (Dauphin et al., 2017) | WikiText-103 | download (.tar.bz2) | download (.tar.bz2) |

Stories

| Description | Dataset | Model | Test set(s) |
| --- | --- | --- | --- |
| Stories with Convolutional Model (Fan et al., 2018) | WritingPrompts | download (.tar.bz2) | download (.tar.bz2) |

Usage

Generation with the binarized test sets can be run in batch mode as follows, e.g. for WMT 2014 English-French on a GTX-1080ti:

$ curl https://s3.amazonaws.com/fairseq-py/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin
$ curl https://s3.amazonaws.com/fairseq-py/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin
$ python generate.py data-bin/wmt14.en-fr.newstest2014  \
  --path data-bin/wmt14.en-fr.fconv-py/model.pt \
  --beam 5 --batch-size 128 --remove-bpe | tee /tmp/gen.out
...
| Translated 3003 sentences (96311 tokens) in 166.0s (580.04 tokens/s)
| Generate test with beam=5: BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)

# Scoring with score.py:
$ grep ^H /tmp/gen.out | cut -f3- > /tmp/gen.out.sys
$ grep ^T /tmp/gen.out | cut -f2- > /tmp/gen.out.ref
$ python score.py --sys /tmp/gen.out.sys --ref /tmp/gen.out.ref
BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)

Join the fairseq community

Citation

If you use the code in your paper, then please cite it as:

@inproceedings{gehring2017convs2s,
  author    = {Gehring, Jonas and Auli, Michael and Grangier, David and Yarats, Denis and Dauphin, Yann N},
  title     = "{Convolutional Sequence to Sequence Learning}",
  booktitle = {Proc. of ICML},
  year      = 2017,
}

License

fairseq(-py) is BSD-licensed. The license applies to the pre-trained models as well. We also provide an additional patent grant.

Credits

This is a PyTorch version of fairseq, a sequence-to-sequence learning toolkit from Facebook AI Research. The original authors of this reimplementation are (in no particular order) Sergey Edunov, Myle Ott, and Sam Gross.


About

A fork of fairseq, migrated to DVC and used for NLP research.
