1 Billion Word Language Model Benchmark

paper | code | output probabilities

The purpose of the project is to make available a standard training and test setup for language modeling experiments.

The training/held-out data was produced from the WMT 2011 News Crawl data using a combination of Bash shell and Perl scripts distributed here.
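The repository's Bash and Perl scripts are the authoritative recipe; purely to illustrate the kind of deterministic processing such scripts perform (duplicate removal, fixed-seed shuffling, and a training/held-out split), here is a hypothetical Python sketch. All filenames and the split fraction are made up for the example.

```python
import random

def split_corpus(in_path, train_path, heldout_path,
                 heldout_fraction=0.01, seed=0):
    """Illustrative only: dedup sentences, shuffle with a fixed seed,
    and split into training and held-out portions. The benchmark's
    Bash/Perl scripts define the real, authoritative recipe."""
    with open(in_path, encoding="utf-8") as f:
        # dict.fromkeys removes duplicate sentences while keeping order
        sentences = list(dict.fromkeys(line.rstrip("\n") for line in f))
    random.Random(seed).shuffle(sentences)  # fixed seed => reproducible
    cut = int(len(sentences) * heldout_fraction)
    with open(heldout_path, "w", encoding="utf-8") as f:
        f.writelines(s + "\n" for s in sentences[:cut])
    with open(train_path, "w", encoding="utf-8") as f:
        f.writelines(s + "\n" for s in sentences[cut:])

# Hypothetical invocation on a hypothetical input file:
split_corpus("news-crawl-2011.txt", "training.txt", "heldout.txt")
```

Because every step is deterministic given the same input and seed, anyone rerunning the pipeline obtains byte-identical splits, which is what makes the reproducibility claim below possible.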

This also means that your results on this data set are reproducible by the research community at large.

Besides the scripts needed to rebuild the training/held-out data, the project also makes available log-probability values for each word in each of ten held-out data sets, for each of the following baseline models (a sketch of turning these per-word values into perplexity follows the list):

  • unpruned Katz (1.1B n-grams),
  • pruned Katz (~15M n-grams),
  • unpruned Interpolated Kneser-Ney (1.1B n-grams),
  • pruned Interpolated Kneser-Ney (~15M n-grams)
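
Given these per-word log-probabilities, scoring a model reduces to summing them over a held-out set and exponentiating the negative per-word average. Below is a minimal Python sketch, assuming a hypothetical file format of one whitespace-separated `word log10_prob` pair per line; the files actually distributed with the benchmark may be laid out differently, so adjust the parsing to match.

```python
def perplexity(logprob_path, log_base=10.0):
    """Perplexity from per-word log-probabilities.

    Assumes a hypothetical format: one `word log_prob` pair per
    line, whitespace-separated. Adjust parsing to the real files.
    """
    total_logprob = 0.0
    num_words = 0
    with open(logprob_path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue  # skip blank or malformed lines
            total_logprob += float(parts[-1])  # log-prob is the last field
            num_words += 1
    # perplexity = base ** (negative average log-probability per word)
    return log_base ** (-total_logprob / num_words)

# Hypothetical path; substitute one of the released held-out files.
print(perplexity("heldout-00000.katz.logprobs"))
```

When comparing models across all ten held-out sets, pool the log-probabilities and word counts over the sets before exponentiating, rather than averaging ten separate perplexities, since the sets need not be the same size.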

Happy benchmarking!

The home page for this project is http://code.google.com/p/1-billion-word-language-modeling-benchmark/ which links to the code, a paper, and a discussion group.

This output is from revision 13 of the code run with Perl v5.16.3; Perl v5.14.2 gives the same output.

Paper: http://arxiv.org/abs/1312.3005