Google Books Ngrams Dataset for Machine Learning

Install DagsHub:

pip install dagshub

To stream this dataset directly from DagsHub:

from dagshub.streaming import DagsHubFilesystem

# Mount the dataset repo; files are fetched lazily when accessed
fs = DagsHubFilesystem(".", repo_url="https://dagshub.com/DagsHub-Datasets/google-ngrams-dataset")

# List the connected bucket's contents through the streaming filesystem
fs.listdir("s3://datasets.elasticmapreduce/ngrams/books/")

Description

N-grams are fixed-size tuples of items; in this case the items are words extracted from the Google Books corpus. The n specifies the number of elements in the tuple, so a 5-gram contains five words. The n-grams in this dataset were produced by sliding a fixed-size window over the text of books and emitting a record for each new token.
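The sliding-window extraction described above can be sketched in a few lines of Python. This is an illustrative toy, not the pipeline Google used to build the dataset:

```python
def ngrams(tokens, n):
    """Slide a window of size n over the token list,
    yielding one n-gram tuple per starting position."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the quick brown fox jumps".split()
print(ngrams(tokens, 2))
# [('the', 'quick'), ('quick', 'brown'), ('brown', 'fox'), ('fox', 'jumps')]
```

A text of length L yields L − n + 1 n-grams, which is why higher-order n-gram files in the dataset are so much larger than the unigram files: each window position produces a record.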

Additional information

Update frequency

Not updated

Managed by

Not managed

License

Creative Commons Attribution 3.0 Unported License

Related datasets

Common Screens

Helpful Sentences from Reviews

Humor Detection from Product Question Answering Systems

Japanese Tokenizer Dictionaries
