This repository contains the code to import and integrate the book and rating data that we work with. It imports and integrates data from several sources into a single PostgreSQL database; import scripts are primarily in Python, with Rust code for high-throughput processing of raw data files.
If you use these scripts in any published research, cite our paper:
Michael D. Ekstrand, Mucun Tian, Mohammed R. Imran Kazi, Hoda Mehrpouyan, and Daniel Kluver. 2018. Exploring Author Gender in Book Rating and Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM, pp. 242–250. DOI:10.1145/3240323.3240373. arXiv:1808.07586v1 [cs.IR].
Note: the limitations section of the paper contains important information about the limitations of the data these scripts compile. Do not use these data or tools without understanding those limitations. In particular, VIAF's gender information is incomplete and, in a number of cases, incorrect.
In addition, several of the data sets integrated by this project come from other sources with their own publications. If you use any of the rating data, cite the appropriate original source paper. For each data set below, we have provided a link to the page that describes the appropriate citation.
The import scripts require a PostgreSQL server with the `orafce`, `pg_prewarm` (from the PostgreSQL Contrib package), and `uuid-ossp` extensions installed.
It is best if you do not store the data files on the same disk as your PostgreSQL database.
The `environment.yml` file defines an Anaconda environment that contains all the required packages except for the PostgreSQL server. It can be set up with:

```sh
conda env create -f environment.yml
```
All scripts read database connection info from the standard PostgreSQL client environment variables (`PGHOST`, `PGPORT`, `PGDATABASE`, `PGUSER`, and `PGPASSWORD`):
Alternatively, they will read from
We use Data Version Control (`dvc`) to script the import and wire its various parts together. A complete re-run, not including file download time, takes approximately 8 hours on our hardware (24-core 2GHz Xeon, 128GiB RAM, spinning disks).
After creating your database, initialize the extensions (as the database superuser):
```sql
CREATE EXTENSION orafce;
CREATE EXTENSION pg_prewarm;
CREATE EXTENSION "uuid-ossp";
```
The default PostgreSQL performance configuration settings will probably not be very effective; we recommend turning on parallelism and increasing work memory, at a minimum.
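As one possible starting point, a superuser can apply settings along these lines (the values here are illustrative assumptions, not recommendations tuned for this workload; adjust for your hardware):

```sql
-- Illustrative starting values only; tune for your own hardware.
ALTER SYSTEM SET max_parallel_workers_per_gather = 8;  -- allow parallel query plans
ALTER SYSTEM SET work_mem = '256MB';                   -- per-operation sort/hash memory
ALTER SYSTEM SET maintenance_work_mem = '1GB';         -- speeds up index builds
SELECT pg_reload_conf();                               -- apply without a restart
```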
This imports the following data sets:
data (auto-downloaded). If you use this data, cite the paper.
Several of these files can be auto-downloaded with the DVC scripts; others will need to be manually downloaded.
You can run the entire import process with `dvc repro`. Individual steps can be run with their corresponding `.dvc` files (e.g. `dvc repro step.dvc`).
The import code consists of Python, Rust, and SQL code, wired together with DVC.
Python scripts live under `scripts`, as a Python package. They should not be launched directly, but through `run.py`, which will make sure the environment is set up properly for them:

```sh
python run.py sql-script [options] script.sql
```
In order to allow DVC to be aware of current database state, we use a little bit of an unconventional layout for many of our DVC scripts. Many steps have two `.dvc` files with associated outputs:
- `step.dvc` runs the import stage `step`.
- `step.transcript` is (consistent) output from running `step`, recording the actions taken. It is registered with DVC as the output of `step.dvc`.
- `step.status.dvc` is an always-changed DVC stage that depends on `step.transcript` and outputs `step.status`, to check the current status in the database of that import stage.
- `step.status` is an uncached output (so it isn't saved with DVC, and we also ignore it from Git) that is registered as the output of `step.status.dvc`. It contains a stable status dump from the database, to check whether `step` is actually in the database or has changed in a meaningful way.
Steps that depend on `step` then depend on `step.status`.
The reason for this somewhat bizarre layout is that if we just wrote the output files, and the database was reloaded or corrupted, the DVC status-checking logic would not be able to keep track of it. This double-file design allows us to make subsequent steps depend on the actual results of the import, not our memory of the import in the Git repository.
init.status is an initial check for database initialization, and forces the creation of the
meta-structures used for tracking stage status. Everything touching the database should depend on it,
directly or indirectly.
Import steps are tracked in the
stage_status table in the database. For completed stages, this can
include a key (checksum, UUID, or other identifier) to identify a 'version' of the stage. Stages
can also have dependencies, which are solely used for computing the status of a stage (all actual
dependency relationships are handled by DVC):
- `stage_deps` tracks stage-to-stage dependencies, to say that one stage used another as input.
- `stage_file` tracks stage-to-file dependencies, to say that a stage used a file as input.

The `source_file` table tracks input file checksums.
Projects using the book database can also use
stage_status to obtain data version information, to
see if they are up-to-date.
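For example, a downstream project could check the recorded stage versions with a query along these lines (the column names here are hypothetical; the real definitions live in the schema files):

```sql
-- Hypothetical query; consult the schema definitions for actual column names.
SELECT stage_name, stage_key   -- each stage and its recorded 'version' key
FROM stage_status
ORDER BY stage_name;
```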
The `bookdata` package contains Python utility code, and the `src` directory contains a number of utility modules for use in the Rust code. To the extent reasonable, we have tried to mirror design patterns and function names.
The general import philosophy is that we import the data into a PostgreSQL table in a raw form, only doing those conversions necessary to be able to access it with PostgreSQL commands and functions. We then extract information from this raw form into other tables and materialized views to enable relational queries.
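A minimal sketch of this pattern, with hypothetical table and view names (the actual raw tables depend on each source's format):

```sql
-- Hypothetical raw table: store each source record as-is, in JSONB.
CREATE TABLE raw_record (
    rec_id SERIAL PRIMARY KEY,
    rec_data JSONB NOT NULL
);

-- Extract a relational view from the raw form for querying.
CREATE MATERIALIZED VIEW record_title AS
SELECT rec_id, rec_data->>'title' AS title
FROM raw_record
WHERE rec_data ? 'title';
```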
Each data set's import process follows the same general steps. First, the raw data is imported via DVC steps under `import/`; this may be multiple steps (for example, OpenLibrary has a separate import step for each file). Actual import is usually handled by Rust or Python code.
Data integration then happens after the data sets are indexed (mostly).
Each data set comes with its own identification scheme for books, such as ISBNs, ASINs, or source-specific record identifiers.
We integrate these through two steps. First, we map ISBNs to numeric ISBN IDs. The mapping table contains every ISBN (or ISBN-like thing, such as an ASIN) we have seen and associates it with a unique identifier.
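A lookup against this mapping might look like the following (the table and column names are hypothetical, for illustration):

```sql
-- Hypothetical lookup: resolve an ISBN string to its numeric ID.
SELECT isbn_id
FROM isbn_id_map   -- illustrative name for the ISBN identifier table
WHERE isbn = '9780261102217';
```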
Second, we map ISBN IDs to clusters with the `isbn_cluster` table. A cluster is a collection of related ISBNs, such as the different editions of a work. They correspond to GoodReads or OpenLibrary 'works' (in fact, when a GoodReads or OpenLibrary work is available, it is used to generate the cluster).
This allows us to connect ratings to metadata with maximal link coverage, by pulling in metadata across the whole book cluster.
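For example, ratings keyed by ISBN can pick up cluster-level metadata with a join roughly like this (the `ratings` table and all column names are hypothetical; `isbn_cluster` is the table described above):

```sql
-- Hypothetical join: attach each rating's book cluster via its rated ISBN,
-- so metadata from any edition in that cluster can be linked to the rating.
SELECT r.user_id, r.rating, c.cluster
FROM ratings r
JOIN isbn_cluster c ON c.isbn_id = r.isbn_id;
```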