hackathonf23-Creativity_Underflow - TextPSG Reproducibility Report

This document provides a comprehensive guide to reproducing the TextPSG project results for the hackathonf23-Creativity_Underflow. Please follow the instructions in each section carefully to ensure a successful replication of the project environment, execution, and results.

1. Code Repository

The runnable codebase and project files are available in this repository (see the clone instructions below).

2. Configuration Instructions

The project settings are defined in:

  • Configuration File: settings.py

Repository Cloning

Clone the repository using Git:

git clone https://dagshub.com/ML-Purdue/hackathonf23-Creativity_Underflow.git

3. Data and Artifacts

Locate all necessary data and artifacts in the DVC file:

  • Data File: data.dvc

4. Setting Up the Environment

Software Environment

Create and activate a new conda environment with the following commands:

conda create -n textpsg python=3.10
conda activate textpsg
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
conda install cython
pip install -r requirements.txt
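
After installation, a quick sanity check can confirm that the core packages resolve in the active environment. This is a minimal sketch; the package list mirrors the conda commands above:

```python
import importlib.util

def find_missing(packages):
    """Return the subset of packages that cannot be found by the importer."""
    return [name for name in packages if importlib.util.find_spec(name) is None]

# Packages installed by the commands above.
print(find_missing(["torch", "torchvision", "torchaudio"]))
```

An empty list means the environment is ready for the steps below.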

Java Dependencies

Download the required Java dependencies: Stanford CoreNLP (including the English KBP models jar) and the JSON jar referenced in the commands below.

Note: Place the English KBP jar file into the StanfordCoreNLP directory after downloading and unzipping.

Hardware Requirements

Ensure the following hardware specifications are met:

  • GPU: 1x NVIDIA GeForce RTX 3090
  • Memory: Minimum 96GB RAM
  • Storage: 500GB available space

5. Model Training Process

Execute the following steps to train the model:

  1. Path Configuration: Replace "PATH TO OUTPUT FILE" and "PATH TO ANOTATIONS FILE" in text_preprocessing/TextPreprocessingCoreNLP.java and text_preprocessing/text_preprocessing_sng_parser.py.
  2. Text Graph Generation: Use the provided Java and Python scripts for preprocessing. Run text_preprocessing/text_preprocessing_sng_parser.py and text_preprocessing/TextPreprocessingCoreNLP.java; see below for details on running the Java file.

On Windows:

>>> pwd
'~/hackathonf23-Creativity_Underflow/'
>>> javac -encoding ISO-8859-1 -cp "<Path_To_Stanford_CoreNLP>\*;<Path_to_JSON_jar>;" text_preprocessing/TextPreprocessingCoreNLP.java
>>> java -cp "<Path_To_Stanford_CoreNLP>\*;<Path_to_JSON_jar>;" text_preprocessing/TextPreprocessingCoreNLP.java

On Linux:

>>> pwd
'~/hackathonf23-Creativity_Underflow/'
>>> javac -cp "<Path_To_Stanford_CoreNLP>/*:<Path_to_JSON_jar>" text_preprocessing/TextPreprocessingCoreNLP.java
>>> java -cp "<Path_To_Stanford_CoreNLP>/*:<Path_to_JSON_jar>" text_preprocessing/TextPreprocessingCoreNLP.java
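
The only difference between the two invocations above is the classpath separator: `;` on Windows, `:` on Linux. If you script the invocation rather than typing it, Python's `os.pathsep` selects the right separator automatically. A minimal sketch, using the same placeholder paths as the commands above:

```python
import os

def build_classpath(corenlp_dir, json_jar):
    """Join classpath entries with the platform's separator (';' or ':')."""
    return os.pathsep.join([os.path.join(corenlp_dir, "*"), json_jar])

# Placeholders, exactly as in the commands above.
print(build_classpath("<Path_To_Stanford_CoreNLP>", "<Path_to_JSON_jar>"))
```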
  3. Embeddings Generation: Generate embeddings for training and validation using python generateEmbeddings.py

  4. Embeddings Storage: Store the embeddings in a numpy memmapped file using python memmapEmbeddings.py

  5. Model Training: Begin the training process with python train.py
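
The embeddings-storage step writes embeddings to a numpy memmap so training can read rows lazily instead of loading everything into RAM. A minimal sketch of the write/read round trip; the shape and dtype here are assumptions, and the project's memmapEmbeddings.py defines the real ones:

```python
import os
import tempfile
import numpy as np

# Assumed layout: N embeddings of dimension D (placeholder values).
N, D = 100, 512
path = os.path.join(tempfile.mkdtemp(), "embeddings.dat")

# Write: back the array with a file on disk and flush it.
out = np.memmap(path, dtype=np.float32, mode="w+", shape=(N, D))
out[:] = np.random.rand(N, D).astype(np.float32)
out.flush()

# Read: reopen read-only with the same shape/dtype; rows load on access.
emb = np.memmap(path, dtype=np.float32, mode="r", shape=(N, D))
print(emb.shape)
```

Note that a raw memmap file stores no shape or dtype metadata, so the reader must use the same `shape` and `dtype` as the writer.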

6. Our Paper

  • You can find our report here.
  • The original paper can be found here.

7. Citations

When referencing our work, please use the following citation:

@article{zhao2023textpsg,
      title={TextPSG: Panoptic Scene Graph Generation from Textual Descriptions},
      author={Chengyang Zhao and Yikang Shen and Zhenfang Chen and Mingyu Ding and Chuang Gan},
      year={2023},
      eprint={2310.07056},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}