Source Code Classification

This is the repository of the Natural Language to Machine Learning (NL2ML) project of the Laboratory of Methods for Big Data Analysis at Higher School of Economics (HSE LAMBDA).

  • The project's official repo is stored on GitLab (HSE LAMBDA repository): https://gitlab.com/lambda-hse/nl2ml
  • The project's full description is stored on Notion: https://www.notion.so/NL2ML-Corpus-1ed964c08eb049b383c73b9728c3a231
  • The project's experiments are stored on DAGsHub: https://dagshub.com/levin/source_code_classification

Project Goals

Short-Term Goal

To build a model that classifies a source code chunk and specifies where exactly the detected class occurs within the chunk (tag segmentation).

Long-Term Goal

To build a model that generates code from a short raw English task description given as input.

Repository Description

This repository contains the instruments which the project's team has been using to label source code chunks with Knowledge Graph vertices and to train models to recognize these vertices in the future. By a Knowledge Graph vertex we mean an elementary part of an ML pipeline. The current latest version of the Knowledge Graph contains the following high-level vertices: ['import', 'data_import', 'data_export', 'preprocessing', 'visualization', 'model', 'deep_learning_model', 'train', 'predict'].
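The label set can be thought of as a flat list of vertex names attached to code chunks. A minimal sketch, where only the vertex names come from the Knowledge Graph above and the example pairing is hypothetical:

```python
# Vertex names from the current Knowledge Graph version (see above).
KNOWLEDGE_GRAPH_VERTICES = [
    "import", "data_import", "data_export", "preprocessing",
    "visualization", "model", "deep_learning_model", "train", "predict",
]

# Hypothetical labelled example: a code chunk paired with one of the vertices.
labelled_chunk = ("import pandas as pd", "import")
assert labelled_chunk[1] in KNOWLEDGE_GRAPH_VERTICES
```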

Data Download

To download the project data and models:

  1. Clone this repository
  2. Install DVC from https://dvc.org/doc/install
  3. Run dvc pull data (a programmatic alternative is sketched below). Note: if dvc pull [folder_to_pull] fails, try dvc pull [folder_to_pull] --jobs 1
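Individual DVC-tracked files can also be read programmatically through dvc.api. A minimal sketch, where the tracked path is hypothetical and should be replaced with a real path from the repository:

```python
# Sketch: read a single DVC-tracked file without a full checkout.
import dvc.api

content = dvc.api.read(
    "data/example.csv",  # hypothetical path inside the repo
    repo="https://dagshub.com/levin/source_code_classification",
    mode="r",
)
print(content[:200])
```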

Contents:

The instruments which we have been using to reach the project goals are: notebook parsing via the Kaggle and GitHub APIs, data preparation, regex labelling, model training, model validation, model weights/coefficients analysis, error analysis, and synonym analysis.

nl2ml_notebook_parser.py - a script for parsing Kaggle notebooks and processing them into JSON/CSV/Pandas.
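The script's actual interface is defined in the file itself; as a rough illustration of the underlying idea, a .ipynb file is plain JSON whose code cells can be collected into a pandas DataFrame (the function below is illustrative, not the script's API):

```python
# Illustrative sketch: extract code cells from a Jupyter notebook into a
# DataFrame using the project's 'code_block' column convention.
import json
import pandas as pd

def notebook_to_dataframe(path: str) -> pd.DataFrame:
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    chunks = [
        "".join(cell["source"])
        for cell in nb.get("cells", [])
        if cell.get("cell_type") == "code"
    ]
    return pd.DataFrame({"code_block": chunks})

notebook_to_dataframe("example.ipynb").to_csv(
    "chunks.csv", sep=",", encoding="utf-8", index=False
)
```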

bert_distances.ipynb - a notebook with BERT experiments exploring the distances between BERT embeddings of tokenized source code chunks.
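The kind of comparison involved looks roughly as follows; this is a sketch assuming the Hugging Face transformers API with bert-base-uncased, whereas the notebook's actual model and tokenization may differ:

```python
# Sketch: cosine distance between mean-pooled BERT embeddings of code chunks.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # model choice is an assumption
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(chunk: str) -> torch.Tensor:
    inputs = tokenizer(chunk, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state.mean(dim=1).squeeze(0)

a = embed("import pandas as pd")
b = embed("df = pd.read_csv('train.csv')")
print(1 - torch.nn.functional.cosine_similarity(a, b, dim=0).item())
```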

bert_classifier.ipynb - a notebook with preprocessing and training of a BERT pipeline.

regex.ipynb - a notebook that creates labels for code chunks with regular expressions.
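In spirit, regex labelling maps patterns to Knowledge Graph vertices and tags any chunk a pattern matches; the patterns below are illustrative, not the notebook's actual ones:

```python
# Sketch: weak labelling of code chunks via regex patterns (illustrative).
import re

PATTERNS = {
    "import": re.compile(r"^\s*(?:import|from)\s+\w+", re.M),
    "data_import": re.compile(r"read_csv|read_json|load_dataset"),
    "visualization": re.compile(r"plt\.|sns\.|\.plot\("),
    "train": re.compile(r"\.fit\("),
    "predict": re.compile(r"\.predict\("),
}

def label_chunk(chunk: str) -> list[str]:
    return [tag for tag, pattern in PATTERNS.items() if pattern.search(chunk)]

print(label_chunk("model.fit(X_train, y_train)"))  # ['train']
```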

logreg_classifier.ipynb - a notebook for training a logistic regression model on the regex labels with TF-IDF features and analyzing the outputs.
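A minimal sketch of such a pipeline with scikit-learn, assuming a labelled CSV with a hypothetical 'tag' column alongside the 'code_block' column required by the conventions below:

```python
# Sketch: TF-IDF features + logistic regression over regex-produced labels.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

df = pd.read_csv("labelled_chunks.csv", sep=",", encoding="utf-8")  # hypothetical file
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(df["code_block"], df["tag"])  # 'tag' column name is an assumption
print(clf.predict(["plt.plot(history.history['loss'])"]))
```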

Comments vs commented code.ipynb - a notebook with a model that distinguishes natural-language comments from commented-out source code.

github_dataset.ipynb - a notebook for opening the github_dataset.

predict_tag.ipynb - a notebook for predicting a class label (tag) with any model.

svm_classifier.ipynb - a notebook for training an SVM (superseded by svm_train.py) and analyzing SVM outputs.

svm_train.py - a script for training the SVM model.
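The SVM variant follows the same shape as the logistic regression sketch above, with a linear SVM swapped in (again a sketch, not svm_train.py's actual options):

```python
# Sketch: the same TF-IDF pipeline with a linear SVM classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

svm_clf = make_pipeline(TfidfVectorizer(), LinearSVC())
# svm_clf.fit(df["code_block"], df["tag"])  # same hypothetical data as above
```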

Conventions:

  • Input CSVs: encoding='utf-8', sep=',', and the code column must be named 'code_block' (i.e. CODE_COLUMN == 'code_block') in all input CSVs
  • Knowledge Graphs: GRAPH_DIR has to follow the format './graph/graph_v{}.txt'.format(GRAPH_VER); a minimal loading sketch is shown below
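A minimal sketch of loading inputs under these conventions (the input filename and GRAPH_VER value are hypothetical):

```python
# Sketch: read an input CSV and a Knowledge Graph file per the conventions.
import pandas as pd

CODE_COLUMN = "code_block"
GRAPH_VER = 5  # hypothetical version number
GRAPH_DIR = "./graph/graph_v{}.txt".format(GRAPH_VER)

df = pd.read_csv("input.csv", sep=",", encoding="utf-8")
assert CODE_COLUMN in df.columns, "input CSV must contain a 'code_block' column"
with open(GRAPH_DIR, encoding="utf-8") as f:
    graph = f.read()
```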