
RelaxML

A template for an ML Backend that implements a REST API compatible with Label Studio

TL;DR

To use the template:

  1. Add the code that loads your model to RelaxML.init()
  2. In init(), also set self.model_version to a string that identifies your model
  3. Loop through the tasks in RelaxML.predict() and make predictions with your model
  4. Send the results of each prediction back to Label Studio using RelaxML.send_predictions()

If you want to see a working example, check out the squirrel-example branch.
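Putting the four steps together, a minimal sketch of a filled-in template might look like the following. Note that load_my_model, the weights path, and the prediction format are placeholders for illustration, not part of the template, and send_predictions is stubbed out so the sketch runs on its own:

```python
def load_my_model(path):
    # Placeholder for your real model-loading code (e.g. torch.load).
    return lambda data: {"label": "example"}

class RelaxML:
    def init(self):
        # Step 1: load your model once, at startup.
        self.model = load_my_model("weights.pt")
        # Step 2: any string that identifies this model build.
        self.model_version = "MyAwesomeModel:1.0"

    def predict(self, tasks):
        # Steps 3 and 4: loop through the tasks, run the model, and send
        # each prediction back to Label Studio.
        for task in tasks:
            result = self.model(task["data"])
            self.send_predictions(task, result)

    def send_predictions(self, task, result):
        # In the template this sends results to the Label Studio API;
        # stubbed here so the sketch is self-contained.
        print(f"task {task['id']}: {result}")
```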

To run the ML backend:

  1. Add any pip requirements to requirements.txt
  2. Run docker-compose up --build

Label Studio ML Backend

To help you automate your annotation workflow, Label Studio allows you to connect your project to an ML backend. Once connected, Label Studio can send tasks to the ML backend to get predictions on your data. An annotator can then use these predictions to inform and speed up their labeling process.

The ML backend needs to run a REST API that implements a few required endpoints.

API Endpoint Requirements

/health

REQUIRED

The health check is how Label Studio verifies that a given ML backend is up and running. Label Studio will call this endpoint many times, including when connecting the backend and before each set of tasks it sends over.

The /health endpoint should return a JSON dictionary that has the following format:

{
    "status": "UP",
    "v2": true
}
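In a plain Python handler this is a one-liner; how you wire it into a route depends on your framework (the template is a FastAPI service):

```python
def health():
    # Label Studio polls this endpoint frequently (on connect and before
    # each batch of tasks), so keep it cheap: no model work here.
    return {"status": "UP", "v2": True}
```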

/setup

REQUIRED

The /setup endpoint is a way for Label Studio to pass information about the project to the ML backend. This includes the hostname where Label Studio is running, access tokens, and schemas for the types of labels the project uses.

WARNING: the /setup endpoint, like the health check, will be called many times. Do not perform model initialization in the handler for this endpoint!

The /setup endpoint should return a JSON dictionary with the model version:

{
    "model_version": "MyAwesomeModel:1.0"
}

This string can be whatever you want to help you identify which model was used to create predictions.

/predict

REQUIRED

/predict is the endpoint Label Studio calls to get predictions for one or more tasks. You can either return the predictions in the response to this request, OR you can call a Label Studio endpoint to create predictions for a particular task.

The latter is preferred, as the former can lead to timeout errors on Label Studio's side while it waits for the response to the /predict endpoint.

Each task includes information about the data that needs to be annotated. If the data is a file (like an image or audio file), you will get a repo:// URI, which can be converted to a URL for downloading.
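A hedged sketch of the preferred pattern: build a payload per task and POST it back to Label Studio rather than returning it from /predict. The /api/predictions path and the payload shape are assumptions based on the Label Studio REST API, and the URL and token constants stand in for values you would receive via /setup:

```python
import requests

LABEL_STUDIO_URL = "http://localhost:8080"  # assumed; provided via /setup
API_TOKEN = "YOUR_TOKEN"                    # assumed; provided via /setup

def build_prediction_payload(task_id, result, model_version):
    # Payload shape assumed from the Label Studio /api/predictions API.
    return {"task": task_id, "result": result, "model_version": model_version}

def send_prediction(task_id, result, model_version):
    # POSTing each prediction back lets /predict return immediately,
    # avoiding timeouts on Label Studio's side for large batches.
    resp = requests.post(
        f"{LABEL_STUDIO_URL}/api/predictions",
        headers={"Authorization": f"Token {API_TOKEN}"},
        json=build_prediction_payload(task_id, result, model_version),
    )
    resp.raise_for_status()
```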

/train

OPTIONAL

You can set up Label Studio to automatically train a new model after annotations have been made. This is the endpoint that gets called when that happens.
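One way to handle /train without blocking the request is to kick retraining off on a background thread, so the endpoint returns promptly. This is an illustrative pattern, not the template's implementation, and retrain_model is a placeholder for your own training code:

```python
import threading

def retrain_model(annotations):
    # Placeholder: your real training loop goes here.
    print(f"retraining on {len(annotations)} annotations")

def train(annotations):
    # Training can be slow; acknowledge the request immediately and do
    # the actual work off the request thread.
    worker = threading.Thread(
        target=retrain_model, args=(annotations,), daemon=True
    )
    worker.start()
    return {"status": "started"}
```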
