docs: Update label-studio-ml-backend docs #6995

Draft · wants to merge 3 commits into base: develop

Changes from 1 commit
docs: Update label-studio-ml-backend docs
micaelakaplan committed Jan 29, 2025
commit b79d35b254d9ccd42fa7e5ac73ad38e278fd32b2
16 changes: 0 additions & 16 deletions docs/source/guide/ml_tutorials.html
@@ -335,22 +335,6 @@
title: YOLO ML Backend for Label Studio
type: guide
url: /tutorials/yolo.html
- categories:
- Computer Vision
- Video Classification
- Temporal Labeling
- LSTM
- hide_frontmatter_title: true
- hide_menu: true
- image: /tutorials/yolo-video-classification.png
- meta_description: Tutorial on how to use an example ML backend for Label Studio
- with TimelineLabels
- meta_title: TimelineLabels ML Backend for Label Studio
- order: 51
- tier: all
- title: TimelineLabels ML Backend for Label Studio
- type: guide
- url: /tutorials/yolo_timeline_labels.html
layout: templates
meta_description: Tutorial documentation for setting up a machine learning model with
predictions using PyTorch, GPT2, Sci-kit learn, and other popular frameworks.
8 changes: 4 additions & 4 deletions docs/source/tutorials/mmdetection-3.md
@@ -23,7 +23,7 @@ https://mmdetection.readthedocs.io/en/latest/
This example demonstrates how to use the MMDetection model with Label Studio to annotate images with bounding boxes.
The model is based on the YOLOv3 architecture with a MobileNetV2 backbone and trained on the COCO dataset.

![screenshot.png](/tutorials/screenshot.png)
![screenshot.png](screenshot.png)

## Before you begin

@@ -43,7 +43,7 @@ docker-compose up -d

See the tutorial in the documentation for building your own image and advanced usage:

https://github.com/HumanSignal/label-studio/blob/develop/docs/source/tutorials/object-detector.md
https://github.com/HumanSignal/label-studio/blob/master/docs/source/tutorials/object-detector.md


## Labeling config
@@ -85,7 +85,7 @@ In this example, you can combine multiple labels into one Label Studio annotatio
1. Clone the Label Studio ML Backend repository in your directory of choice:

```
git clone https://github.com/HumanSignal/label-studio-ml-backend
git clone https://github.com/heartexlabs/label-studio-ml-backend
cd label-studio-ml-backend/label_studio_ml/examples/mmdetection-3
```

@@ -166,4 +166,4 @@ gunicorn --preload --bind :9090 --workers 1 --threads 1 --timeout 0 _wsgi:app
```

* Use this guide to find out your access token: https://labelstud.io/guide/api.html
* You can use an increased value of the `SCORE_THRESHOLD` parameter when you see a lot of unwanted detections, or lower its value if you don't see any detections.
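The effect of `SCORE_THRESHOLD` can be illustrated with a short sketch. This is not the backend's actual code; the detection list and threshold values below are made up for illustration, assuming each prediction carries a confidence score:

```python
# Illustrative only: shows why raising SCORE_THRESHOLD removes
# low-confidence detections and lowering it surfaces more of them.
detections = [
    {"label": "car", "score": 0.92},
    {"label": "dog", "score": 0.41},
    {"label": "person", "score": 0.18},
]

def filter_detections(dets, score_threshold):
    """Keep only detections whose score meets the threshold."""
    return [d for d in dets if d["score"] >= score_threshold]

print(len(filter_detections(detections, 0.5)))  # higher threshold -> fewer boxes
print(len(filter_detections(detections, 0.1)))  # lower threshold -> more boxes
```

With a threshold of 0.5 only the 0.92-confidence detection survives; at 0.1 all three pass through.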
19 changes: 15 additions & 4 deletions docs/source/tutorials/segment_anything_2_image.md
@@ -131,16 +131,27 @@ cd label_studio_ml/examples/segment_anything_2_image
pip install -r requirements.txt
```

2. Download [`segment-anything-2` repo](https://github.com/facebookresearch/segment-anything-2) into the root directory. Install SegmentAnything model and download checkpoints using [the official Meta documentation](https://github.com/facebookresearch/segment-anything-2?tab=readme-ov-file#installation)

2. Download [`segment-anything-2` repo](https://github.com/facebookresearch/sam2) into the root directory. Install SegmentAnything model and download checkpoints using [the official Meta documentation](https://github.com/facebookresearch/sam2?tab=readme-ov-file#installation)
You should now have the following folder structure:

| root directory
|   | label-studio-ml-backend
|   |   | label_studio_ml
|   |   |   | examples
|   |   |   |   | segment_anything_2_image
|   | sam2
|   |   | sam2
|   |   | checkpoints
3. Then you can start the ML backend on the default port `9090`:

```bash
cd ../
label-studio-ml start ./segment_anything_2_image
cd ~/sam2
label-studio-ml start ../label-studio-ml-backend/label_studio_ml/examples/segment_anything_2_image
```

Due to [breaking changes from Meta](https://github.com/facebookresearch/sam2/blob/c2ec8e14a185632b0a5d8b161928ceb50197eddc/sam2/build_sam.py#L20), it is crucial that you run this command from the `sam2` directory in your root directory.

4. Connect running ML backend server to Label Studio: go to your project `Settings -> Machine Learning -> Add Model` and specify `http://localhost:9090` as a URL. Read more in the official [Label Studio documentation](https://labelstud.io/guide/ml#Connect-the-model-to-Label-Studio).
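As an alternative to clicking through the UI, the connection step above can be sketched against Label Studio's REST API. The host, token, and project ID below are placeholders, and the request itself is left commented out since it requires a running Label Studio server; consult the linked API guide for the authoritative endpoint details:

```python
import json

# All values below are placeholders -- substitute your own Label Studio
# host, API access token, and project ID.
LS_HOST = "http://localhost:8080"
API_TOKEN = "<your-access-token>"

payload = {
    "url": "http://localhost:9090",  # where the ML backend is listening
    "project": 1,                    # numeric ID of your Label Studio project
}
headers = {"Authorization": f"Token {API_TOKEN}"}

# To actually register the backend against a running server:
#   import requests
#   resp = requests.post(f"{LS_HOST}/api/ml", headers=headers, json=payload)
#   resp.raise_for_status()
print(json.dumps(payload, indent=2))
```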

## Running with Docker (coming soon)
4 changes: 2 additions & 2 deletions docs/source/tutorials/segment_anything_2_video.md
@@ -21,7 +21,7 @@ This guide describes the simplest way to start using **SegmentAnything 2** with
This repository is specifically for working with object tracking in videos. For working with images,
see the [segment_anything_2_image repository](https://github.com/HumanSignal/label-studio-ml-backend/tree/master/label_studio_ml/examples/segment_anything_2_image)

![sam2](/tutorials/Sam2Video.gif)
![sam2](./Sam2Video.gif)

## Before you begin

@@ -83,4 +83,4 @@ If you want to contribute to this repository to help with some of these limitati

## Customization

The ML backend can be customized by adding your own models and logic inside the `./segment_anything_2_video` directory.
6 changes: 6 additions & 0 deletions docs/source/tutorials/segment_anything_model.md
@@ -280,6 +280,12 @@ to get a better understanding of the workflow when annotating with SAM.

Use the `Alt` hotkey to toggle between positive and negative keypoint labels.

First, select either the auto-keypoints or auto-rectangle tool, then choose a label. You can now draw keypoints or rectangles on the canvas.
Watch this video:

https://github.com/user-attachments/assets/28acf6ae-a83f-4919-9722-3c82a4b6dab6


### Notes for AdvancedSAM

* _**Please watch [this video](https://drive.google.com/file/d/1OMV1qLHc0yYRachPPb8et7dUBjxUsmR1/view?usp=sharing) first**_
2 changes: 1 addition & 1 deletion docs/source/tutorials/tesseract.md
@@ -173,5 +173,5 @@ Example below:
![ls_demo_ocr](https://user-images.githubusercontent.com/17755198/165186574-05f0236f-a5f2-4179-ac90-ef11123927bc.gif)

Reference links:
- https://labelstud.io/blog/improve-ocr-quality-for-receipt-processing-with-tesseract-and-label-studio
- https://labelstud.io/blog/Improve-OCR-quality-with-Tesseract-and-Label-Studio.html
- https://labelstud.io/blog/release-130.html
86 changes: 66 additions & 20 deletions docs/source/tutorials/yolo.md

Large diffs are not rendered by default.