
Commit 374ac6e
Add link to netron display, minor cleanup and addition of library loading message
rcurrie committed Dec 8, 2024
1 parent a4520b8 commit 374ac6e
Showing 3 changed files with 14 additions and 4 deletions.
12 changes: 12 additions & 0 deletions README.md
@@ -1,35 +1,45 @@
# sims-web

Run [SIMS](https://github.com/braingeneers/SIMS) in the browser using [h5wasm](https://github.com/usnistgov/h5wasm) to read local AnnData (.h5ad) files and [ONNX](https://onnxruntime.ai/) to run the model.

# [Demo](https://braingeneers.github.io/sims-web)

Opens an h5ad file in the browser, runs a selected SIMS model, and displays the predictions.

You can view the default ONNX model via [netron](https://netron.app/?url=https://github.com/braingeneers/sims-web/raw/refs/heads/main/models/default.onnx).

![SIMS Web screenshot](screenshot.png?raw=true "SIMS Web Screenshot")

# Developing

Export a SIMS checkpoint to an ONNX file and a list of genes. Note: this assumes you have the SIMS repo checked out as a peer to this one so the script can load the model definition.

```
python scripts/sims-to-onnx.py models/default.ckpt
```
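
The exported model and gene list are what the browser app loads at runtime. A minimal sketch of consuming them with ONNX Runtime Web (the genes filename and the input/output wiring are assumptions; worker.js fetches the gene list as newline-separated text):

```
// Sketch: load the exported artifacts and run one inference.
// "models/default.genes" is a hypothetical name for the exported gene list.
import * as ort from "onnxruntime-web";

ort.env.wasm.wasmPaths = "https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/";

const genes = (await (await fetch("models/default.genes")).text()).split("\n");
const session = await ort.InferenceSession.create("models/default.onnx");

// One cell's expression vector, ordered to match the model's gene list
const cell = new Float32Array(genes.length);
const feeds = {
  [session.inputNames[0]]: new ort.Tensor("float32", cell, [1, genes.length]),
};
const results = await session.run(feeds);
console.log(results[session.outputNames[0]].data);
```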

Check the exported model for ONNX Runtime mobile/web usability

```
python -m onnxruntime.tools.check_onnx_model_mobile_usability models/default.onnx
```

Serve the web app and models locally

```
make serve
```

# Memory Requirements

[worker.js](worker.js) uses h5wasm's slice() to read data from the cell-by-gene matrix (i.e. X). As these data are typically stored on disk row major (i.e. all data for a cell is contiguous), we can process the sample incrementally, keeping memory requirements to a minimum. Reading cell by cell from a 5.3 GB h5ad file consumed just under 30 MB of browser memory. YMMV.
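
A minimal sketch of that access pattern (the mount path is an assumption; the worker makes the user's local file available in h5wasm's virtual filesystem):

```
// Sketch: stream X one cell (row) at a time so only a single
// expression vector is resident in memory at once.
const { FS } = await h5wasm.ready;
// Assumes the user's .h5ad has been mounted into the virtual
// filesystem, e.g. at /work/sample.h5ad (hypothetical path)
const h5file = new h5wasm.File("/work/sample.h5ad", "r");
const X = h5file.get("X"); // dense cell-by-gene matrix
const [numCells, numGenes] = X.shape;
for (let cell = 0; cell < numCells; cell++) {
  // Rows are contiguous on disk, so this reads one small chunk
  const row = X.slice([[cell, cell + 1], [0, numGenes]]);
  // ...inflate row to the model's gene order and run inference
}
h5file.close();
```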

# Performance

Processing a test sample with 2638 cells took 67 seconds in the browser vs. 34 seconds in Python on the same machine.

# References

[Open Neural Network Exchange (ONNX)](https://onnx.ai/)

[ONNX Runtime Web (WASM Backend)](https://onnxruntime.ai/docs/get-started/with-javascript/web.html)
@@ -38,6 +48,8 @@ Processing a test sample with 2638 cells took 67 seconds in the browser vs. 34 s

[ONNX Runtime Javascript Examples](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js)

[Netron ONNX Graph Display Website](https://netron.app/)

[Graphical ONNX Editor](https://github.com/ZhangGe6/onnx-modifier)

[Classify images in a web application with ONNX Runtime Web](https://onnxruntime.ai/docs/tutorials/web/classify-images-nextjs-github-template.html)

2 changes: 0 additions & 2 deletions scripts/sims-to-onnx.py
@@ -9,7 +9,6 @@
import numpy as np
import torch
import torch.onnx
-import anndata as ad
from onnx import helper, TensorProto
import sclblonnx as so

@@ -23,7 +22,6 @@
args = parser.parse_args()

model_name = args.checkpoint.split("/")[-1].split(".")[0]
-# model_path = "/".join(args.checkpoint.split("/")[:-1])
model_path = args.destination

# Load the checkpoint
4 changes: 2 additions & 2 deletions worker.js
@@ -47,7 +47,7 @@ function inflateGenes(

self.onmessage = async function (event) {
try {
-    self.postMessage({ type: "status", message: "Loading model" });
+    self.postMessage({ type: "status", message: "Loading libraries..." });
const { FS } = await h5wasm.ready;
console.log("h5wasm loaded");

@@ -59,7 +59,7 @@ self.onmessage = async function (event) {
const currentModelGenes = (await response.text()).split("\n");

// Load the model
-    self.postMessage({ type: "status", message: "Loading model" });
+    self.postMessage({ type: "status", message: "Loading model..." });
ort.env.wasm.wasmPaths =
"https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/";
// ort.env.numThreads = 16;
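
For context, these status messages are consumed on the main thread; a minimal sketch of the receiving side (the element id is hypothetical):

```
// Sketch: surface worker status updates ("Loading libraries...",
// "Loading model...") in the page. The "status" element is hypothetical.
const worker = new Worker("worker.js");
worker.onmessage = (event) => {
  if (event.data.type === "status") {
    document.getElementById("status").textContent = event.data.message;
  }
};
```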
