applied suggestions; updated readme
KevinMenden committed Dec 21, 2020
1 parent 76c9795 commit 0e6fe62
Showing 4 changed files with 57 additions and 44 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/python-publish.yml
@@ -19,7 +19,7 @@ jobs:
         python-version: "3.x"
     - name: Install dependencies
       run: |
-        python -m pip install --upgrade pip
+        python -m pip install --upgrade pip setuptools wheel twine
         pip install setuptools wheel twine
     - name: Build and publish
       env:
81 changes: 46 additions & 35 deletions README.md
@@ -25,29 +25,24 @@ Scaden overview. a) Generation of artificial bulk samples with known cell type c
of Scaden model ensemble on simulated training data. c) Scaden ensemble architecture. d) A trained Scaden model can be used
to deconvolve complex bulk mixtures.

-### 1. System requirements
-Scaden was developed and tested on Linux (Ubuntu 16.04 and 18.04). It was not tested on Windows or Mac, but should
-also be usable on these systems when installing with Pip or Bioconda. Scaden does not require any special
-hardware (e.g. GPU), however we recommend to have at least 16 GB of memory.
-
-Scaden requires Python 3. All package dependencies should be handled automatically when installing with pip or conda.
-
-### 2. Installation guide
+## Installation guide
Scaden can be easily installed on a Linux system, and should also work on Mac.
There are currently two options for installing Scaden, either using [Bioconda](https://bioconda.github.io/) or via [pip](https://pypi.org/).

-## pip
+### pip
To install Scaden via pip, simply run the following command:

`pip install scaden`


-## Bioconda
+### Bioconda
You can also install Scaden via bioconda, using:

`conda install -c bioconda scaden`

-## GPU
+### GPU
If you want to make use of your GPU, you will have to additionally install `tensorflow-gpu`.

For pip:
@@ -58,7 +53,7 @@ For conda:

`conda install tensorflow-gpu`

-## Docker
+### Docker
If you don't want to install Scaden at all, but rather use a Docker container, we provide that as well.
For every release, we provide two versions: one for CPU and one for GPU usage.
To pull the CPU container, use this command:
@@ -76,38 +71,54 @@ Additionally, we now provide a web tool:

It contains pre-generated training datasets for several tissues; all you need to do is upload your expression data. Please note that this is still in preview.

-### 3. Demo
-We provide several curated [training datasets](https://scaden.readthedocs.io/en/latest/datasets/) for Scaden. For this demo,
-we will use the human PBMC training dataset, which consists of 4 different scRNA-seq datasets and 32,000 samples in total.
-You can download it here:
-[https://figshare.com/s/e59a03885ec4c4d8153f](https://figshare.com/s/e59a03885ec4c4d8153f).
+## Usage
+We provide detailed instructions on how to use Scaden at our [Documentation page](https://scaden.readthedocs.io/en/latest/usage/).

+A deconvolution workflow with Scaden consists of four major steps:
+* data simulation
+* data processing
+* training
+* prediction
+
+If training data is already available, you can start at the data processing step. Otherwise you will first have to process scRNA-seq datasets and perform data simulation to generate a training dataset. As an example workflow, you can use Scaden's function `scaden example` to generate example data and go through the whole pipeline.
+
+First, make an example data directory and generate the example data:
+```bash
+mkdir example_data
+scaden example --out example_data/
+```
+This generates the files "example_counts.txt", "example_celltypes.txt" and "example_bulk_data.txt" in the "example_data" directory. Next, you can generate training data:

-For this demo, you will also need to download some test samples to perform deconvolution on, along with their associated labels.
-You can download the data we used for the Scaden paper here:
-[https://figshare.com/articles/Publication_Figures/8234030](https://figshare.com/articles/Publication_Figures/8234030)
+```bash
+scaden simulate --data example_data/ -n 100 --pattern "*_counts.txt"
+```
-We'll perform deconvolution on simulated samples from the data6k dataset. You can find the samples and labels in 'paper_data/figures/figure2/data/data6k_500_*'
-once you have downloaded this data from the link mentioned above.
+This generates 100 samples of training data in your current working directory. The file you need for your next step is called "data.h5ad". Now you need to perform the preprocessing using the training data and the bulk data file:
-The first step is to perform preprocessing on the training data. This is done with the following command:
+```bash
+scaden process data.h5ad example_data/example_bulk_data.txt
+```
-`scaden process pbmc_data.h5ad paper_data/figures/figure2/data/data6k_500_samples.txt`
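As a rough illustration of what the processing step conceptually has to do (an assumed simplification, not the real code: restrict training data to the genes shared with the bulk file and put samples on a comparable scale):

```python
import pandas as pd

# Hypothetical toy matrices: training and bulk data share only some genes.
train = pd.DataFrame({"GeneA": [1.0, 4.0], "GeneB": [2.0, 5.0], "GeneC": [3.0, 6.0]})
bulk = pd.DataFrame({"GeneB": [10.0], "GeneC": [20.0], "GeneD": [30.0]})

# Keep only genes present in both datasets, in a fixed order.
shared = sorted(set(train.columns) & set(bulk.columns))
train_proc, bulk_proc = train[shared], bulk[shared]

# Put every sample on a comparable scale (fractions of its total counts).
train_scaled = train_proc.div(train_proc.sum(axis=1), axis=0)
```

See the Scaden documentation for what `scaden process` actually does; this only sketches why both files are needed in the same command.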
+As a result, you should now have a file called "processed.h5ad" in your directory. Now you can perform training. The following command performs training for 5000 steps per model and saves the trained weights to the "model" directory, which will be created:
-This will generate a file called 'processed.h5ad', which we will use for training. The training data
-we have downloaded also contains samples from the data6k scRNA-seq dataset, so we have to exclude them from training
-to get a meaningfull test of Scaden's performance. The following command will train a Scaden ensemble for 5000 steps per model (recommended),
-and store it in 'scaden_model'. Data from the data6k dataset will be excluded from training. Depending on your machine, this can take about 10-20 minutes.
+```bash
+scaden train processed.h5ad --steps 5000 --model_dir model
+```
-`scaden train processed.h5ad --steps 5000 --model_dir scaden_model --train_datasets 'data8k donorA donorC'`
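Training produces a model ensemble (see the overview figure). How ensemble outputs can be combined is sketched below, assuming a simple per-cell-type average of the member predictions (illustrative only, not the exact Scaden code):

```python
import numpy as np

# Hypothetical outputs of three ensemble members: 2 samples x 3 cell types each.
preds = np.array([
    [[0.5, 0.3, 0.2], [0.1, 0.6, 0.3]],  # model 1
    [[0.4, 0.4, 0.2], [0.2, 0.5, 0.3]],  # model 2
    [[0.6, 0.2, 0.2], [0.0, 0.7, 0.3]],  # model 3
])

# Average over the model axis; fractions per sample still sum to 1.
ensemble = preds.mean(axis=0)
```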
+Finally, you can use the trained model to perform prediction:
-Finally, we can perform deconvolution on the 500 simulates samples from the data6k dataset:
+```bash
+scaden predict --model_dir model example_data/example_bulk_data.txt
+```
-`scaden predict paper_data/figures/figure2/data/data6k_500_samples.txt --model_dir scaden_model`
+Now you should have a file called "scaden_predictions.txt" in your working directory, which contains your estimated cell compositions.
-This will create a file named 'cdn_predictions.txt' (will be renamed in future version to 'scaden_predictions.txt'), which contains
-the deconvolution results. You can now compare these predictions with the true values contained in
-'paper_data/figures/figure2/data/data6k_500_labels.txt'. This should give you the same results as we obtained in the Scaden paper
-(see Figure 2).
-### 4. Instructions for use
-For a general description on how to use Scaden, please check out our [usage documentation](https://scaden.readthedocs.io/en/latest/usage/).
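Once you have a predictions file, a quick sanity check with pandas can help. This sketch assumes a tab-separated table with one row per bulk sample and one column per cell type (verify against your actual output file):

```python
import io
import pandas as pd

# Stand-in for a scaden predictions file (tab-separated, samples x cell types).
txt = "Sample\tB\tT\tNK\nS1\t0.2\t0.5\t0.3\nS2\t0.1\t0.8\t0.1\n"
preds = pd.read_table(io.StringIO(txt), index_col=0)

totals = preds.sum(axis=1)       # fractions per sample should sum to ~1
dominant = preds.idxmax(axis=1)  # most abundant cell type per sample
```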
+### 1. System requirements
+Scaden was developed and tested on Linux (Ubuntu 16.04 and 18.04). It was not tested on Windows or Mac, but should
+also be usable on these systems when installing with pip or Bioconda. Scaden does not require any special
+hardware (e.g. a GPU); however, we recommend at least 16 GB of memory.
+Scaden requires Python 3. All package dependencies should be handled automatically when installing with pip or conda.
2 changes: 1 addition & 1 deletion scaden/__main__.py
@@ -155,7 +155,7 @@ def process(data_path, prediction_data, processed_path, var_cutoff):
     help="Number of samples to simulate [default: 1000]")
 @click.option(
     '--pattern',
-    default="*_norm_counts_all.txt",
+    default="*_counts.txt",
     help="File pattern to recognize your processed scRNA-seq count files")
 @click.option(
     '--unknown',
16 changes: 9 additions & 7 deletions scaden/preprocessing/bulk_simulation.py
@@ -170,21 +170,24 @@ def filter_matrix_signature(mat, genes):
     mat = mat[genes]
     return mat
 
 
 def load_celltypes(path, name):
     """ Load the cell type information """
     try:
         y = pd.read_table(path)
         # Check if has Celltype column
         if not 'Celltype' in y.columns:
-            logger.error(f"No 'Celltype' column found in {name}_celltypes.txt! Please make sure to include this column.")
+            logger.error(
+                f"No 'Celltype' column found in {name}_celltypes.txt! Please make sure to include this column."
+            )
             sys.exit()
     except FileNotFoundError as e:
-        logger.error(f"No celltypes file found for {name}. It should be called {name}_celltypes.txt.")
+        logger.error(
+            f"No celltypes file found for {name}. It should be called {name}_celltypes.txt."
+        )
         sys.exit(e)
 
-    return y
 
 
+    return y
 
 
 def load_dataset(name, dir, pattern):
@@ -338,10 +341,9 @@ def simulate_bulk(sample_size, num_samples, data_path, out_dir, pattern,

     if len(datasets) == 0:
         logging.error(
-            "No datasets fround! Have you specified the pattern correctly?")
+            "No datasets found! Have you specified the pattern correctly?")
         sys.exit(1)
 
-
     print("Datasets: " + str(datasets))
 
     # Load datasets
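The `load_celltypes` changes above only reformat the error messages; the contract stays the same: each dataset needs a tab-separated `<name>_celltypes.txt` file with a `Celltype` column. A toy example of input that passes the check:

```python
import io
import pandas as pd

# Toy stand-in for an example_celltypes.txt file: one row per cell,
# with the mandatory 'Celltype' column.
txt = "Celltype\nCD4.T.cell\nB.cell\nNK.cell\n"
y = pd.read_table(io.StringIO(txt))

assert "Celltype" in y.columns  # the same check load_celltypes performs
```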
