Commit
Updated readme. Setup.py will now include documentation from the root directory by using a simple file copy to the docs/ folder in the muselsl package.
kowalej committed May 13, 2018
1 parent 9a771b2 commit 8f660e2
Showing 3 changed files with 39 additions and 71 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -16,3 +16,5 @@ __pycache__/
 .cache
 .vs
 env/
+
+muselsl/docs
94 changes: 24 additions & 70 deletions README.md
@@ -1,98 +1,46 @@
 # Muse LSL
 
-This is a collection of Python scripts to use the Muse 2016 BLE headset with LSL.
+This is a Python package for streaming and visualizing EEG data from the Muse 2016 headset.
 
 ![Blinks](blinks.png)
 
 ## Requirements
 
-The code relies on [pygatt](https://github.com/peplin/pygatt) for the BLE communication.
-pygatt works on Linux and should work on Windows and macOS provided that you have a BLED112 dongle.
-You have to use the development version of pygatt, that can be installed with pip using:
+The code relies on [pygatt](https://github.com/peplin/pygatt) for the BLE communication. pygatt works on Linux and should work on Windows and macOS provided that you have a BLED112 Bluetooth dongle.
 
-`pip install git+https://github.com/peplin/pygatt`
+*Note: Another option for connecting to a Muse on Windows is via [BlueMuse](https://github.com/kowalej/BlueMuse/tree/master/Dist) which will output the same LSL stream format as muse-lsl.
 
-Note: Another option for connecting to Muse on Windows is [BlueMuse](https://github.com/kowalej/BlueMuse/tree/master/Dist)
+You will need to find the MAC address or name of your Muse headset.
 
-You will also need to find the MAC address of your Muse headset. **This code is
-only compatible with the 2016 version of the Muse headset.**
-
-Finally, the code for streaming and recording data is compatible with Python
-2.7 and Python 3.x. However, the code for stimulus presentation relies on
-psychopy and therefore only runs with Python 2.7.
+**This code is only compatible with the 2016 version of the Muse headset.**
 
 ## Usage
 
+*Everything can be run using muse-lsl.py or you may integrate into other packages.
+
 To stream data with LSL:
 
-`python muse-lsl.py`
+`python muse-lsl.py stream`
 
-The script will auto detect and connect to the first Muse device. In case you want
-a specific device or if the detection fails, find the name of the device and pass it to the script :
+The script will auto detect and connect to the first Muse device. In case you want a specific device or if the detection fails, find the name of the device and pass it to the script:
 
-`python muse-lsl.py --name YOUR_DEVICE_NAME`
+`python muse-lsl.py stream --name YOUR_DEVICE_NAME`
 
 You can also directly pass the MAC address (this option is also faster at startup):
 
-`python muse-lsl.py --address YOUR_DEVICE_ADDRESS`
+`python muse-lsl.py stream --address YOUR_DEVICE_ADDRESS`
 
-Once the stream is up and running, you can visualize it with
+Once the stream is up and running, from another prompt, you can visualize it with:
 
-`python lsl-viewer.py`
+`python muse-lsl.py lslview`
 
-## Available experimental paradigms
-
-The following paradigms are available:
-
-Paradigm | Stimulus presentation | Data | Analysis
----------|-----------------------|------|---------
-Visual P300 | `stimulus_presentation/generate_Visual_P300.py` `stimulus_presentation/generate_Visual_P300_stripes.py`| `data/visual/P300/` | [click here](https://github.com/alexandrebarachant/muse-lsl/blob/master/notebooks/P300%20with%20Muse.ipynb)
-Auditory P300 | `stimulus_presentation/generate_Auditory_P300.py` | `data/auditory/P300` | [click here](https://github.com/alexandrebarachant/muse-lsl/blob/master/notebooks/Auditory%20P300%20with%20Muse.ipynb)
-N170 | `stimulus_presentation/generate_N170.py` | `data/visual/N170` | [click here](https://github.com/alexandrebarachant/muse-lsl/blob/master/notebooks/N170%20with%20Muse.ipynb)
-SSVEP | `stimulus_presentation/generate_SSVEP.py` | `data/visual/SSVEP` | [click here](https://github.com/alexandrebarachant/muse-lsl/blob/master/notebooks/SSVEP%20with%20Muse.ipynb)
-SSAEP | `stimulus_presentation/generate_SSAEP.py` | `data/auditory/SSAEP` | [click here](https://github.com/alexandrebarachant/muse-lsl/blob/master/notebooks/SSAEP%20with%20Muse.ipynb)
-Spatial frequency | `stimulus_presentation/generate_spatial_gratings.py` | `data/visual/spatial_freq` | [click here](https://github.com/alexandrebarachant/muse-lsl/blob/master/notebooks/Spatial%20Frequency%20Task%20with%20Muse.ipynb)
-
-The stimulus presentation scripts can be found under `stimulus_presentation/`.
-Some pre-recorded data is provided under `data/`, alongside analysis notebooks under `notebooks`.
-
-### Visual P300
-
-The task is to count the number of cat images that you see. You can add new jpg images inside the [stimulus_presentation](stimulus_presentation/) directory: use the `target-` prefix for cat images, and `nontarget-` for dog images.
-
-### Auditory P300
-
-The task is to count the number of high tones that you hear.
-
-### N170
-
-The task is to mentally note whether a "face" or a "house" was just presented.
-
-### SSVEP
-
-The task is to passively fixate the center of the screen.
-
-### SSAEP
-
-The task is to passively fixate the center of the screen while listening to the sounds you hear.
-
-### Spatial frequency gratings
-
-The task is to passively fixate the center of the screen.
-
-## Running an experiment
-
-First, you have to run the muse-lsl script as described above.
-
-In another terminal, run
-
-`python stimulus_presentation/PARADIGM.py -d 120 & python lsl-record.py -d 120`
-
-where `PARADIGM.py` is one of the stimulus presentation scripts described above (e.g., `generate_Visual_P300.py`).
-
-This will launch the selected paradigm and record data for 2 minutes.
-
-For data analysis, check out [these notebooks](https://github.com/alexandrebarachant/muse-lsl/blob/master/notebooks/).
+### Backends
+You can choose between gatt, bgapi, and bluemuse backends.
+
+* gatt - used on unix systems, interfaces with native Bluetooth stack.
+* bgapi - used with BLED112 dongle.
+* bluemuse - used on Windows 10, native Bluetooth stack, requires [BlueMuse](https://github.com/kowalej/BlueMuse/tree/master/Dist) installation.
 
 ## Common issues
 
@@ -102,3 +50,9 @@ For data analysis, check out [these notebooks](https://github.com/alexandrebarac
 
 2. `pygatt.exceptions.BLEError: No characteristic found matching 273e0003-4c4d-454d-96be-f03bac821358` (Linux)
 - There is a problem with the most recent version of pygatt. Work around this by downgrading to 3.1.1: `pip install pygatt==3.1.1`
+
+
+3. Connection issues with BLED112 dongle (Windows):
+ - You may need to use the --interface argument to provide the appropriate COM port value for the BLED112 device. The default value is COM9. To setup or view the device's COM port go to:
+ `Control Panel\Hardware and Sound\Devices and Printers > Right Click > Bluetooth settings > COM Ports > (Add > Incoming)`

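As a rough sketch of how the gatt/bgapi backends and the `--interface` COM port from the README changes above fit together: the function below is illustrative, not muse-lsl's actual code, and only assumes pygatt's documented `GATTToolBackend` and `BGAPIBackend(serial_port=...)` classes.

```python
# Illustrative sketch only, not muse-lsl's implementation: map the backend
# option from the README onto a pygatt adapter.

def get_adapter(backend='gatt', interface=None):
    """Return a BLE adapter for the requested backend.

    backend:   'gatt' (native stack on Linux) or 'bgapi' (BLED112 dongle).
    interface: serial port of the BLED112 dongle, e.g. 'COM9' on Windows.
    """
    if backend == 'gatt':
        import pygatt  # imported lazily; only needed for real connections
        return pygatt.GATTToolBackend()
    if backend == 'bgapi':
        import pygatt
        return pygatt.BGAPIBackend(serial_port=interface)
    if backend == 'bluemuse':
        # The connection is managed by the external BlueMuse app on
        # Windows 10, so no pygatt adapter is created.
        return None
    raise ValueError('unknown backend: %r' % backend)
```

After obtaining an adapter this way, the usual pygatt flow is `adapter.start()` followed by `adapter.connect(address)`.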
14 changes: 13 additions & 1 deletion setup.py
@@ -1,4 +1,16 @@
 from setuptools import setup, find_packages
+from shutil import copyfile
+import os
+
+def copy_docs():
+    docs_dir = 'muselsl/docs'
+    if not os.path.exists(docs_dir):
+        os.makedirs(docs_dir)
+
+    copyfile('README.md', docs_dir + '/README.md')
+    copyfile('blinks.png', docs_dir + '/blinks.png')
+
+copy_docs()
 
 setup(name='muse-lsl',
       version='1.0.0',
@@ -10,7 +22,7 @@
       license='BSD (3-clause)',
       scripts=['muselsl/muse-lsl.py'],
       packages=find_packages(),
-      package_data={'muselsl': ['README.md', 'LICENSE.txt']},
+      package_data={'muselsl': ['docs/blinks.png', 'docs/README.md']},
       include_package_data=True,
       zip_safe=False,
       install_requires=['bitstring', 'pylsl', 'pygatt', 'pandas', 'scikit-learn', 'numpy', 'seaborn', 'pexpect'],

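The `copy_docs()` idea in the setup.py diff can be exercised in isolation. This standalone sketch (generic paths and a throwaway directory, not the actual setup.py) replicates the copy-into-package-folder step that lets `package_data` ship root-level docs:

```python
# Standalone sketch of setup.py's copy_docs(): copy root-level docs into a
# docs/ folder inside the package so package_data can pick them up at build.
import os
import shutil
import tempfile

def copy_docs(root, docs_dir):
    if not os.path.exists(docs_dir):
        os.makedirs(docs_dir)  # also creates intermediate directories
    for name in ('README.md', 'blinks.png'):
        src = os.path.join(root, name)
        if os.path.exists(src):  # skip files missing from a source checkout
            shutil.copyfile(src, os.path.join(docs_dir, name))

# Demo in a throwaway directory:
root = tempfile.mkdtemp()
with open(os.path.join(root, 'README.md'), 'w') as f:
    f.write('# Muse LSL')
copy_docs(root, os.path.join(root, 'muselsl', 'docs'))
print(os.path.exists(os.path.join(root, 'muselsl', 'docs', 'README.md')))  # True
```

The `.gitignore` entry for `muselsl/docs` in the first diff keeps these generated copies out of version control.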
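Since setup.py now ships `docs/README.md` and `docs/blinks.png` via `package_data`, an installed copy of the package can locate them relative to its own `__init__.py`. The helper below is hypothetical (not part of muse-lsl) and just sketches that lookup:

```python
# Hypothetical helper, not part of muse-lsl: locate a doc file shipped
# inside <package>/docs by setup.py's copy_docs() + package_data.
import os

def packaged_doc_path(package_file, name):
    """Return the path of a doc bundled under the package's docs/ folder.

    package_file: the package's __file__ attribute (e.g. muselsl.__file__).
    name:         doc filename, e.g. 'README.md' or 'blinks.png'.
    """
    return os.path.join(os.path.dirname(package_file), 'docs', name)

# e.g. packaged_doc_path(muselsl.__file__, 'README.md')
```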