Decoding auditory representation of the brain using natural speech stimuli
- Script
- Audio
- about 30-40 normal subjects scanned at Caltech
- about 15 epileptic patients scanned at Caltech and recorded intracranially in the hospital (sEEG for all patients except 6 and 8)
- about 700 subjects scanned in Cambridge
- Script
- fMRI
- sEEG (patients 6 and 8)
| Features | Regions | References | Notes |
|---|---|---|---|
| Word-level semantics | MTG, MFG, IFG | de Heer 2017 | |
| Sentence-level semantics | MTG, MFG, IFG | Fedorenko 2011; Huth 2016 | |
| Syntax and discourse, especially the identities of characters in a story | MTG, IFG | Wehbe 2014 | |
| Sentence onsets | MTG | Hamilton 2018 | |
| Intonation | MTG, MFG, IFG | Tang 2017 | Requires synthesized stimuli |
| Coherence of word, sentence, and paragraph order | MTG, MFG, IFG | Lerner 2011 | Requires synthesized stimuli |
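The feature–region table above can be kept alongside the analysis code as a simple lookup. This is a hypothetical helper (the names `FEATURE_ROIS` and `features_for_region` are not part of the repo) that could be used to restrict an encoding analysis to the candidate regions of interest listed for each feature:

```python
# Hypothetical lookup mirroring the table above: candidate regions per feature.
# Region abbreviations: MTG = middle temporal gyrus, MFG = middle frontal gyrus,
# IFG = inferior frontal gyrus.
FEATURE_ROIS = {
    "word_semantics": ["MTG", "MFG", "IFG"],      # de Heer 2017
    "sentence_semantics": ["MTG", "MFG", "IFG"],  # Fedorenko 2011; Huth 2016
    "syntax_discourse": ["MTG", "IFG"],           # Wehbe 2014
    "sentence_onset": ["MTG"],                    # Hamilton 2018
    "intonation": ["MTG", "MFG", "IFG"],          # Tang 2017
    "coherence": ["MTG", "MFG", "IFG"],           # Lerner 2011
}

def features_for_region(region):
    """Return, sorted, the features whose candidate ROIs include `region`."""
    return sorted(f for f, rois in FEATURE_ROIS.items() if region in rois)

print(features_for_region("MFG"))
```

Inverting the table this way answers the converse question (which features to model within a given region), which is the direction an ROI-based analysis typically needs.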
To train the DA (dialogue-act) tagger:

```shell
python DialogueAct-Tagger/scripts/train.py
```

To transcribe an audio file:

```shell
python src/transcribe.py <path_to_the_audio_file>
```

To extract features, load the brain data, and fit the encoding models:

```shell
python src/main.py
```
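The last step fits voxelwise encoding models, i.e. regularized linear regressions predicting each voxel's response from the stimulus features. The sketch below illustrates the idea on synthetic data with scikit-learn's `Ridge`; all array shapes and names are illustrative, and the actual pipeline lives in `src/main.py`:

```python
# Toy encoding-model fit: predict per-voxel responses from stimulus features.
# Everything here (shapes, noise level, alpha) is illustrative, not the
# repo's actual configuration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 200, 10, 50

X = rng.standard_normal((n_trs, n_features))      # stimulus features per TR
W = rng.standard_normal((n_features, n_voxels))   # ground-truth weights (toy data only)
Y = X @ W + 0.1 * rng.standard_normal((n_trs, n_voxels))  # simulated voxel responses

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.25, random_state=0
)
model = Ridge(alpha=1.0).fit(X_train, Y_train)    # one linear model per voxel
score = model.score(X_test, Y_test)               # R^2 on held-out TRs
print(round(score, 2))
```

In practice the features would come from the transcribed scripts (e.g. word embeddings or dialogue-act labels), the responses from the fMRI or sEEG recordings, and the regularization strength would be chosen by cross-validation rather than fixed.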