MEASURES is a flexible, web-based, AI-driven ecosystem that lends itself to live performance,
installation in a gallery, museum, or festival, or to solitary use in one's bedroom, while remaining
simple enough for anyone with Wi-Fi and a smartphone to use.
It fuses technology and emotion in a field where technology is often sterile and intellectual, offering
tools for self-reflection while creating a method for studying the phenomenon of empathy as a whole.
Ultimately, MEASURES offers a new way to feel connected and to experience one another across borders,
cultures, socioeconomic classes and other zones of isolation.
Users wear an electronic wristband that provides audio cues and measures the participant's electrodermal
activity (EDA). The data is transferred over Wi-Fi to a server, which processes it and generates a unique sound work.
Each sound work produced by a participant is subsequently analyzed for points of “excitation” using openSMILE
feature extraction trained on a custom, piece-specific data set. The results serve as the basis for the next
user's auditory stimuli and for the comparison with that user's physiological data that generates the next
sound piece, and so on. The sound works are added to a public online archive. Participants can choose to
disclose their identity or remain anonymous.
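As a rough sketch of the device-to-server leg of this loop, the firmware excerpt below samples EDA through the ADS1115 ADC (described in the hardware notes further down) and posts each reading to the server over Wi-Fi. The endpoint URL, credentials, JSON payload, and sample rate are placeholder assumptions, not the repo's actual protocol.

```cpp
// Minimal sketch of the wristband's data path: sample EDA via the ADS1115
// and stream readings to the server over Wi-Fi. Endpoint, credentials,
// payload format, and sample rate are illustrative placeholders.
#include <WiFi.h>
#include <HTTPClient.h>
#include <Wire.h>
#include <ADS1X15.h>          // RobTillaart/ADS1X15

ADS1115 ads(0x48);            // default I2C address

const char* SSID     = "your-ssid";               // placeholder credentials
const char* PASSWORD = "your-password";
const char* ENDPOINT = "http://example.org/eda";  // hypothetical server route

void setup() {
  Serial.begin(115200);
  Wire.begin();
  ads.begin();
  ads.setGain(1);             // +/- 4.096 V full scale

  WiFi.begin(SSID, PASSWORD);
  while (WiFi.status() != WL_CONNECTED) delay(250);
}

void loop() {
  int16_t raw  = ads.readADC(0);         // EDA sensor on channel A0
  float volts  = ads.toVoltage(raw);

  HTTPClient http;
  http.begin(ENDPOINT);
  http.addHeader("Content-Type", "application/json");
  http.POST(String("{\"eda\":") + String(volts, 4) + "}");
  http.end();

  delay(100);                            // ~10 Hz; the real rate is a guess
}
```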
To my knowledge, MEASURES is the first system in the artistic and scientific communities designed with the
specific purpose of quantifying and measuring empathy from auditory cues. The content of MEASURES originates
from an infinitely expansive pool of participant data comprising empathetic responses to an AI-driven artwork.
The goals of this piece put the wealth-seeking uses of AI, such as advertising, facial/voice recognition, fraud
prevention, and other commercial applications, in the background. It focuses on AI as a tool to witness people in
the most human way possible: through experiencing a person feeling empathy.
The form of MEASURES is also unique: the data-collection device exists as an electronic art object, generates
an expansive bio-data-driven archive of individual sound works, and is designed to be extremely accessible as
a tool for installation, real-time performance, and solitary use.
The hardware uses a custom EDA sensor based on the AD8608 op-amp and handmade carbon-silicone sensor pads.
MP3 decoding is handled by the VS1053b chip, connected to the ESP32-WROVER over SPI.
Analog-to-digital conversion is handled by an ADS1115 IC on the I2C bus.
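A minimal sketch of the audio-cue path is below, using the Adafruit_VS1053 library listed under the firmware requirements. The pin assignments and file name are illustrative assumptions; the actual wiring is defined by the EAGLE schematics and PCB layout included in this repo.

```cpp
// Minimal sketch of the audio-cue path: the VS1053b decodes an MP3 cue
// read from SD over the shared SPI bus. Pin numbers are placeholders.
#include <SPI.h>
#include <SD.h>
#include <Adafruit_VS1053.h>  // adafruit/Adafruit_VS1053_Library

// Hypothetical ESP32-WROVER pin mapping (see schematics for real values)
#define VS1053_RESET 32
#define VS1053_CS    5
#define VS1053_DCS   16
#define VS1053_DREQ  4
#define CARDCS       17

Adafruit_VS1053_FilePlayer player(
    VS1053_RESET, VS1053_CS, VS1053_DCS, VS1053_DREQ, CARDCS);

void setup() {
  Serial.begin(115200);
  if (!player.begin() || !SD.begin(CARDCS)) {
    Serial.println("VS1053 or SD init failed");
    while (true) delay(10);
  }
  player.setVolume(20, 20);        // lower values are louder
  player.playFullFile("/cue.mp3"); // blocking playback of one audio cue
}

void loop() {}
```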
This repo will include:
- Arduino firmware for the ESP32-WROVER
- Node-RED flows with their respective shell scripts
- PureData patches for sonification
- EAGLE CAD files of schematics and PCB layout
- TensorFlow and PyTorch code
Server Requirements:
- Ubuntu or similar
- Icecast2 // icecast.org
- FFmpeg // ffmpeg.org
- PureData // puredata.info
- Node-RED // nodered.org
- inotify-tools // https://github.com/inotify-tools/inotify-tools
Device Firmware Requirements:
- Arduino IDE + the ESP32 Arduino core
- External Libraries:
  - Adafruit_VS1053 // https://github.com/adafruit/Adafruit_VS1053_Library
  - WiFiManager // https://github.com/tzapu/WiFiManager (see the provisioning sketch below)
  - ADS1X15 // https://github.com/RobTillaart/ADS1X15
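Since the piece targets anyone with Wi-Fi and a smartphone, WiFiManager's captive portal is a natural fit for provisioning the wristband without hard-coded credentials. The sketch below shows the usual autoConnect pattern; the access-point name is a placeholder, and the repo's firmware may configure it differently.

```cpp
// On first boot (or when saved credentials fail), the device opens a
// captive-portal access point where the user enters their Wi-Fi details.
#include <WiFiManager.h>     // tzapu/WiFiManager

void setup() {
  Serial.begin(115200);
  WiFiManager wm;
  // Blocks until the user has joined the portal and entered credentials,
  // or connects immediately if credentials were saved on a previous boot.
  if (!wm.autoConnect("MEASURES-Setup")) {
    Serial.println("Portal timed out; restarting");
    ESP.restart();
  }
  Serial.println("Wi-Fi connected");
}

void loop() {}
```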
Detailed system description:
TODO
Relevant links/Bibliography:
TODO
- Guanghao Yin, Shouqian Sun, Dian Yu, and Kejun Zhang, “A Efficient Multimodal Framework for Large Scale
Emotion Recognition by Fusing Music and Electrodermal Activity Signals” // https://arxiv.org/pdf/2008.09743.pdf
- openSMILE, used to obtain external feature vectors of audio cues // https://audeering.github.io/opensmile/
MEASURES is made possible in part by the New York State Council on the Arts with the support of the Office of
the Governor and the New York State Legislature, and by Wave Farm.