This toolkit is in Beta! We welcome any suggestions! Please provide them through the SoundCusVR Developer Study or email Xinyun Cao at [email protected] directly.
This is a Unity Toolkit that contains nine features for Sound Customization for DHH (Deaf and Hard of Hearing) Users. It is designed with VR development in mind, but it might be used for other development as well.
Sound Customization: Changing aspects of the sounds used in the VR app such as shifting frequencies of the sound or prioritizing certain sounds to satisfy the needs of individual DHH users.
Unity Audio Mixer: You can access the AudioMixer window from Window->Audio->AudioMixer. For more information about AudioMixer, please see Unity documentation: AudioMixer and AudioMixer Scripting.
This is the list of features. They are divided into 4 categories - Prioritization, Sound Parameter Changes, Spatial Assistance, and Add-on Sounds. Each feature should work independently, and some features could work together.
Group Prioritization: When having multiple groups of sounds, focus on a specific group and lower sounds of all other groups.
Keyword Prioritization: set specific keywords you're interested in, and the program will play a notification and raise the speech volume when a keyword is spoken.
Sound Prioritization: lower environment sounds during speech/important sounds.
Direction-Based Prioritization: amplifies the sounds in the direction the user faces while simultaneously reducing the volume of sounds coming from other directions.
Volume and Pitch Adjustment: set volume and pitch both for the system and for individual sound sources in the environment.
Speech Speed Adjustment: adjust speed for individual speech sound sources.
Frequency Contrast Enhancement: adjusts the frequencies of adjacent sound sources, elevating one while lowering the other to enhance their distinction.
Beat Enhancement: boosts the rhythm of music sounds by dynamically increasing and decreasing the volume along with the beats.
Shoulder Localization Helper: alerts you when an important sound is played (or on your request), and indicates whether it is to your left or right.
Live Listen Helper: singles out nearby sounds when you move a handheld listener object close to sound sources.
Left-Right Balance: adjust the system sound balance to either the left or right.
Hearing Range Adjustment: allows users to adjust the range for sound activation.
Sound Distance Assistance: aids in perceiving distance by modulating sound pitch based on the user's proximity to the source: pitch decreases as distance increases and vice versa.
Silence Zone: increases the contrast between spatial sounds by including a silence zone between them.
Smart Notification: Playing a notification before an important sound, like speech or feedback.
Custom Feedback Sound: Change the feedback sounds in the system to your individual preferences/needs.
Calming Noise: enables users to select among white noise, pink noise, and rain sounds to add to the VR environment.
(Recommended for situations where multiple groups of conversations/sounds are happening concurrently.)

See GroupPrioritizationExampleScene for an example.

Use the GroupPrioritizationManager, GroupManager, and SpeechSource scripts for this feature.
- The GroupPrioritizationManager Script has a `Group Managers List` property where you can add GroupManager objects. If these are not assigned, it will automatically search for GroupManager components in its children GameObjects.
- Each GroupManager has a `Speech Source List` property you can add SpeechSource objects to. If these are not assigned, it will automatically search for SpeechSource components in its children GameObjects.
- Each SpeechSource should have a corresponding AudioSource, and initially `Is Not Focused` is true. The `Is Keyword Detected` flag is used in combination with the Keyword Prioritization feature.
Public Functions:
`OnSelectedGroupChange()`: This function takes in the number corresponding to the selected group. It raises the sound volume of the newly selected group and lowers the volume of all other groups. If the selected group number is 0, all groups are reset to their original volume. There is no output.
Implementation Steps:
- Create a Group Prioritization Manager object, and create several Group objects as its children.
- Add the GroupPrioritizationManager Script to the Group Prioritization Manager object, and attach a GroupManager Script to each Group object.
- Add a SpeechSource Script to each of the AudioSources involved in the different groups, and attach these SpeechSource scripts to the `Speech Source List` in their corresponding GroupManager Script.
- Add the GroupManager Scripts to the `Group Managers List` in the corresponding GroupPrioritizationManager Script.
- Call `OnSelectedGroupChange()` when the selected group changes. See the documentation of the function above for more details.
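The steps above can be sketched as a small UI hookup. This is a minimal illustration, not part of the toolkit: the class name, field names, and the use of a Dropdown are assumptions; only `GroupPrioritizationManager.OnSelectedGroupChange` comes from the documentation above.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical UI wiring: forward a Dropdown selection to the feature.
// Option 0 = "no focus" (reset all groups), options 1..n = the groups.
public class GroupSelectionUI : MonoBehaviour
{
    public GroupPrioritizationManager prioritizationManager;
    public Dropdown groupDropdown;

    void Start()
    {
        // Raise the chosen group's volume and lower the others on selection.
        groupDropdown.onValueChanged.AddListener(
            index => prioritizationManager.OnSelectedGroupChange(index));
    }
}
```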
GroupPrioritizationCompressed.mp4
(Recommended for situations where important speech is delivered.)

See KeywordPrioritizationExampleScene for an example.
The KeywordDetectionManager is used to control this feature.
- It has a `keywords` property, a List of strings that identifies the keywords to look out for.
- It also allows you to set a notification sound via the `Notification Clip` variable. You can use the clip in `Sounds->notif1_inTheEnd.mp3` or any other sound clip.
Public Functions:
`AddKeyword(string keyword)`: This function takes in a string and adds it to the list of keywords. If the keyword already exists, it logs an error.

`SubtractKeyword(string keyword)`: This function takes in a string and removes it from the list of keywords. If the keyword is not in the list, it logs an error.
Public Coroutines:
`detectKeywordAndPlay(string script, SpeechSource speechSource)`: This coroutine locates keywords in the script and plays a notification sound. If the sentence containing the keyword has a lowered volume because of group prioritization, it also raises the volume of that sentence; when the sentence finishes, the volume reverts to what it was before the keyword was detected. The inputs are the script in which keywords might be detected and the SpeechSource used to output the sounds. It returns an IEnumerator.
Implementation Steps:
- Add a Keyword Detection Manager to the scene and attach a KeywordDetectionManager Script. Select the `Notification Clip`.
- To enable the user to add or remove a keyword, call `AddKeyword(string)` and `SubtractKeyword(string)` as documented above.
- Attach a SpeechSource Script to the AudioSource that should play the sentences, and attach the AudioSource to the SpeechSource.
- When playing a sentence that might contain a keyword you want to detect, instead of playing it with the AudioSource directly, start a `detectKeywordAndPlay` coroutine with the sentence's script and the corresponding SpeechSource, making sure the corresponding AudioSource contains the right AudioClip.
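The last step can be sketched as follows. The class and field names here are illustrative assumptions; only the `detectKeywordAndPlay` coroutine and the SpeechSource/KeywordDetectionManager scripts come from the documentation above.

```csharp
using UnityEngine;

// Hypothetical wrapper: play a line of dialogue through the keyword
// detector instead of calling AudioSource.Play() directly.
public class DialogueLine : MonoBehaviour
{
    public KeywordDetectionManager keywordManager;
    public SpeechSource speechSource; // its AudioSource already holds the clip

    public void Say(string script)
    {
        // Plays the clip; fires the notification (and a volume boost)
        // if the script contains any registered keyword.
        StartCoroutine(keywordManager.detectKeywordAndPlay(script, speechSource));
    }
}
```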
KeywordPrioritizationCompressed.mp4
(Recommended for situations where character speech or important sounds are concurrent with environment sounds or background music.)

See SoundPrioritizationExampleScene for an example.

The SoundPrioritizationManager is used to control this feature. It has several properties to set before use: the `Audio Mixer`, the `Env Vol Label`, the `Character Audio Source List`, and the `Lower Env On Speech Setting`.
- The `Audio Mixer` is the audio mixer that contains the mixer group for the environment sounds.
- The `Env Vol Label` is the exposed volume parameter of the environment audio mixer group. To expose it, right-click on the volume in the audio mixer inspector of that mixer group and select "Expose ... to script". It will then be accessible under exposed parameters.
- The `Character Audio Source List` should contain all of the character audio sources to keep track of.
- The boolean `Lower Env On Speech Setting` defaults to true. It controls whether the feature is turned on or off. It can be set by the developer, or by the user through `ChangeLowerEnvOnSpeechSetting()`, as described below.
- Right before the character speech, call `LowerEnvSoundsVolume(audioSource)`, as described below.
Public Functions:
`ChangeLowerEnvOnSpeechSetting(bool)`: This takes in the boolean input from the associated toggle. If the input is true, the environment volume will change when character audio is played, and if any character audio is currently playing, the environment volume is decreased immediately. If the input is false, the environment volume won't change when character audio is played, and if character audio is currently playing, the environment volume is reset to normal. There is no output.

`LowerEnvSoundsVolume(AudioSource)`: This takes an AudioSource as input, which should be the character audio source triggering this feature. It lowers the environment volume by 30 dB and gives no output.
Implementation Steps:
- Add a Sound Prioritization Manager to the scene and attach a SoundPrioritizationManager Script.
- Create a new Mixer Group in the AudioMixer tab, and for every AudioSource that should be considered an environment sound, assign its mixer group to this new mixer group.
- Go to the Inspector of the new mixer group's controller, right-click on Attenuation - Volume, and select "Expose ... to script".
- In the "Exposed parameters" list in the Audio Mixer tab, get the name of the newly created parameter.
- Assign the mixer that contains the new mixer group to the `Audio Mixer` field in the SoundPrioritizationManager Script, and input the name of the new parameter into the `Env Vol Label` field.
- Add all AudioSources of character speech to the `Character Audio Source List`.
- Before the character speech, call `LowerEnvSoundsVolume(AudioSource)`, documented above.
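The final step can be sketched like this. The class and field names are illustrative assumptions; only `SoundPrioritizationManager.LowerEnvSoundsVolume` comes from the documentation above.

```csharp
using UnityEngine;

// Hypothetical example: duck the environment mix right before a character line.
public class CharacterSpeech : MonoBehaviour
{
    public SoundPrioritizationManager prioritizationManager;
    public AudioSource characterVoice; // must also be in the Character Audio Source List

    public void Speak()
    {
        // Lowers the environment mixer group while this source plays.
        prioritizationManager.LowerEnvSoundsVolume(characterVoice);
        characterVoice.Play();
    }
}
```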
SoundPrioritizationCompressed.mp4
(Recommended for situations where users should prioritize the sound they are facing.)

The DirectionPrioritizationManager is used to control this feature. It has several properties to set before use: the `Prioritization Audio Mixer`, the `General Audio Mixer`, the `General Audio Volume Label`, the `Main Camera`, and the `Degree Threshold`.
- The `Prioritization Audio Mixer` is the AudioMixerGroup used for the sound to prioritize.
- The `General Audio Mixer` is the AudioMixerGroup used for all other sounds.
- The `General Audio Volume Label` is the exposed volume parameter of the general audio mixer group. To expose it, right-click on the volume in the audio mixer inspector of that mixer group and select "Expose ... to script". It will then be accessible under exposed parameters.
- The main camera should be assigned to the `Main Camera` field.
- The `Degree Threshold` is used to determine whether the user is facing a sound source. It is the maximum angle, in degrees, between the user's line of sight and the line connecting the player to the sound source.
You will also need an AudioManager Script in the scene. Please see the implementation details below.
Public Functions:
`ToggleOnOff(bool toggle)`: This function toggles the feature on and off.
Implementation Steps:
- Add an AudioManager to the scene and attach an AudioManager Script.
- Add a DirectionPrioritizationManager to the scene and attach a DirectionPrioritizationManager Script.
- Create two new Mixer Groups in the AudioMixer tab. Assign one as the Prioritization Audio Mixer and the other as the General Audio Mixer.
- Go to the Inspector of the General Audio Mixer's group controller, right-click on Attenuation - Volume, and select "Expose ... to script".
- In the "Exposed parameters" list in the Audio Mixer tab, get the name of the newly created parameter.
- Input the name of the new parameter into the `General Audio Volume Label` field.
- Assign the `Main Camera` field to the main camera that is linked to the player's head movement.
- Set the `Degree Threshold`. The default is 10 degrees.
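To let the user switch the feature on and off, a UI Toggle can be wired to `ToggleOnOff`. The class and field names below are assumptions for illustration.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical UI wiring for the direction-based prioritization toggle.
public class DirectionPrioritizationUI : MonoBehaviour
{
    public DirectionPrioritizationManager directionManager;
    public Toggle featureToggle;

    void Start()
    {
        // ToggleOnOff(bool) matches the Toggle's onValueChanged signature.
        featureToggle.onValueChanged.AddListener(directionManager.ToggleOnOff);
    }
}
```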
DirectionbasedPrio_short.1.mp4
(Recommended for any sound, especially sounds in the lower or higher registers.)

See VolPitchShiftExampleScene for an example.

The VolPitchShiftManager is used to control this feature. It has several properties to set before use: the `Audio Mixer Group`, the `Audio Sources List`, the `Pitch Label`, and the `Volume Label`.
- The `Audio Mixer Group` sets the audio mixer group to be used.
- The `Audio Sources List` compiles all audio sources affected by this volume and pitch control; an empty or unset list results in system-wide changes.
- The `Pitch Label` and `Volume Label` are the exposed parameters from the audio mixer. To expose them, right-click on the pitch and the volume in the audio mixer and select "Expose ... to script". They will then be accessible under exposed parameters.
Public Functions:
`ShiftPitch(float val)`: This function changes the pitch to the input value for each audio mixer in the group. There is no output.

`ShiftVolume(float val)`: This function changes the volume to the input value for each audio mixer in the group. There is no output.
Implementation Steps:
- Add a VolPitchShiftManager Script to the scene.
- Add all AudioSources affected by this shift to the `Audio Sources List`. For system-level control, you can leave this field empty.
- Create a new Mixer Group in the AudioMixer tab, and for every AudioSource that should be controlled by this feature, assign its mixer group to this new mixer group. Also assign this group to the `Audio Mixer Group` field in the VolPitchShiftManager.
- Go to the Inspector of the mixer and add a Pitch Shifter using Add Effect. Then right-click on Volume in Attenuation and on Pitch in Pitch Shifter, and select "Expose ... to script" for both.
- In the "Exposed parameters" list in the Audio Mixer tab, get the names of the newly created parameters.
- Enter the new parameter names in the `Pitch Label` and `Volume Label` fields in the VolPitchShiftManager Script.
- To shift the volume and pitch, use `ShiftPitch()` and `ShiftVolume()`, documented above.
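The shift calls are typically driven from sliders. The class, field names, and slider ranges below are assumptions; only `ShiftVolume` and `ShiftPitch` come from the documentation above.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical UI wiring: two sliders drive the volume and pitch shift.
public class VolPitchUI : MonoBehaviour
{
    public VolPitchShiftManager shiftManager;
    public Slider volumeSlider; // range is up to you, e.g. -20..20 (exposed volume in dB)
    public Slider pitchSlider;  // e.g. 0.5..2 (exposed pitch multiplier)

    void Start()
    {
        // Both functions take a float, matching Slider.onValueChanged.
        volumeSlider.onValueChanged.AddListener(shiftManager.ShiftVolume);
        pitchSlider.onValueChanged.AddListener(shiftManager.ShiftPitch);
    }
}
```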
SoundVolumeAndFrequencyAdjustmentCompressed.mp4
SystemVolumeAndPitchAdjustmentCompressed.mp4
(Recommended for character or system speech that contains important information.)

See SpeechSpeedAdjustmentExampleScene for an example.
The SpeechSpeedManager is used to control this feature.
- It requires you to assign an AudioSource to the script in the `Audio Source` field; otherwise, it assumes it is attached to a game object with an AudioSource.
- You also need to set the audio mixer to be used in the `Audio Mixer` field, and then expose its Pitch element. To do so, right-click on the pitch in the audio mixer group controller and select "Expose ... to script". It will then be accessible under exposed parameters. Then add the name of the pitch parameter to the `Audio Mixer Pitch Label` field.
Public Functions
`ShiftSpeed(float)`: This function changes the speed of the AudioSource linked to this manager by the value input into the function.
Implementation Steps:
- Add a SpeechSpeedManager Script to the scene.
- Assign the AudioSource that should be controlled by this feature to the `Audio Source` field.
- Create a new Mixer Group in the AudioMixer tab, and assign the mixer group of that AudioSource to this new mixer group.
- Go to the Inspector of the audio mixer and add a Pitch Shifter using Add Effect. Right-click on Pitch in Pitch Shifter, and select "Expose ... to script".
- In the "Exposed parameters" list in the Audio Mixer tab, get the name of the newly created parameter.
- Assign the mixer that contains this new mixer group to the `Audio Mixer` field.
- Enter the new parameter name in the `Audio Mixer Pitch Label` field in the SpeechSpeedManager Script.
- Use the `ShiftSpeed()` function documented above to shift the speed of the character's speech.
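A slider hookup for `ShiftSpeed` might look like this. The class, field names, and slider range are assumptions for illustration.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical UI wiring: let the user slow down or speed up speech.
public class SpeechSpeedUI : MonoBehaviour
{
    public SpeechSpeedManager speedManager;
    public Slider speedSlider; // range is up to you, e.g. 0.5 (slower) .. 1.5 (faster)

    void Start()
    {
        // ShiftSpeed(float) matches the Slider's onValueChanged signature.
        speedSlider.onValueChanged.AddListener(speedManager.ShiftSpeed);
    }
}
```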
SpeechSpeedAdjustmentCompressed.mp4
(Recommended for voices similar in pitch.)
The ContrastEnhancementManager is used to control this feature. It has the following variables:
- The `Frequency Threshold` is the threshold on the frequency difference between characters for this tool to be triggered.
- The `Dist Threshold` is the maximum distance between characters for this tool to be triggered.
- The `Master Mixer` is the mixer group to which all sound sources that this tool could affect should be assigned.
Public Functions
`OnContrastToggle(float)`: This function toggles the functionality of this tool on and off.
Implementation Steps:
- Add a SpeechSource Script to each source of speech you wish to be modified by this tool, and assign its `AudioSource` field.
- Create an `AudioMixer` for each SpeechSource, and assign them to the corresponding SpeechSource Scripts.
- Expose the pitch of each AudioMixer and input the name of that parameter into the corresponding `Mixer Value Name` field of each SpeechSource.
- The `Sample Size`, `Min Freq`, and `Max Freq` can be adjusted; the defaults are 8192, 50, and 450, respectively.
- Add a ContrastEnhancementManager Script to the scene.
- Adjust `Frequency Threshold` and `Dist Threshold` if needed.
- Assign the `Audio Mixer Group` that contains the `Audio Mixer` of the target SpeechSource to the `Master Mixer` field.
FrequencyContrastEnhancementShort.1.mp4
(Recommended for scenes where the music beat is an important part of the experience.)
The BeatEnhancementManager is used to control this feature. It has the following variables:
- The `Audio Source` is the AudioSource that is playing the music.
- The `Beat Mixer Group` is the AudioMixer group used to change the volume of the music.
- The `bpm` is the beats per minute of the music.
Public Functions
`StartBeatEnhancement()`: This function should be called to start the Beat Enhancement cycles, usually when the music starts playing.

`ChooseBeatEnhancementPattern(int)`: Chooses the Beat Enhancement pattern. 0 means no Beat Enhancement. The input should be in {0, 1, 2}.
Implementation Steps:
- Add a BeatEnhancementManager Script to the scene and assign the `AudioSource` of the music to the `Audio Source` field.
- Create an `Audio Mixer Group` for the Beat Enhancement modifications and assign it to the `Beat Mixer Group` field.
- Set the `bpm` field to the BPM of the music.
- Call the `StartBeatEnhancement` function when the music starts playing.
- Use `ChooseBeatEnhancementPattern` to turn the feature on/off and choose between patterns.
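Put together, a music starter might look like this. The class and field names are assumptions; the pattern choice shown (1) is just one of the documented options.

```csharp
using UnityEngine;

// Hypothetical example: kick off beat enhancement when the music starts.
public class MusicStarter : MonoBehaviour
{
    public BeatEnhancementManager beatManager;
    public AudioSource music; // same source assigned to the manager's Audio Source field

    void Start()
    {
        music.Play();
        beatManager.ChooseBeatEnhancementPattern(1); // 0 = off, 1 or 2 = patterns
        beatManager.StartBeatEnhancement();
    }
}
```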
Untitled.video.-.Made.with.Clipchamp.2.1.mp4
(Recommended for situations where the directional location of a sound-producing object is important to the experience.)

See ShoulderLocalizationExampleScene for an example.
The ShoulderLocalizationManager Script is used for this feature.
- The `Audio Source` field will be used to play the Shoulder Localization Helper notification sounds.
- The `Main Camera` field should contain the main camera of the user.
- The optional `targetAudioSource` field covers the case where there is only one target.
- The last two fields are `leftAudioClip` and `rightAudioClip`, where the developer can input their own direction indicator sounds or the default sounds in `Sounds->Left.wav` and `Sounds->Right.wav`.
Public Functions
`PlayLocationAlert(Vector3)`: This function takes in the location of the target sound source, determines whether that source is on the left or right side of the camera, and plays the corresponding audio clip using the AudioSource assigned to this script.

`PlayAlertWithDefinedTarget()`: This function takes no parameters, and calls `PlayLocationAlert` using the optional `targetAudioSource` field as the sound source.
Implementation Steps:
- Add a ShoulderLocalizationManager Script to the scene.
- Add an AudioSource to the object that has the ShoulderLocalizationManager attached, or assign an AudioSource to the `Audio Source` field.
- Attach the main camera of the scene to the `Main Camera` field.
- If the feature is used for a fixed AudioSource, you can attach that AudioSource to the `Target Audio Source` field.
- Attach the audio clips for indicating "to your left" and "to your right" in the `Left Audio Clip` and `Right Audio Clip` fields. The developer can input their own direction indicator sounds or use the default sounds in `Sounds->Left.wav` and `Sounds->Right.wav`.
- When you want to play the ShoulderLocalizationManager alerts, call `PlayLocationAlert()` or `PlayAlertWithDefinedTarget()` as documented above.
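A sound that announces its own direction might be wired like this. The class and field names are assumptions; only `PlayLocationAlert(Vector3)` comes from the documentation above.

```csharp
using UnityEngine;

// Hypothetical example: announce a sound's left/right direction when it plays.
public class AlertingSound : MonoBehaviour
{
    public ShoulderLocalizationManager localizationManager;
    public AudioSource importantSound;

    public void PlayWithAlert()
    {
        importantSound.Play();
        // Plays the "Left"/"Right" clip depending on where the source
        // sits relative to the camera.
        localizationManager.PlayLocationAlert(importantSound.transform.position);
    }
}
```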
ShoulderLocalizationHelperCompressed.mp4
(Recommended for situations where the user needs to locate a sound source inside a small, reachable environment.)

See LivelistenHelperExampleScene for an example.
The LiveListenHelperManager Script is used for this feature.
- It should be attached to a game object that the user can grab and move around the scene.
- The same object should have an AudioListener component attached, initially disabled.
- The developer should add the AudioSources that the Live Listen Helper applies to into the `Audio Source List` field.
- The developer can also edit the cutoff of the sound single-out effect; the default cutoff is 0.5f.
- To let the user start and stop using the feature, the developer should call the `StartUsingLiveListenHelper` and `StopUsingLiveListenHelper` functions as the user picks up and drops the object.
Public Functions
`StartUsingLiveListenHelper()`: This function takes no parameters. It switches the game's AudioListener from the default listener on the player camera to the AudioListener attached to the Live Listen Helper object, and starts the single-out sound effect of the Live Listen Helper.

`StopUsingLiveListenHelper()`: This function stops the Live Listen Helper by switching the AudioListener back to the original and stopping the effect.
Implementation Steps:
- Instantiate a ball (or another grabbable object of your choice) in the scene, and attach the LiveListenHelperManager Script to it.
- Add all the AudioSources to be affected by the Live Listen Helper feature to the `Audio Source List` field.
- Add an AudioListener to this ball/object and disable the component.
- Change the `Cutoff` field if needed.
- Call `StartUsingLiveListenHelper()` and `StopUsingLiveListenHelper()` as documented above when you want to start and stop using the ball as the Live Listen tool. One option is to start when the ball is grabbed and stop when it is released (as shown in the video below).
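The grab/release hookup can be sketched as two methods bound to your grab system's events. The class name and the event hookup are assumptions; how grab events are raised depends on your interaction framework.

```csharp
using UnityEngine;

// Hypothetical grab hookup: bind these two methods to your grab system's
// events (e.g. an XR grab interactable's select-entered/select-exited).
public class LiveListenBall : MonoBehaviour
{
    public LiveListenHelperManager liveListenManager;

    public void OnGrabbed()  { liveListenManager.StartUsingLiveListenHelper(); }
    public void OnReleased() { liveListenManager.StopUsingLiveListenHelper(); }
}
```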
LiveListenHelperCompressed.mp4
(Recommended to accommodate users with unilateral hearing. Should be used in tandem with other sound localization guidance tools.)

The StereoSoundManager Script is used for this feature. The scene also needs an AudioManager Script.
Public Functions
`SetAllSoundSourceWithStereo(float)`: This function shifts the stereoPanVal to the input float value.

`SetAllSoundSource2D_3D(bool bool_2D)`: Switches all sounds marked by the AudioManager as non-static between 3D and 2D.

Note that only 2D sounds in Unity have a Stereo Pan, so to use left-right balance all sounds need to be switched to 2D.
Implementation Steps:
- Add a StereoSoundManager Script to the scene.
- Add a slider to the scene.
- Upon slider value change, call `SetAllSoundSource2D_3D(true)` to switch all sounds to 2D, then call `SetAllSoundSourceWithStereo` to shift the left-right balance.
- To reset all sounds, use `SetAllSoundSourceWithStereo(0)` to reset the left-right balance, and use `SetAllSoundSource2D_3D(false)` to switch the non-static sounds back to 3D.
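The slider wiring and the reset path can be sketched together. The class and field names, and the -1..1 slider range, are assumptions; only the two StereoSoundManager calls come from the documentation above.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical balance UI: slider from -1 (full left) to 1 (full right).
public class BalanceUI : MonoBehaviour
{
    public StereoSoundManager stereoManager;
    public Slider balanceSlider; // min -1, max 1, default 0

    void Start()
    {
        balanceSlider.onValueChanged.AddListener(OnBalanceChanged);
    }

    void OnBalanceChanged(float pan)
    {
        stereoManager.SetAllSoundSource2D_3D(true); // stereo pan needs 2D sounds
        stereoManager.SetAllSoundSourceWithStereo(pan);
    }

    public void ResetBalance()
    {
        stereoManager.SetAllSoundSourceWithStereo(0f);
        stereoManager.SetAllSoundSource2D_3D(false); // non-static sounds back to 3D
    }
}
```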
Left-Right.Balance.mp4
(Recommended for scenarios with important sound sources scattered around the scene.)
The HearingRangeManager Script is used for this feature.
- It has an optional `Audio Source List` variable. You can manually assign the AudioSources that should be modified by this tool. If unassigned, the script looks for AudioSources in its children objects.
Public Functions
`ChangeHearingRange(float)`: This function is called with the float value for the hearing range. The larger the value, the further the user can hear.
Implementation Steps:
- Add a HearingRangeManager Script to the scene.
- Assign all the AudioSources to modify to the `Audio Source List` field, or attach this manager to a common parent of all such AudioSource objects.
- Upon slider value change, call `ChangeHearingRange(float)` to change the hearing range.
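The slider hookup can be sketched as follows. The class and field names are assumptions; only `ChangeHearingRange(float)` comes from the documentation above.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical UI wiring: a slider drives the hearing range.
public class HearingRangeUI : MonoBehaviour
{
    public HearingRangeManager rangeManager;
    public Slider rangeSlider; // larger values = user hears sources from further away

    void Start()
    {
        rangeSlider.onValueChanged.AddListener(rangeManager.ChangeHearingRange);
    }
}
```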
Untitled.video.-.Made.with.Clipchamp.3.1.mp4
(Recommended for situations where the user needs to search a scene for a certain sound source.)
The SoundDistanceAssistanceManager Script is used for this feature. It has the following variables.
- The `Target Audio Source` field should contain the `AudioSource` of the target. This provides its location as well as access to the `AudioSource`.
- The `Starting Point` field should be the game object that marks the starting point, so that this tool can interpolate the audio pitch between this point and the target.
- The `Main Camera` field should contain the main camera that indicates where the user is.
Public Functions
`ToggleOnOff(bool)`: This function toggles this tool on and off.
Implementation Steps:
- Add a SoundDistanceAssistanceManager Script to the scene.
- Assign the target AudioSource to the `Target Audio Source` field.
- Make an empty game object and put it at the starting point. Then assign this game object to the `Starting Point` field.
- Attach the main camera of the player to the `Main Camera` field.
- Use `ToggleOnOff(bool)` to turn the tool on or off.
Untitled.video.-.Made.with.Clipchamp.4.1.mp4
(Recommended for scenes with overlapping ambient sounds.)
The SilenceZoneManager Script is used for this feature. It has the following variables.
- The `Audio Source List` field should contain the list of `AudioSource`s to be modified by this tool.
Public Functions
`SilenceZoneToggle(bool)`: This function toggles this tool on and off.
Implementation Steps:
- Add a SilenceZoneManager Script to the scene.
- Assign the list of AudioSources to the `Audio Source List` field.
- Use `SilenceZoneToggle(bool)` to turn the tool on or off.
Untitled.video.-.Made.with.Clipchamp.5.1.mp4
(Recommended to use before an important sound message is played.)

See SmartNotificationExampleScene for an example.
This feature uses the script SmartNotificationManager to control the on and off, and the sounds played as the smart notification.
- The script has a public boolean variable `smartNotificationOn` indicating whether the smart notification feature is on.
- The developer also needs to put the notification clip in the `Notification Clip` field; a default is provided in the toolkit at `Sounds->notif1_inTheEnd.mp3`.
- Before playing an important sound, the developer can start a `PlaySmartNotification()` coroutine.
Public Functions
`ToggleSmartNotification(bool)`: This function turns the Smart Notification feature on/off by changing the public `smartNotificationOn` flag.
Public Coroutine
`PlaySmartNotification(AudioSource)`: Pass in the AudioSource that will be used to play the upcoming important sound. This coroutine plays the selected notification sound followed by the audio clip of the AudioSource passed in.
Implementation Steps:
- Add a SmartNotificationManager Script to the scene.
- Add the default notification clip in `Sounds->notif1_inTheEnd.mp3`, or another clip of the developer's choice, to the `Notification Clip` field.
- Use `ToggleSmartNotification` to toggle the Smart Notification feature on or off.
- When playing a sound that should be preceded by a notification, start the `PlaySmartNotification` coroutine with the AudioSource of the important sound passed in as the parameter.
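The coroutine usage can be sketched as follows. The class and field names are assumptions; only `PlaySmartNotification(AudioSource)` comes from the documentation above.

```csharp
using UnityEngine;

// Hypothetical example: prefix an important announcement with the notification.
public class Announcement : MonoBehaviour
{
    public SmartNotificationManager notificationManager;
    public AudioSource announcementSource; // holds the important clip

    public void Announce()
    {
        // Plays the notification clip first, then the announcement clip.
        StartCoroutine(notificationManager.PlaySmartNotification(announcementSource));
    }
}
```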
SmartNotificationCompressed.mp4
(Recommended when there are feedback sounds in the program.)

See CustomFeedbackSoundExampleScene for an example.

This feature is managed by the CustomFeedbackManager Script. It currently supports correct and incorrect feedback sounds.
- The developer should input the AudioClip files of the correct and incorrect feedback sounds in the `Correct/Incorrect Feedback Clips List`s.
- Then, the developer should put the indices of the default feedback sounds into the `correctFeedbackIndex` and `incorrectFeedbackIndex` fields. The default index is 0.
- The `audioSource` field will be used to play the feedback sounds. See example feedback sounds in the `Sounds->FeedbackSounds` folder.
Public Functions
`SelectCorrectFeedback(int)`: This function sets the correct feedback index to the input int.

`SelectIncorrectFeedback(int)`: This function sets the incorrect feedback index to the input int.

`PlayCorrectFeedbackSound()`: This function loads and plays the correct feedback sound from the audio source linked to the script.

`PlayIncorrectFeedbackSound()`: This function loads and plays the incorrect feedback sound from the audio source linked to the script.
Implementation Steps:
- Add a CustomFeedbackManager Script to the scene.
- Enter the lists of AudioClips for the correct and incorrect feedback.
- Add the AudioSource that should play the feedback sounds to the `Audio Source` field.
- To let the user change the feedback sounds used, use `SelectCorrectFeedback()` and `SelectIncorrectFeedback()` as documented above.
- To play the feedback sounds, use `PlayCorrectFeedbackSound()` or `PlayIncorrectFeedbackSound()` as documented above.
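Playing the feedback from game logic can be sketched like this. The class and method names are assumptions; only the two Play functions come from the documentation above.

```csharp
using UnityEngine;

// Hypothetical example: play customized feedback after a quiz answer.
public class QuizFeedback : MonoBehaviour
{
    public CustomFeedbackManager feedbackManager;

    public void OnAnswer(bool correct)
    {
        if (correct)
            feedbackManager.PlayCorrectFeedbackSound();
        else
            feedbackManager.PlayIncorrectFeedbackSound();
    }
}
```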
CustomFeedbackSoundCompressed.mp4
(Recommended for quiet scenes.)
The WhiteNoiseManager Script is used for this feature. It has the following variables.
- The `Noise List` field should contain the list of `AudioClip`s to be used as the noises in this tool.
- Noise samples are included in the Sounds folder.
Public Functions
`OnNoiseSelection(int val)`: Use this function to choose which calming noise to play. `val` is the index of the choice in the Noise List (1-indexed); `val = 0` means no sound should play.
Implementation Steps:
- Add a WhiteNoiseManager Script to the scene.
- Add an AudioSource to the object that has the WhiteNoiseManager Script.
- Assign the list of noise AudioClips to the `Noise List` field.
- Use `OnNoiseSelection(val)` to turn the tool on/off and choose the noise clip to play.
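A dropdown maps naturally onto the 1-indexed noise list. The class and field names are assumptions; only `OnNoiseSelection(int)` comes from the documentation above.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical UI wiring: pick a calming noise from a dropdown.
// Option 0 should read "Off" (OnNoiseSelection(0) stops playback);
// options 1..n line up with the Noise List entries.
public class CalmingNoiseUI : MonoBehaviour
{
    public WhiteNoiseManager noiseManager;
    public Dropdown noiseDropdown;

    void Start()
    {
        // Dropdown passes the selected option index as an int.
        noiseDropdown.onValueChanged.AddListener(noiseManager.OnNoiseSelection);
    }
}
```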