Access to raw audio data #80
Comments
Wouldn't this also be addressed by adding audio to mediacapture-transform, as discussed in w3c/mediacapture-transform#29?
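For context, a minimal sketch of what that route could look like for audio, assuming MediaStreamTrackProcessor accepts audio tracks (currently Chromium-only behaviour that is not yet standardized); the WASM-encoder hand-off is only indicated in a comment:

```js
// Minimal sketch, assuming MediaStreamTrackProcessor works on audio tracks
// (a Chromium-only, not-yet-standardized behaviour).
async function readRawAudio() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const [track] = stream.getAudioTracks();

  // Each chunk on the readable side is a WebCodecs AudioData frame.
  const processor = new MediaStreamTrackProcessor({ track });
  const reader = processor.readable.getReader();

  for (;;) {
    const { value: frame, done } = await reader.read();
    if (done) break;
    // Copy the first channel out; the sample format depends on the source
    // (often 'f32-planar' for microphone capture in Chromium).
    const bytes = new ArrayBuffer(frame.allocationSize({ planeIndex: 0 }));
    frame.copyTo(bytes, { planeIndex: 0 });
    // ... feed `bytes` to a WASM encoder here ...
    frame.close();
  }
}
```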
May I suggest that the need goes beyond obtaining access to raw audio: it also extends to integrating WASM codecs without having to trick WebRTC into thinking that it is sending and receiving another codec, such as Opus. IMHO, the use of custom audio codecs is becoming increasingly common, particularly in cloud gaming (Section 3.2.1).
Custom audio codecs require SDP munging, but should otherwise be doable with the proposed extensions to encoded-transform.
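To make that concrete, a rough sketch of the encoded-transform side using a worker-based RTCRtpScriptTransform (as in WebRTC Encoded Transform); the worker file name and the `wasmEncode` helper are hypothetical, and the SDP munging mentioned above is still needed so the payload type matches:

```js
// --- main thread ---
const worker = new Worker('codec-worker.js');   // hypothetical worker file
const pc = new RTCPeerConnection();
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const sender = pc.addTrack(stream.getAudioTracks()[0], stream);
// Hand every outgoing encoded audio frame to the worker.
sender.transform = new RTCRtpScriptTransform(worker, { side: 'send' });

// --- codec-worker.js ---
// Rewrite each encoded audio frame's payload with a WASM codec.
// `wasmEncode` is a hypothetical function backed by a WASM build of the codec.
onrtctransform = (event) => {
  const { readable, writable } = event.transformer;
  readable
    .pipeThrough(new TransformStream({
      transform(frame, controller) {
        frame.data = wasmEncode(new Uint8Array(frame.data)).buffer;
        controller.enqueue(frame);
      },
    }))
    .pipeTo(writable);
};
```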
Discussed at TPAC 2023. It was noted that custom audio codecs have uses beyond streaming and so may deserve a distinct use case.
This issue had an associated resolution in the WebRTC TPAC 2023 meeting – 12 September 2023 (Low Latency Streaming: Game Streaming use case).
This issue was mentioned in WEBRTCWG-2023-12-05 (Page 19).
I've been tinkering with WASM codecs a bit, and one thing I figured out (well, others did before me, I only realized it by looking at what they did!) was that I had to "hijack" the hidden support for L16 in browsers to get access to uncompressed audio (via SDP munging, since it's officially unsupported) and then use Insertable Streams to take care of the encoding/decoding at the JS level. @fippo suggested I open an issue here, since standardizing access to raw audio data may indeed facilitate a lot of the audio-related use cases that have started appearing.
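For readers who haven't seen the trick: roughly, it amounts to rewriting the Opus rtpmap in the SDP so the browser negotiates L16 (uncompressed PCM), then doing the real encode/decode in an encoded transform. A loose, Chromium-flavoured sketch of the munging half; the 48000/2 parameters are an assumption, since the L16 clock rates a browser accepts are implementation-specific:

```js
// Rewrite the offer so the audio m-line negotiates L16 instead of Opus.
// The 48000/2 parameters are an assumption; adjust to what the browser accepts.
function mungeOpusToL16(sdp) {
  return sdp.replace(/opus\/48000\/2/gi, 'L16/48000/2');
}

const pc = new RTCPeerConnection({
  // Chromium-specific flag enabling createEncodedStreams() on senders/receivers.
  encodedInsertableStreams: true,
});
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
pc.addTrack(stream.getAudioTracks()[0], stream);

const offer = await pc.createOffer();
await pc.setLocalDescription({ type: 'offer', sdp: mungeOpusToL16(offer.sdp) });
// The remote answer has to be munged the same way; the raw L16 payloads then
// show up in the encoded streams, where a WASM codec can replace them
// (see the encoded-transform sketch earlier in the thread).
```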