Continuous decoding #146
New issues do get attention, but it takes a while as the maintainers have to do it in their free time.
I understand the "in their free time" thing, but can we get some update on this?
I had a similar issue and am happy to share my solution. First, I was not able to use XHR because, in order to get chunks of the stream, the endpoint needs to support range-based requests. In my case the source is live, so this was not an option. Consequently, I switched to using websockets. To accomplish this, I integrated the aurora-websocket library as a source. This delivered the content quickly and as expected, but I had the same result as @agiliator in that it would begin to play and then stop. It turns out that this occurs because of the way that player#refill handles an underflow condition (at least it did in my case). I addressed this by adding the ability for Queue to switch back to a "buffering" state. The player constructor was updated as follows:
However, this was not the only issue. As I was decoding mp3 audio, I discovered a bug in the mp3.js decoder. Essentially, for any given batch of data (consisting of one or more mp3 frames), it would drop the last frame if that was the last frame currently available (recall that my audio is live, so the data came in batches of 3-4 frames every 100ms or so, but the decoder worked much faster than that). The fix is more involved than I can post here, but I am putting together a pull request as I have some spare time. I can post my changes to Queue if anyone is interested.
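A minimal sketch of the frame-retention idea (a hypothetical helper, not the actual mp3.js fix): with a live source, hold back the trailing frame in each batch until more data arrives, since it may still be incomplete.

```javascript
// Given the byte offsets of the frame starts currently in the buffer,
// return the offsets that are safe to decode now. On a live stream the
// final frame is retained for the next pass; on a finished stream every
// frame can be decoded.
function framesToDecode(frameOffsets, isStreamEnded) {
  if (frameOffsets.length === 0) return [];
  return isStreamEnded ? frameOffsets.slice() : frameOffsets.slice(0, -1);
}
```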
Awesome work @DeusExLibris!
I have made all my updates available in my fork - Aurora with WS. This includes the items above as well as integration of aurora-websocket as a native source and a version of aurora-websocket that handles the websockets in a web worker.
Thanks a lot @DeusExLibris! Trying out your changes will be the first thing we do when the player project continues on our side (XHR and WS). Your fork still also works with XHR, right? I think our setup has that side covered.
@agiliator - it should. Outside of the changes needed to integrate websockets, I only made minor changes. I will add a note to the repository that describes them and the reasoning behind each. One other note: the methods websockets use for exchanging messages with the backend are somewhat arbitrary. For example, using {file:"/path/to/file/on/server"} to identify the asset you want sent. Because of this, the backend will need to support handling those messages, or you will need to modify websocket.coffee to use whatever standard your backend supports.
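For illustration, assuming the {file: ...} shape above, that exchange could be sketched like this (buildRequest/parseRequest are hypothetical helpers; adapt them to whatever protocol your backend actually speaks):

```javascript
// Client side: build the message identifying the asset to stream.
function buildRequest(path) {
  return JSON.stringify({ file: path });
}

// Server side: parse the message and reject anything that doesn't
// match the expected {file: "..."} shape.
function parseRequest(raw) {
  const msg = JSON.parse(raw);
  if (typeof msg.file !== 'string') {
    throw new Error('unsupported message: expected a "file" field');
  }
  return msg.file;
}
```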
I tried your fork, @DeusExLibris. I'm assuming I should emit 'end' after each emit of data, as without 'end' the decoding process does not fire? So basically data-end pairs to get continuous audio? Does the data need to contain whole, healthy MP3 frames (not a problem if yes, just need to know)? I also seem to get "TypeError: Cannot read property '0' of undefined" at asset.on('error', ..), even though I think I emit whole and healthy MP3 frames - asset.on('data', ..) does get called first, but the second batch of data does not get decoded after that error. Quite a few questions in the same comment here... sorry about that.
You shouldn't have to emit 'end' with each batch of data. The decoder is …
Thanks for a quick response, that clarifies a lot. Is there any other way to engage the decoder than using the player? Basically I'd like to play it myself via the Web Audio API, so the preferable way for me would be to collect pieces of PCM (asset.on('data', ...), etc.) and feed them to my own player (which alters the audio somewhat). Btw, do you know if audio received this way is compatible with the Web Audio API?
I think if you call Asset#start, that will begin the decoding and then you …
Thanks, that makes perfect sense.
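A hedged sketch of that approach: collect the decoded PCM chunks and merge them before handing them to the Web Audio API. concatPCM is a hypothetical helper; the asset wiring in the comments is illustrative and assumes a mono stream and aurora's 'data'/'end' events.

```javascript
// Merge Float32Array PCM chunks into one contiguous Float32Array,
// suitable for copying into a Web Audio AudioBuffer channel.
function concatPCM(chunks) {
  const total = chunks.reduce((n, c) => n + c.length, 0);
  const out = new Float32Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.length;
  }
  return out;
}

// In the browser (illustrative sketch, assuming a mono stream):
//   const chunks = [];
//   asset.on('data', (buf) => chunks.push(buf));   // decoded PCM chunks
//   asset.on('end', () => {
//     const pcm = concatPCM(chunks);
//     const audioBuf = ctx.createBuffer(1, pcm.length, sampleRate);
//     audioBuf.copyToChannel(pcm, 0);              // then play/alter it
//   });
//   asset.start();
```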
Had another try. This time the chunks are full frames and they are fed to aurora like below, add():ing consecutively.
My first observation was that the first chunk didn't trigger decoding/playback. That's not an issue, but just mentioning it in case it helps. Once subsequent frames are added, only the first bunch seems to be decoded, followed by the error 'bad main_data_begin pointer'. I wonder if this is related to the reservoir bits, as I get exactly the same error for the first bunch (once the second is added) if I start from the middle of the mp3 stream instead (still at a frame border, though).
Yes - the "bad main_data_begin pointer" message occurs if you start in the …
Seems to be the case, yes - removing the reservoir makes the issue disappear. AAC does not appear to suffer from that issue, at least with the files/streams I tested.
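If you need to align live chunks to a frame border yourself, a minimal syncword scanner could look like this (a sketch: it checks only the 11-bit MPEG sync pattern, not the full frame header, so false positives are possible in arbitrary data):

```javascript
// Return the index of the next MPEG audio frame sync (0xFF followed by
// a byte whose top 3 bits are set) in a byte buffer, or -1 if none.
function findFrameSync(bytes, from = 0) {
  for (let i = from; i + 1 < bytes.length; i++) {
    if (bytes[i] === 0xff && (bytes[i + 1] & 0xe0) === 0xe0) return i;
  }
  return -1;
}
```

Note that even with frame-aligned chunks, frames encoded with a bit reservoir reference main_data from earlier frames, which is why starting mid-stream trips the decoder unless the reservoir is disabled (e.g. ffmpeg's -reservoir 0).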
Hi DeusExLibris & agiliator, I was trying to use the aurora-websocket library written by @fabienbrooke without any success. The situation is as follows. My command looks something like this:

ffmpeg -y -i "rtmp://xxx.comt/app/streamname live=11" -vn -codec:a libmp3lame -b:a 128k -f mp3 -content_type audio/mpeg -reservoir 0 - | node aurora-ws-server.js

The webpage HTML looks like this:
When opening the web page I can hear the audio in Firefox, Chrome, and even Safari, but after a few seconds the audio stops. Looking at the websocket frames, I see I'm getting a pause command from the client. When playing the same from a file, it works well. I'm not a node.js programmer, but your websocketWorker.js looks broken. Could you describe in your WS_NOTES.md how to use the code? FYI, the audiocogs.js and mp3.js files were taken from the main repo. I would welcome feedback from you. Thank you.
@sassyn - in order to stream the audio live, you need to use my fork of the aurora.js code found here. As explained in the WS_NOTES.md, there are several changes that I needed to make to aurora in order to support live streaming. This is especially true for the mp3.js decoder, as it expects multiple MPEG frames to be available when it initially starts decoding. Also, if you modify the standard aurora.js using the websocket code from @fabienbrooke, this would only support the websocket code running in the main browser thread. You need to add the code for the wsWebWorker as well. This is also part of my fork of aurora. I originally submitted this code as a pull request to the aurora team, but closed it after reviewing the discussion with @fabienbrooke after PR 32 was submitted.
Hi, I'm trying to play an OPUS live stream using aurora.js and the opus.js decoder. The stream comes from a Firebase location (in base64-encoded chunks). So I believe the flow is pretty similar to the WebSocket example from @DeusExLibris, because I successfully adapted the server and client sides to play streams (files from the server) in base64 format - sending them as string data, decoding to a Uint8Array in the browser, and playing the audio. But when I tried to use a Firebase location as the stream source, I ran into this error:
My code is as follows:
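The decoding-to-Uint8Array step described above can be sketched as follows (base64ToBytes is a hypothetical helper name):

```javascript
// Decode a base64 string (e.g. a chunk read from a Firebase location)
// into a Uint8Array before handing it to the decoder. atob() is
// available in browsers and in Node.js 16+.
function base64ToBytes(b64) {
  const bin = atob(b64);
  const bytes = new Uint8Array(bin.length);
  for (let i = 0; i < bin.length; i++) bytes[i] = bin.charCodeAt(i);
  return bytes;
}
```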
@agiliator according to the code it seems to work, but the result is all zeros. len=1152. Any help would be appreciated~
Hi,
I'm experimenting with decoding and playing a radio stream using aurora.js. So basically, I get chunks of mp3/aac (via XHR, possibly chunks of HLS) which I need to decode before playing. However, I hit some issues during the process:
My code to experiment with aurora's feasibility for this project is as follows.
Any help is greatly appreciated!