Replies: 11 comments 1 reply
-
If I remember correctly the upload limit is set to 8 MB, and the server replies with one of the 4xx codes (I don't remember which one right now, but the one about a too-large request body). The client is also prepared to handle this code gracefully and show a proper message. This broken pipe thing suggests that maybe some proxy is killing the request when it reaches a certain body size 🤔 I'll double-check the server and load balancer settings though.
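For context, a minimal sketch (not asciinema's actual client code) of how an uploader could check the size up front and handle an oversized-body rejection instead of surfacing a broken pipe; the endpoint URL and the 8 MB limit below are assumptions taken from this thread:

# Hypothetical uploader sketch: check size locally, then handle HTTP 413
# ("Payload Too Large") gracefully instead of crashing on a broken pipe.
import os
import sys
import requests

UPLOAD_URL = "https://asciinema.org/api/asciicasts"  # may differ from the real endpoint
MAX_BYTES = 8 * 1024 * 1024                          # assumed 8 MB server-side limit

def upload(path):
    size = os.path.getsize(path)
    if size > MAX_BYTES:
        sys.exit(f"{path} is {size} bytes; over the assumed {MAX_BYTES}-byte limit")
    with open(path, "rb") as f:
        resp = requests.post(UPLOAD_URL, files={"asciicast": f})
    if resp.status_code == 413:
        sys.exit("server rejected the upload: request body too large")
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(upload(sys.argv[1]))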
-
@sickill I don't actually use any proxy.
-
Just an aside: these files compress really well. For example, I zipped @neochar's 6 MB cast and the result is only 430 KB. The server could just store and serve compressed binaries.
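To illustrate (a minimal sketch, assuming gzip and a hypothetical file name), checking the compression ratio of a recording takes only a few lines of Python:

import gzip
import os
import shutil

src = "demo.cast"        # hypothetical recording
dst = src + ".gz"

# gzip the cast and compare sizes
with open(src, "rb") as f_in, gzip.open(dst, "wb", compresslevel=9) as f_out:
    shutil.copyfileobj(f_in, f_out)

print(os.path.getsize(src), "->", os.path.getsize(dst), "bytes")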
-
Exactly, they compress like crazy :) The server is actually storing the recordings gzipped, and serving them to the browser gzipped as well (with Content-Encoding: gzip).
Anyway, the web player at the moment needs to load the whole recording into memory to start the playback (it cannot load it in chunks). For a 6 MB cast file that probably means 60 MB of RAM (maybe even more) after it's parsed and the JS data structures are created. So the server-side size limit is there to prevent someone from uploading a 100 MB recording, which would use 1 GB of RAM on someone's computer, which would not be cool.
Another thing is that asciinema.org is a community service, covered by donations. We store files on S3, and Amazon wants their money each month. By limiting the file size we also limit the bill ;)
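As a rough illustration only (not the actual asciinema.org code; the route, on-disk layout, and content type are made up), serving a recording that is stored pre-gzipped, without re-compressing it on every request, could look like this:

from flask import Flask, Response

app = Flask(__name__)

@app.route("/casts/<name>.cast")
def serve_cast(name):
    # hypothetical layout: recordings kept on disk as pre-gzipped files
    with open(f"casts/{name}.cast.gz", "rb") as f:
        body = f.read()
    return Response(
        body,
        mimetype="text/plain",                 # content type is a guess
        headers={"Content-Encoding": "gzip"},  # the browser decompresses transparently
    )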
-
This might help: asciinema/asciinema#378 ("Added ability to read from …").
-
You can also dramatically reduce the raw size by aggregating events. Consider the following events from an ASA output, first as recorded and then recompiled to 20 ms resolution:
original:
[219.540328, "o", "r"]
[219.541382, "o", "e"]
[219.542385, "o", "v"]
[219.543423, "o", "1"]
[219.544465, "o", "6"]
[219.545528, "o", " "]
[219.546538, "o", "G"]
[219.547569, "o", "i"]
[219.548653, "o", "g"]
[219.549689, "o", "a"]
[219.550757, "o", "b"]
[219.551779, "o", "i"]
[219.552863, "o", "t"]
[219.555584, "o", " E"]
[219.556007, "o", "t"]
[219.556965, "o", "h"]
[219.558032, "o", "e"]
[219.559021, "o", "r"]
[219.560061, "o", "n"]
[219.561105, "o", "e"]
[219.562144, "o", "t"]
[219.563172, "o", " "]
[219.564215, "o", "@"]
[219.565266, "o", " "]
[219.566363, "o", "i"]
[219.567338, "o", "n"]
[219.568377, "o", "d"]
[219.569611, "o", "e"]
[219.5706, "o", "x"]
[219.572335, "o", " "]
[219.573987, "o", "00"]
The above, aggregated to 20 ms (i.e. 50 FPS) resolution:
[219.540328, "o", "rev16 Gigabit Ether"]
[219.560061, "o", "net @ index 00"]
--
Endre
-
@endreszabo is this functionality already built-in, or is this post-processing that has to be done by a custom script?
-
Nope, I don't know of any such script existing. It shouldn't be hard to write one for this, though.
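For illustration, a minimal sketch of such a post-processing script (file names and the 20 ms bucket size are assumptions): it merges consecutive "o" events of an asciicast v2 file that fall into the same time bucket, like in the example above.

import json

BUCKET = 0.02  # 20 ms, i.e. roughly 50 FPS

def flush(f_out, ts, chunks):
    # write the accumulated bucket as a single output event
    if ts is not None and chunks:
        f_out.write(json.dumps([ts, "o", "".join(chunks)]) + "\n")

def aggregate(src="in.cast", dst="out.cast"):
    with open(src) as f_in, open(dst, "w") as f_out:
        f_out.write(f_in.readline())               # copy the JSON header line verbatim
        pending_ts, pending_data = None, []
        for line in f_in:
            if not line.strip():
                continue
            ts, kind, data = json.loads(line)
            if kind != "o":                        # only merge output events
                flush(f_out, pending_ts, pending_data)
                pending_ts, pending_data = None, []
                f_out.write(line)
            elif pending_ts is not None and ts - pending_ts < BUCKET:
                pending_data.append(data)          # same 20 ms bucket: concatenate
            else:
                flush(f_out, pending_ts, pending_data)
                pending_ts, pending_data = ts, [data]
        flush(f_out, pending_ts, pending_data)     # don't lose the last bucket

aggregate()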
-
Related: asciinema/asciinema#515
-
In asciinema v2.3.0 we read data in much bigger chunks, see asciinema/asciinema@61be1f8 and asciinema/asciinema@07310e1. This should help decrease file size when there's a lot of data printed at once. A small improvement, but an improvement nonetheless :) That ASA output above must have been written one character at a time, which is rather poor behavior on that program's part. Things like that have to be fixed "in post", unless we implement aggregation (and therefore buffering) in the recorder.
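(For illustration only, not the actual recorder code: reading from the pty master with a larger buffer yields fewer, larger "o" events; the buffer size below is an assumed value.)

import os

def read_output(master_fd, bufsize=256 * 1024):
    """Yield chunks of terminal output; each chunk would become one "o" event."""
    while True:
        data = os.read(master_fd, bufsize)   # bigger reads -> fewer, larger events
        if not data:
            break
        yield data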
-
Can we upload a gzipped file, so that the size limit can be avoided?
-
I recorded a session whose size is now 5980725 bytes, which is 5.7 MB.
I get a broken pipe error when I try to upload the file, and I guess it's because of the size.
I tried to do it like this:
asciinema rec -c "asciinema play -s 2 my.cast" my2.cast
But I found that my2.cast, which is 2 times shorter, has the same size.
Moreover, a playback recorded at 100x speed has the same size too.
I guess that's because faster playback doesn't skip frames, it only decreases the time between them. But is there any possibility to upload that file, or to decrease its size to satisfy the requirements?
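(A small illustration of that guess, with made-up numbers: re-recording a sped-up playback scales the timestamps down but keeps every event, so the file barely shrinks.)

events = [[0.10, "o", "a"], [0.20, "o", "b"], [0.30, "o", "c"]]   # made-up events
speed = 100
faster = [[round(ts / speed, 6), kind, data] for ts, kind, data in events]
print(len(faster) == len(events))   # True: same event count, so roughly the same size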