> what happens in case the first write fails, and the second succeeds? Now there's suddenly a torn write involved here [...]
>
> I think it would be good to return after the first failing write, so that it's at least knowable that a valid prefix has been written (if the return value is nonzero).
That comment is about `write(::IO, ::Char)`. This also happens in our `write` implementation for `AbstractArray`:
```julia
function unsafe_write(s::IO, p::Ptr{UInt8}, n::UInt)
    written::Int = 0
    for i = 1:n
        written += write(s, unsafe_load(p, i))
    end
    return written
end
```
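A minimal sketch of the suggested fix (the name is illustrative, not the Base definition): stop after the first short write, so the return value always describes a fully written, valid prefix:

```julia
# Sketch: like Base's fallback loop above, but bail out as soon as a
# single-byte write comes up short instead of blindly continuing.
function unsafe_write_stop_early(s::IO, p::Ptr{UInt8}, n::UInt)
    written::Int = 0
    for i = 1:n
        k = write(s, unsafe_load(p, i))
        written += k
        k == 1 || return written   # short write: report the prefix, don't tear
    end
    return written
end
```

With this, a caller that gets back `written < n` knows exactly which prefix made it out and can retry or report the rest.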
If any of those writes execute partially, we blindly continue writing all following elements and can "tear" the result, leading to incomplete and/or invalid text and data. This is not an issue for many IO types which do a complete write or throw an error, but IOBuffers are a notable exception and one can imagine in-memory buffers that asynchronously drain which could hit this pretty easily as well.
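A minimal reproduction of the tearing with a fixed-size `IOBuffer` (this assumes current Base behavior, where bytes past `maxsize` are silently dropped):

```julia
# A fixed-size IOBuffer truncates silently, so a multi-byte character
# can be cut in half, leaving invalid UTF-8 behind.
io = IOBuffer(; maxsize = 3)
n = write(io, "αβ")        # "αβ" is 4 bytes of UTF-8; only 3 fit
torn = String(take!(io))    # ends mid-character
```

Here `n` comes back as 3 rather than 4, and `isvalid(torn)` is `false`: the last byte is a lone UTF-8 lead byte with no continuation.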
A related defect is that a caller of `print(io, ...)` has no way to detect partial writes of a text representation of an object. In the past, I've had to leave a spare byte in the fixed-size `IOBuffer` and check whether it reached capacity to work around this.
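The spare-byte workaround can be sketched like this (`print_checked` and its return convention are illustrative, not an existing API):

```julia
# Workaround sketch: reserve one spare byte beyond the real limit, then
# treat "the spare byte got consumed" as "output may have been truncated".
function print_checked(limit::Int, args...)
    io = IOBuffer(; maxsize = limit + 1)   # one spare byte past the limit
    print(io, args...)
    data = take!(io)
    truncated = length(data) > limit       # spare byte reached: overflow
    return data[1:min(end, limit)], truncated
end
```

This only tells the caller *that* truncation happened, not where; a proper fix would have `print` propagate the byte counts from the underlying writes.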
> In the past, I've had to leave a spare byte in the fixed-size IOBuffer and check whether it reached capacity to work around this.
In the general case, this kind of in-band messaging is not sufficient because there may be a valid message containing that exact spare byte as well. If whatever in-band escape mechanism you use, as well as any data after the fake End Of Message marker, is dropped, you still get a corrupted message on the reading end.
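For contrast, an out-of-band scheme such as a length prefix lets the reader detect truncation regardless of payload contents (a toy sketch; `frame` and `unframe` are illustrative names, not an existing API):

```julia
# Length-prefix framing: the reader can always tell a complete message
# from a truncated one, because the expected length is carried out of band.
frame(msg::Vector{UInt8}) = vcat(reinterpret(UInt8, [UInt32(length(msg))]), msg)

function unframe(bytes::Vector{UInt8})
    length(bytes) >= 4 || return nothing            # header truncated
    len = only(reinterpret(UInt32, bytes[1:4]))
    length(bytes) >= 4 + len || return nothing      # payload truncated
    return bytes[5:4+len]
end
```

An in-band end-of-message byte cannot give this guarantee: the marker can occur in the payload, and if the tail containing the marker is itself dropped, the reader has no way to notice.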
This is probably fine for `IOBuffer` (which only starts dropping data once it's full), but may be problematic for some other non-blocking IO.
I think it isn't typically an issue for `IOBuffer`, since once it is full, it remains full, so the remaining text is also dropped, similar to writing to /dev/null, which also "writes" all data by discarding it. This could be an issue for `PipeBuffer` / `BufferStream` though, which set the `append` flag on `IOBuffer`, if the user also manages to pass in a `maxsize`. That is probably an implementation bug in `BufferStream`, since `unsafe_write` should report how many of the requested bytes were successfully output, and then block on its internal condition and wait for a reader to make some space (in classic non-blocking implementation fashion).
Originally posted by @Seelengrab in #56980 (comment)
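A toy model of the blocking behavior described above (all names are illustrative; this is not how `BufferStream` is actually implemented): a bounded in-memory buffer whose write blocks on a condition variable until a reader drains some space, so the writer never tears or silently drops bytes:

```julia
# Bounded stream sketch: write blocks when full instead of truncating.
mutable struct BoundedStream
    data::Vector{UInt8}
    maxsize::Int
    cond::Threads.Condition
end
BoundedStream(maxsize) = BoundedStream(UInt8[], maxsize, Threads.Condition())

function blocking_write(s::BoundedStream, bytes::AbstractVector{UInt8})
    i = 1
    lock(s.cond)
    try
        while i <= length(bytes)
            room = s.maxsize - length(s.data)
            if room == 0
                wait(s.cond)        # releases the lock while waiting for a reader
                continue
            end
            k = min(room, length(bytes) - i + 1)
            append!(s.data, @view bytes[i:i+k-1])
            i += k
        end
    finally
        unlock(s.cond)
    end
    return length(bytes)            # everything was written; nothing torn
end

function drain!(s::BoundedStream)
    lock(s.cond)
    try
        out = copy(s.data)
        empty!(s.data)
        notify(s.cond)              # wake a writer blocked on a full buffer
        return out
    finally
        unlock(s.cond)
    end
end
```

The key property is that `blocking_write` either returns the full requested count or stays blocked; it never reports success for bytes it discarded.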