Conversation
fs.write(v) is not guaranteed to write everything in a single call. Make sure we don't assume so.
lib/internal/fs/streams.js (Outdated)

```js
retries = bytesWritten ? 0 : retries + 1

if (retries > 5) {
  return cb(new Error('writev failed'));
```
What are we trying for 5 times here? Writing a non-zero number of bytes?
lib/internal/fs/streams.js (Outdated)

```js
}

if (bytesWritten < size) {
  writevAll([Buffer.concat(buffers).slice(bytesWritten)], pos + bytesWritten, cb, retries);
```
I would recommend being a bit more subtle here and avoiding the massive slowdown. Worst case you are allocating 2x memory to pick one byte. I would recommend just concatenating the buffers that have not been completely written.
This is a cold path though? It should almost never happen. Is it worth optimizing for?
```js
pos += bytesWritten;

if (retries > 5) {
  cb(new Error('writev failed'));
```
I need a little help here... not sure what error to use
Checked the existing code, no idea either 😅
```js
if (retries > 5) {
  cb(new Error('writev failed'));
} else if (size) {
  writevAll([Buffer.concat(buffers).slice(bytesWritten)], size, pos, cb, retries);
```
If we're going to be concatenating, we may as well just do it once and call writeAll() with it instead. Otherwise, like @mcollina said, we should handle this more efficiently.
Ideally it would be great if we didn't have to keep recreating arrays on retry at all. That might be a nice addition to the fs.writev() API to make it more like fs.write() with its offset parameter, so we could just pass a starting index or something. With something like that, all we'd have to do is possibly slice() a single buffer (or simply increase the starting index) before retrying.
This is a cold path though? It should almost never happen. Is it worth optimizing for?
> It should almost never happen

I think it would be useful to get an idea of how often it happens (I personally have no idea). How cold is it?
Well, if it does happen often then we have a lot of seriously broken software out there... I guess we would have quite a few reports...
or people have learned to handle retries on their own?
I don’t think you can actually detect that in this interface… there is nothing a user can do… it will just be corrupt w/o any way to detect or recover… so no
> Well, if it does happen often then we have a lot of seriously broken software out there... I guess we would have quite a few reports...

Makes sense, yeah, sounds like a cold path to me.
Needs a test as well.
```js
pos += bytesWritten;

if (retries > 5) {
  cb(new Error('writev failed'));
```
This needs to be a proper error with code?
Co-authored-by: Antoine du Hamel <duhamelantoine1995@gmail.com>
```js
}

if (this.destroyed || er) {
  return cb(er);
```
Will `er` here be something if `this.destroyed` is truthy? Otherwise, this could signal to the caller that the call succeeded.
@nodejs/fs someone willing to help get this over the finish line? I still think it's a potential bug.
Maybe @rluvaton or @atlowChemi?
What help is needed?
Linting and tests.
@ronag Can you check the

Hm. I don't know how to do that. Can you just push to a new branch/PR?
Superseded by #49211 |