Not sure if this is the place for asking questions, but I see that for network robustness, when transferring a file, the hash is checked to make sure the destination matches the source for GCS and S3. Is there a reason why this check isn't also done for local filesystem and HTTPS transfers?
This is certainly a place for asking questions! The only reason these checks aren't done is:
a) For local files, there's no hash metadata to compare against. We could add a metadata file, but it would be an extra cost: this library was designed in a context where hundreds of millions of files could be generated, and that was already wrecking filesystems, so adding a metadata file per transfer would have doubled the load. We could add that information somehow as an option. Did I understand your question correctly?
b) For HTTPS transfers, the main reason is that I haven't really looked into it. Is there a standard most web servers follow?
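For the local-filesystem case, a minimal sketch of what an opt-in check could look like without any stored metadata: hash the source and destination after the copy and compare digests. The function names here (`file_sha256`, `verify_copy`) are hypothetical, not part of this library's API, and this doubles the read I/O, which is exactly the extra cost mentioned above.

```python
import hashlib


def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files aren't loaded fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_copy(src, dst):
    """Return True if src and dst have identical SHA-256 digests."""
    return file_sha256(src) == file_sha256(dst)
```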
Yes, that answers my question and makes sense for local filesystems. I'm not sure there's a standard that most web servers follow, so it sounds like not checking for now is the way to go.