The classes in reader take a path to a file on disk, read that file, and then parse the contents. For example:
public final class KeyValueReader {

    /**
     * Generic method to read key value pairs from the bagit files, like bagit.txt or bag-info.txt
     *
     * @param file the file to read
     * @param splitRegex how to split the key from the value
     * @param charset the encoding of the file
     *
     * @return a list of key value pairs
     */
    public static List<SimpleImmutableEntry<String, String>> readKeyValuesFromFile(
            final Path file, final String splitRegex, final Charset charset)
            throws IOException, InvalidBagMetadataException {
        final List<SimpleImmutableEntry<String, String>> keyValues = new ArrayList<>();
        try (final BufferedReader reader = Files.newBufferedReader(file, charset)) {
            ...
        }
        return keyValues;
    }
}
For the Wellcome storage service (https://github.com/wellcometrust/storage-service), we aren’t keeping bags on the local disk, but in S3. If we want to read a file, we make a GetObject call to the S3 SDK, which returns an InputStream.
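As an illustration (not from the original issue), that InputStream can be wrapped in a BufferedReader without ever touching the local disk. A ByteArrayInputStream stands in for the S3 object content here so the sketch is self-contained:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public final class S3ReaderSketch {

    // Wrap any InputStream (such as the object content returned by an S3
    // GetObject call) in a BufferedReader, and read its first line.
    public static String firstLine(final InputStream in) throws IOException {
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            return reader.readLine();
        }
    }

    public static void main(final String[] args) throws IOException {
        // A ByteArrayInputStream stands in for the S3 response so the sketch
        // is runnable without AWS credentials; no filesystem round-trip needed.
        final InputStream objectStream =
            new ByteArrayInputStream("BagIt-Version: 0.97\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(firstLine(objectStream));  // prints "BagIt-Version: 0.97"
    }
}
```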
We could download the bag files to disk and read them from there, but that seems a bit icky – would you be open to some pull requests that allow parsing files even if they aren’t local files? Something like:
So the existing API is preserved, and calls into the new method that takes any BufferedReader – and now we can call that rather than round-tripping to the filesystem first.
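A sketch of what such an overload might look like (the method name readKeyValuesFromReader is hypothetical, and the parsing is simplified – real bag-info.txt parsing also handles continuation lines):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.ArrayList;
import java.util.List;

public final class KeyValueReader {

    // Existing entry point, preserved: opens the file on disk and delegates
    // to the reader-based method below.
    public static List<SimpleImmutableEntry<String, String>> readKeyValuesFromFile(
            final Path file, final String splitRegex, final Charset charset) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(file, charset)) {
            return readKeyValuesFromReader(reader, splitRegex);
        }
    }

    // Hypothetical new method: parses key/value pairs from any BufferedReader,
    // e.g. one wrapping the InputStream returned by an S3 GetObject call.
    public static List<SimpleImmutableEntry<String, String>> readKeyValuesFromReader(
            final BufferedReader reader, final String splitRegex) throws IOException {
        final List<SimpleImmutableEntry<String, String>> keyValues = new ArrayList<>();
        String line;
        while ((line = reader.readLine()) != null) {
            // Split each line into key and value at the first match of splitRegex.
            final String[] parts = line.split(splitRegex, 2);
            if (parts.length == 2) {
                keyValues.add(new SimpleImmutableEntry<>(parts[0], parts[1]));
            }
        }
        return keyValues;
    }

    public static void main(final String[] args) throws IOException {
        final BufferedReader reader = new BufferedReader(
            new StringReader("Payload-Oxum: 100.1\nBagging-Date: 2019-01-01\n"));
        for (final SimpleImmutableEntry<String, String> kv : readKeyValuesFromReader(reader, ": *")) {
            System.out.println(kv.getKey() + " -> " + kv.getValue());
        }
    }
}
```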
Thoughts?
I checked out your links, but I'm afraid I still don't understand your use case. Why do you need to read the bag-info.txt from S3?
It is my opinion that bagit is best suited to the transfer of large batches (which is why it was originally created), and to ensuring they are complete (all the files are there) and correct (none of the files have changed). I don't recommend it as a long-term storage system because it does not deal with the common case of bit rot (random bits flipping and creating errors), nor does it handle storing and managing multiple copies (Lots Of Copies Keeps Stuff Safe - LOCKSS).
It is my opinion that bagit is best suited to the transfer of large batches
That is exactly what our cloud storage service is doing - especially during replication for multiple copies "Lots Of Copies Keeps Stuff Safe - LOCKSS".
To store the things, you have to move them into storage - and to move them between replication locations across object stores, you have to move them in large batches.
It might be useful for us to communicate about this not via GitHub issues - can we invite you to our Slack perhaps?