
Commit

getfile method and docs (#20)
* getfile method and docs

* grammar

* version bump
willmcgugan authored Jan 31, 2018
1 parent c9627ce commit 579b33c
Showing 5 changed files with 49 additions and 4 deletions.
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,17 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/)
and this project adheres to [Semantic Versioning](http://semver.org/).

## [0.1.6] - 2018-01-31

### Added

- Implemented new `getfile` method

### Changed

- Updated fs for more efficient directory walking
- Relaxed boto requirement

## [0.1.5] - 2017-10-21

### Added
23 changes: 22 additions & 1 deletion docs/index.rst
@@ -13,7 +13,6 @@ Amazon S3 cloud storage.
As a PyFilesystem concrete class, S3FS allows you to work with S3 in the
same way as any other supported filesystem.
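
For example, you can open a bucket and use the standard PyFilesystem API
on it; this is a minimal sketch, and the bucket name below is a
placeholder::

    from fs_s3fs import S3FS

    s3fs = S3FS("example-bucket")  # placeholder bucket name
    print(s3fs.listdir("/"))       # list the top level of the bucket
    with s3fs.open("hello.txt", "w") as f:
        f.write("Hello, S3!")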


Installing
==========

@@ -49,6 +48,28 @@ S3FS Constructor
:members:


Limitations
===========

Amazon S3 isn't, strictly speaking, a *filesystem*: it contains files,
but doesn't offer true *directories*. S3FS follows the convention of
simulating directories by creating an object that ends in a forward
slash. For instance, if you create a file called `"foo/bar"`, S3FS will
create an S3 object for the file called `"foo/bar"` *and* an empty
object called `"foo/"` which stores the fact that the `"foo"` directory
exists.

If you create all your files and directories with S3FS, then you can
forget about how things are stored under the hood. Everything will work
as you expect. You *may* run into problems if your data has been
uploaded without the use of S3FS; for instance, if a `"foo/bar"` object
was created without a corresponding `"foo/"` object. If this occurs,
S3FS may report errors about directories not existing where you would
expect them to be. The solution is to create an empty object for every
directory and subdirectory. Fortunately, most tools will do this for
you, so it is probably only necessary if you upload your files manually.
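
One way to create the missing markers is directly with boto3; this is a
minimal sketch, and the bucket name and prefixes below are placeholders::

    import boto3

    # Placeholder bucket and directory prefixes -- substitute your own.
    BUCKET = "example-bucket"
    PREFIXES = ["foo/", "foo/baz/"]

    client = boto3.client("s3")
    for prefix in PREFIXES:
        # An empty object whose key ends in "/" is the directory marker
        # that S3FS looks for.
        client.put_object(Bucket=BUCKET, Key=prefix, Body=b"")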


Authentication
==============

13 changes: 13 additions & 0 deletions fs_s3fs/_s3fs.py
@@ -624,6 +624,19 @@ def getbytes(self, path):
            )
        return bytes_file.getvalue()

    def getfile(self, path, file, chunk_size=None, **options):
        self.check()
        if self.strict:
            info = self.getinfo(path)
            if not info.is_file:
                raise errors.FileExpected(path)
        _path = self.validatepath(path)
        _key = self._path_to_key(_path)
        with s3errors(path):
            self.client.download_fileobj(
                self._bucket_name, _key, file
            )

    def exists(self, path):
        self.check()
        _path = self.validatepath(path)
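
A brief usage sketch of the new method (the bucket name and object key
below are placeholders): open an S3FS filesystem and stream an object
straight into a local file.

    from fs_s3fs import S3FS

    # Placeholder bucket and object key.
    s3fs = S3FS("example-bucket")
    with open("report.pdf", "wb") as local_file:
        # getfile streams the S3 object into the open file object.
        s3fs.getfile("/report.pdf", local_file)
    s3fs.close()
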
2 changes: 1 addition & 1 deletion fs_s3fs/_version.py
@@ -1 +1 @@
__version__ = "0.1.5"
__version__ = "0.1.6"
4 changes: 2 additions & 2 deletions setup.py
@@ -23,8 +23,8 @@
    DESCRIPTION = f.read()

REQUIREMENTS = [
    "boto3~=1.4.0",
    "fs~=2.0.12",
    "boto3~=1.4",
    "fs~=2.0.18",
    "six~=1.10.0"
]

