argopy.stores.s3store

class s3store(*args, **kwargs)

Argo S3 file system

Inherits from httpstore but relies on s3fs.S3FileSystem through the fsspec ‘s3’ protocol specification.

By default, this store will use AWS credentials available in the environment.

If you want to force an anonymous session, you should use the anon=True option.

To avoid a "no credentials found" error, you can use:

>>> from argopy.utils import has_aws_credentials
>>> fs = s3store(anon=not has_aws_credentials())
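
Once created, the store can be used like httpstore to fetch remote Argo files. Below is a minimal sketch; the bucket and object key are hypothetical placeholders to be replaced with a real Argo S3 location:

>>> fs = s3store(anon=True)
>>> # Hypothetical object key, for illustration only:
>>> uri = "s3://some-argo-bucket/dac/coriolis/6902746/6902746_prof.nc"
>>> ds = fs.open_dataset(uri)  # returns a xarray.Dataset
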
__init__(*args, **kwargs)

Create a file storage system for Argo data

Parameters:

Methods

__init__(*args, **kwargs)

Create a file storage system for Argo data

cachepath(uri[, errors])

Return path to cached file for a given URI

clear_cache()

Remove cache files and entries for URIs opened with this store instance

curateurl(url)

Register and possibly manipulate a URL before it is accessed

download_url(url[, max_attempt, cat_opts, ...])

Resilient URL data downloader

exists(path, *args, **kwargs)

expand_path(path)

first(path[, N])

Read first N bytes of a path

full_path(path[, protocol])

Return fully developed path

glob(path, **kwargs)

info(path, *args, **kwargs)

ls(path, **kwargs)

open(path, *args, **kwargs)

open_dataset(url[, errors, lazy, dwn_opts, ...])

Create an xarray.Dataset from a URL pointing to a netCDF file

open_json(url[, errors])

Download and process a JSON document from a URL

open_mfdataset(urls[, max_workers, method, ...])

Download and process multiple URLs as a single xarray.Dataset or a collection of them (see the sketch after this list)

open_mfjson(urls[, max_workers, method, ...])

Download and process a collection of JSON documents from URLs

read_csv(url, **kwargs)

Read a comma-separated values (CSV) URL into a pandas DataFrame.

register(uri)

Keep track of files opened with this instance

store_path(uri)

unstrip_protocol(path, **kwargs)
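
The download and caching helpers listed above can be combined as in the following sketch; the object keys are hypothetical and the cache keyword is assumed to behave as in httpstore:

>>> fs = s3store(anon=True, cache=True)
>>> urls = [
...     "s3://some-argo-bucket/dac/coriolis/6902746/profiles/D6902746_001.nc",
...     "s3://some-argo-bucket/dac/coriolis/6902746/profiles/D6902746_002.nc",
... ]
>>> ds = fs.open_mfdataset(urls)  # single dataset or collection, depending on options
>>> fs.cachepath(urls[0])  # path to the locally cached copy
>>> fs.clear_cache()  # remove cache files registered by this instance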

Attributes

async_impl

asynchronous

cached_files

protocol

File system name, one of fsspec.registry.known_implementations

sep

target_protocol