argopy.data_fetchers.argovis_data.ArgovisDataFetcher#

class ArgovisDataFetcher(ds: str = '', cache: bool = False, cachedir: str = '', parallel: bool = False, parallel_method: str = 'thread', progress: bool = False, chunks: str = 'auto', chunks_maxsize: dict = {}, api_timeout: int = 0, **kwargs)[source]#
__init__(ds: str = '', cache: bool = False, cachedir: str = '', parallel: bool = False, parallel_method: str = 'thread', progress: bool = False, chunks: str = 'auto', chunks_maxsize: dict = {}, api_timeout: int = 0, **kwargs)[source]#

Instantiate an Argovis Argo data loader

Parameters
  • ds (str (optional)) – Dataset to load: ‘phy’ or ‘bgc’

  • cache (bool (optional)) – Cache data or not (default: False)

  • cachedir (str (optional)) – Path to cache folder

  • parallel (bool (optional)) – Chunk the request to use parallel fetching (default: False)

  • parallel_method (str (optional)) – Define the parallelization method: thread, process or a dask.distributed.client.Client.

  • progress (bool (optional)) – Whether to show a progress bar when parallel is set to True.

  • chunks ('auto' or dict of integers (optional)) – Dictionary with request access points as keys and the number of chunks to create as values. E.g. {‘wmo’: 10} will create a maximum of 10 chunks along WMOs when used with Fetch_wmo.

  • chunks_maxsize (dict (optional)) – Dictionary with request access points as keys and maximum chunk sizes as values (used as upper bounds in ‘auto’ chunking). E.g. {‘wmo’: 5} will create chunks with at most 5 WMOs each.

  • api_timeout (int (optional)) – Argovis API request timeout in seconds. Set to OPTIONS[‘api_timeout’] by default.
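
These constructor options are typically supplied through the user-facing argopy.DataFetcher rather than by instantiating ArgovisDataFetcher directly; the sketch below assumes that pass-through, and the chunking values are illustrative, not recommendations:

    from argopy import DataFetcher

    # Route requests to the Argovis API with cached, parallel fetching.
    fetcher = DataFetcher(
        src='argovis',      # select this data source
        ds='phy',           # physical parameters dataset
        cache=True,         # keep downloaded data in the local cache
        parallel=True,      # chunk the request and fetch concurrently
        progress=True,      # display a progress bar while fetching
        chunks={'wmo': 5},  # at most 5 chunks along the WMO access point
    )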

Methods

__init__([ds, cache, cachedir, parallel, ...])

Instantiate an Argovis Argo data loader

clear_cache()

Remove cache files and entries from resources opened with this fetcher

cname()

Return a unique string defining the constraints

dashboard(**kw)

filter_data_mode(ds, **kwargs)

filter_domain(ds)

Enforce rectangular box shape

filter_qc(ds, **kwargs)

filter_variables(ds[, mode])

init(*args, **kwargs)

Initialisation for a specific fetcher

json2dataframe(profiles)

Convert JSON data to a Pandas DataFrame (a sketch follows this methods list)

to_dataframe([errors])

Load Argo data and return a Pandas DataFrame (usage sketch after this methods list)

to_xarray([errors])

Download and return data as an xarray Dataset

url_encode(urls)

Return a safely encoded list of URLs
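
As referenced above, a minimal sketch of the json2dataframe conversion, assuming a hypothetical profile layout (a list of dicts carrying per-level ‘measurements’); the field names are assumptions for illustration, and the real Argovis payload may differ:

    import pandas as pd

    # Hypothetical Argovis-like profiles (schema assumed, not the actual API).
    profiles = [
        {'platform_number': 6902746, 'cycle_number': 12,
         'measurements': [{'pres': 5.0, 'temp': 24.1, 'psal': 36.2},
                          {'pres': 10.0, 'temp': 23.9, 'psal': 36.2}]},
    ]

    rows = []
    for p in profiles:
        for level in p['measurements']:  # one row per vertical level
            rows.append({'platform_number': p['platform_number'],
                         'cycle_number': p['cycle_number'], **level})

    df = pd.DataFrame(rows)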
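And a hedged usage sketch for to_xarray / to_dataframe, going through the user-facing argopy.DataFetcher with example region bounds:

    from argopy import DataFetcher

    # Example box: [lon_min, lon_max, lat_min, lat_max, pres_min,
    # pres_max, date_min, date_max]; the values are illustrative.
    f = DataFetcher(src='argovis').region(
        [-75, -45, 20, 30, 0, 100, '2021-01', '2021-02'])

    ds = f.to_xarray()     # data as an xarray Dataset
    df = f.to_dataframe()  # same data as a Pandas DataFrame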

Attributes

cachepath

Return path to cache file for this request

sha

Return a unique SHA for a specific cname / fetcher implementation

uri

Return the URL used to download data
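
A final sketch, again assuming the DataFetcher pass-through: the uri attribute can be inspected before any download to see which Argovis endpoints a request will hit (the float WMO number is an example value):

    from argopy import DataFetcher

    f = DataFetcher(src='argovis', cache=True).float(6902746)
    print(f.uri)        # list of safely encoded request URLs
    ds = f.to_xarray()  # perform the download (now cached on disk)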