Data sources#

argopy can fetch data from several data sources. To make sure you understand where your data come from, have a look at this section.

Let’s start with standard import:

In [1]: import argopy

In [2]: from argopy import DataFetcher as ArgoDataFetcher

In [3]: argopy.reset_options()

Available data sources#

argopy can get access to Argo data from the following sources:

  1. ⭐ the Ifremer erddap server (Default).

    The erddap server database is updated daily and doesn’t require you to download any more data than what you need. You can select this data source with the keyword erddap and the methods described below. The Ifremer erddap dataset is based on mono-profile files of the GDAC. Since this is the most efficient way to fetch Argo data, it is the default data source in argopy.

  2. 🌐 an Argo GDAC server or any other GDAC-compliant local folder.

    You can fetch data from any of the 3 official GDAC online servers: the Ifremer https and ftp servers and the US ftp server. This data source can also point toward your own local copy of the GDAC ftp content. You can select this data source with the keyword gdac and the methods described below.

  3. 👁 the Argovis server.

    The Argovis server database is updated daily and only provides access to curated Argo data (QC=1 only). You can select this data source with the keyword argovis and methods described below.
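The “QC=1 only” curation applied by Argovis can be illustrated with a plain-Python sketch. The variable names and sample values below are purely illustrative, not the Argovis API: the idea is simply to keep only measurements whose quality-control flag equals 1 (“good data” in the Argo QC convention).

```python
# Illustrative only: filter measurements down to QC flag 1 ("good data"),
# mimicking the curation applied by the Argovis data source.
# The lists below are made-up sample values, not real Argovis output.
temp = [16.14, 16.03, 99.9, 2.43]
temp_qc = [1, 1, 4, 1]  # 1 = good, 4 = bad (Argo QC convention)

# Keep only values whose QC flag is 1
curated = [t for t, qc in zip(temp, temp_qc) if qc == 1]
print(curated)  # [16.14, 16.03, 2.43]
```

With argopy itself, this curation is already applied server-side when you select the argovis source.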

Selecting a source#

You have several ways to specify which data source you want to use:

  • using argopy global options:

In [4]: argopy.set_options(src='erddap')
Out[4]: <argopy.options.set_options at 0x7fd7126fbe80>
  • in a temporary context:

In [5]: with argopy.set_options(src='erddap'):
   ...:     loader = ArgoDataFetcher().profile(6902746, 34)
  • with an argument in the data fetcher:

In [6]: loader = ArgoDataFetcher(src='erddap').profile(6902746, 34)

Comparing data sources#


Each data source has its own features and capabilities. Here is a summary:

Table of argopy data sources features#

Features compared across the erddap, gdac and argovis data sources:

  • Access Points: 🗺 region, 🤖 float

  • User mode: 🏄 expert, 🏊 standard, 🚣 research

  • Dataset: core (T/S), Reference data for DMQC

Fetched data and variables#

You may wonder whether the fetched data differ from one data source to another.
This will depend on the last update of each data source and of your local data.

Let’s retrieve one float’s data from a local sample of the GDAC ftp (a sample GDAC ftp is downloaded automatically with the method argopy.tutorial.open_dataset()):

# Download ftp sample and get the ftp local path:
In [7]: ftproot = argopy.tutorial.open_dataset('gdac')[0]

# then fetch data:
In [8]: with argopy.set_options(src='gdac', ftp=ftproot):
   ...:     ds = ArgoDataFetcher().float(1900857).load().data
   ...:     print(ds)
<xarray.Dataset>
Dimensions:          (N_POINTS: 20966)
Coordinates:
  * N_POINTS         (N_POINTS) int64 0 1 2 3 4 ... 20962 20963 20964 20965
    TIME             (N_POINTS) datetime64[ns] 2008-02-25T04:03:00 ... 2013-0...
    LATITUDE         (N_POINTS) float64 -39.93 -39.93 -39.93 ... -44.16 -44.16
    LONGITUDE        (N_POINTS) float64 10.81 10.81 10.81 ... 92.65 92.65 92.65
Data variables: (12/15)
    CYCLE_NUMBER     (N_POINTS) int64 0 0 0 0 0 0 0 ... 192 192 192 192 192 192
    DATA_MODE        (N_POINTS) <U1 'D' 'D' 'D' 'D' 'D' ... 'D' 'D' 'D' 'D' 'D'
    DIRECTION        (N_POINTS) <U1 'D' 'D' 'D' 'D' 'D' ... 'A' 'A' 'A' 'A' 'A'
    PLATFORM_NUMBER  (N_POINTS) int64 1900857 1900857 ... 1900857 1900857
    POSITION_QC      (N_POINTS) int64 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1
    PRES             (N_POINTS) float64 17.0 25.0 35.0 ... 1.964e+03 1.987e+03
    ...               ...
    PSAL_ERROR       (N_POINTS) float64 0.02 0.02 0.02 0.02 ... 0.02 0.02 0.02
    PSAL_QC          (N_POINTS) int64 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1
    TEMP             (N_POINTS) float64 16.14 16.14 16.03 ... 2.431 2.422 2.413
    TEMP_ERROR       (N_POINTS) float64 0.002 0.002 0.002 ... 0.002 0.002 0.002
    TEMP_QC          (N_POINTS) int64 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1
    TIME_QC          (N_POINTS) int64 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1
Attributes:
    DATA_ID:              ARGO
    Fetched_from:         /home/docs/.argopy_tutorial_data/ftp
    Fetched_by:           docs
    Fetched_date:         2024/04/22
    Fetched_constraints:  WMO1900857
    Fetched_uri:          /home/docs/.argopy_tutorial_data/ftp/dac/coriolis/1...
    history:              Variables filtered according to DATA_MODE; Variable...
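The Fetched_uri attribute above reflects the standard GDAC directory layout, where a float’s files live under dac/&lt;DAC&gt;/&lt;WMO&gt;/. As a minimal sketch (the helper below is ours, not an argopy function), assuming the Coriolis DAC:

```python
from pathlib import Path

def float_dir(gdac_root: str, dac: str, wmo: int) -> Path:
    """Path to a float's folder in a GDAC tree (illustrative helper,
    not part of argopy): <gdac_root>/dac/<dac>/<wmo>."""
    return Path(gdac_root) / "dac" / dac / str(wmo)

print(float_dir("/home/docs/.argopy_tutorial_data/ftp", "coriolis", 1900857))
# → /home/docs/.argopy_tutorial_data/ftp/dac/coriolis/1900857 on a POSIX system
```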

Status of sources#

With remote, online data sources, it may happen that the data server experiences down time. With local data sources, the availability of the path is checked when it is set. But it may happen that the path points to a disk that gets unmounted or unplugged after the option is set.
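A minimal way to guard against this on the user side is to re-check the path just before fetching. This is a plain-Python sketch, not an argopy feature:

```python
import os

def gdac_path_available(ftp_root: str) -> bool:
    """Return True if the local GDAC root still looks mounted and readable.
    Illustrative helper, not part of argopy."""
    return os.path.isdir(ftp_root) and os.access(ftp_root, os.R_OK)

# Example with a path that does not exist (e.g. an unplugged disk):
print(gdac_path_available("/mnt/unplugged_disk/gdac"))  # False
```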

If you’re running your analysis in a Jupyter notebook, you can use the argopy.status() method to insert a data status monitor in a cell output. All available data sources will be monitored continuously.


If one of the data sources becomes unavailable, you will see the status bar change to something like:


Note that the argopy.status() method has a refresh option to let you specify the refresh rate in seconds of the monitoring.

Last, you can check out the argopy status webpage, which monitors all resources important to the software.

Setting-up your own local copy of the GDAC ftp#

Data fetching with the gdac data source requires you to specify the path toward your local copy of the GDAC ftp server with the ftp option.

This is not an issue for expert users, but standard users may wonder how to set this up. The primary distribution point for Argo data, the only one with full support from data centers and with nearly 100% availability, is the GDAC ftp. Two mirror servers are available.

If you want to get your own copy of the ftp server content, you have 2 options detailed below.

Copy with DOI reference#

If you need an Argo database referenced with a DOI, one that you could use to make your analysis reproducible, then we recommend visiting the Argo GDAC snapshots archive. There, you will find links toward monthly snapshots of the Argo database, and each snapshot has its own DOI.

For instance, one snapshot DOI points toward the snapshot archived on February 10th, 2022. Simply download the tar archive file (about 44 GB) and uncompress it locally.
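Uncompressing the snapshot can be done with the standard tarfile module. A minimal sketch, assuming the archive is a gzipped tar (the helper and file names below are placeholders, not the real snapshot filename):

```python
import tarfile

def extract_snapshot(archive: str, dest: str) -> None:
    """Uncompress a GDAC snapshot tar archive into dest.
    Illustrative helper: assumes a .tar.gz archive."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=dest)

# e.g. extract_snapshot("argo_snapshot.tar.gz", "./argo_gdac")
# (placeholder archive name; the real one comes from the snapshot page)
```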

You’re done !

Synchronized copy#

If you need a local Argo database that is always up to date with the GDAC server, Ifremer provides a convenient rsync service. The rsync server provides a synchronization service between the “dac” directory of the GDAC and a user mirror. The “dac” index files are also available from “argo-index”.

From the user side, the rsync service:

  • Downloads the new files

  • Downloads the updated files

  • Removes the files that have been removed from the GDAC

  • Compresses/uncompresses the files during the transfer

  • Preserves the files creation/update dates

  • Lists all the files that have been transferred (easy to use for a user side post-processing)

To synchronize the whole dac directory of the Argo GDAC:

rsync -avzh --delete /home/mydirectory/...

To synchronize the index:

rsync -avzh --delete /home/mydirectory/...


The first synchronisation of the whole dac directory of the Argo GDAC (365 GB) can take quite a long time (several hours).
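The rsync flags shown above can be summed up in a small sketch that builds the command line without running it. The server address below is a placeholder (the actual rsync server name is given by the Ifremer service), and the helper is ours, not part of argopy:

```python
def rsync_cmd(source: str, dest: str) -> list:
    """Build the rsync argv used for GDAC synchronization (illustrative).

    -a archive mode (recursive, preserves dates and permissions),
    -v verbose, -z compress during transfer, -h human-readable sizes,
    --delete removes local files that were removed on the server.
    """
    return ["rsync", "-avzh", "--delete", source, dest]

# "GDAC_RSYNC_SERVER::argo/" is a placeholder for the actual rsync module path
print(rsync_cmd("GDAC_RSYNC_SERVER::argo/", "/home/mydirectory/argo/"))
```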