Contributing to argopy
First off, thanks for taking the time to contribute!
If you seek support for your argopy usage or if you don’t want to read this whole thing and just have a question: visit the chat room at gitter.
All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
Over time, we will expand this document with guidelines for each of these kinds of contribution.
If you are brand new to argopy or open source development, we recommend going through the GitHub “issues” tab to find issues that interest you. There are a number of issues listed under Documentation and good first issue where you could start out. Once you’ve found an interesting issue, you can return here to get your development environment set up.
Please don’t file an issue to ask a question, instead visit the chat room at gitter.
Bug reports are an important part of making argopy more stable. A complete bug report will allow others to reproduce the bug and provide insight into fixing it. See this stackoverflow article for tips on writing a good bug report.
Trying the bug producing code out on the master branch is often a worthwhile exercise to confirm the bug still exists. It is also worth searching existing bug reports and pull requests to see if the issue has already been reported and/or fixed.
Bug reports must:
Include a short, self contained Python snippet reproducing the problem. You can format the code nicely by using GitHub Flavored Markdown:
```python
>>> import argopy as ar
>>> ds = ar.DataFetcher(backend='erddap').float(5903248).to_xarray()
...
```
Include the full version string of argopy and its dependencies. You can use the built in function:
>>> import argopy
>>> argopy.show_versions()
Explain why the current behavior is wrong/not desired and what you expect instead.
The issue will then show up to the argopy community and be open to comments/ideas.
If you’re not the developer type, contributing to the documentation is still of huge value. You don’t even have to be an expert on argopy to do so! In fact, there are sections of the docs that are worse off after being written by experts. If something in the docs doesn’t make sense to you, updating the relevant section after you figure it out is a great way to ensure it will help the next person.
The documentation is written in reStructuredText, which is almost like writing in plain English, and built using Sphinx. The Sphinx Documentation has an excellent introduction to reST. Review the Sphinx docs to perform more complex changes to the documentation as well.
Some other important things to know about the docs:
The argopy documentation consists of two parts: the docstrings in the code itself and the docs in this folder
The docstrings are meant to provide a clear explanation of the usage of the individual functions, while the documentation in this folder consists of tutorial-like overviews per topic together with some other information (what’s new, installation, etc).
The docstrings follow the Numpy Docstring Standard, which is used widely in the Scientific Python community. This standard specifies the format of the different sections of the docstring. See this document for a detailed explanation, or look at some of the existing functions to extend it in a similar manner.
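As a brief illustration, a function documented in the NumPy docstring style might look like the following (the function itself is a hypothetical example, not part of argopy):

```python
def mean_pressure(pressures):
    """Compute the mean of a sequence of pressure values.

    Parameters
    ----------
    pressures : sequence of float
        Pressure values, in decibars.

    Returns
    -------
    float
        The arithmetic mean of ``pressures``.

    Examples
    --------
    >>> mean_pressure([10.0, 20.0])
    15.0
    """
    return sum(pressures) / len(pressures)
```

Each section (Parameters, Returns, Examples) has a fixed name and underline-free layout that numpydoc and Sphinx parse to build the API pages.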
The tutorials make heavy use of the ipython directive sphinx extension. This directive lets you put code in the documentation which will be run during the doc build. For example:
.. ipython:: python

    x = 2
    x ** 3
will be rendered as:
In : x = 2

In : x ** 3
Out: 8
Almost all code examples in the docs are run (and the output saved) during the doc build. This approach means that code examples will always be up to date, but it does make the doc building a bit more complex.
Our API documentation in docs/api.rst houses the auto-generated documentation from the docstrings. For classes, there are a few subtleties around controlling which methods and attributes have pages auto-generated.

Every method should be included in api.rst, else Sphinx will emit a warning.
Make sure to follow the instructions on creating a development environment below; to build the docs, you also need to install the documentation-specific requirements:

$ conda create --yes -n argopy-docs python=3.6 xarray dask numpy pytest future gsw sphinx sphinx_rtd_theme
$ conda activate argopy-docs
$ pip install argopy
$ pip install -r docs/requirements.txt
Navigate to your local argopy/docs/ directory in the console and run:

$ make html

Then you can find the HTML output in the build folder.
The first time you build the docs, it will take quite a while because it has to run all the code examples and build all the generated docstring pages. In subsequent invocations, Sphinx will try to only build the pages that have been modified.
If you want to do a full clean build, do:
$ make clean
$ make html
Anyone interested in helping to develop argopy needs to create their own fork of our git repository. (Follow the github forking instructions. You will need a github account.)
Clone your fork on your local machine.
$ git clone git@github.com:USERNAME/argopy
(In the above, replace USERNAME with your github user name.)
Then set your fork to track the upstream argopy repo.
$ cd argopy
$ git remote add upstream git://github.com/euroargodev/argopy.git
You will want to periodically sync your master branch with the upstream master.
$ git fetch upstream
$ git rebase upstream/master
Never make any commits on your local master branch. Instead open a feature branch for every new development task.
$ git checkout -b cool_new_feature
(Replace cool_new_feature with an appropriate description of your feature.) At this point you work on your new feature, using git add to add your changes. When your feature is complete and well tested, commit your changes
$ git commit -m 'did a bunch of great work'
and push your branch to github.
$ git push origin cool_new_feature
At this point, you go find your fork on github.com and create a pull request. Clearly describe what you have done in the comments. If your pull request fixes an issue or adds a useful new feature, the team will gladly merge it.
After your pull request is merged, you can switch back to the master branch, rebase, and delete your feature branch. You will find your new feature incorporated into argopy.
$ git checkout master
$ git fetch upstream
$ git rebase upstream/master
$ git branch -d cool_new_feature
This is how to create a virtual environment into which to test-install argopy, install it, check the version, and tear down the virtual environment.
$ conda create --yes -n argopy-tests python=3.6 xarray dask numpy pytest future gsw
$ conda activate argopy-tests
$ pip install argopy
$ python -c 'import argopy; print(argopy.__version__);'
$ conda deactivate
$ conda env remove --yes -n argopy-tests
Writing good code is not just about what you write. It is also about how you write it. During Continuous Integration testing, several tools will be run to check your code for stylistic errors. Generating any warnings will cause the test to fail. Thus, good style is a requirement for submitting code to argopy.
argopy uses several tools to ensure a consistent code format throughout the project:
Flake8 for general code quality

Install it with:

$ pip install flake8

and then run it from the root of the argopy repository to qualify your code.
If you want to add your own data fetcher for a new service, keep in mind the following.
Data fetchers are responsible for:
loading all available data from a given source and providing at least a to_xarray() method

making data compliant with Argo standards (data type, variable names, attributes, etc.)
Data fetchers must:

inherit from the ArgoDataFetcherProto class

define the access_points property, e.g. ['wmo', 'box']

define the exit_formats property, e.g. ['xarray']

define the dataset_ids property, e.g. ['phy', 'ref', 'bgc']

provide methods to filter data according to user level or requests
It is the responsibility of the facade API (argopy.fetchers.ArgoDataFetcher) to run filters according to user level or requests, not the data fetcher.
A new data fetcher must comply with the following.

Inherit from the ArgoDataFetcherProto class. This enforces minimal internal design compliance.
The new fetcher must define the access_points, exit_formats and dataset_ids properties at the top of its implementation:

access_points = ['wmo', 'box']
exit_formats = ['xarray']
dataset_ids = ['phy', 'bgc']  # First is default
Values depend on what the new access point can return and what you want to implement. A good start is with the wmo access point and the phy dataset ID. The xarray data format is the minimum required. These variables are used by the facade to auto-discover the fetcher capabilities. The dataset_ids property is used to determine which variables can be retrieved.
The new fetcher must come with at least a Fetch_wmo or Fetch_box class, basically one for each of the access_points listed as properties. More generally, we may have a main class that provides the key functionality to retrieve data from the source, and then one class for each of the access_points of your fetcher.
This pattern could look like this:

class NewDataFetcher(ArgoDataFetcherProto)
class Fetch_wmo(NewDataFetcher)
class Fetch_box(NewDataFetcher)

It could also be like:

class Fetch_wmo(ArgoDataFetcherProto)
class Fetch_box(ArgoDataFetcherProto)
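To make the second pattern concrete, here is a minimal, hypothetical sketch of such a fetcher. The stub base class stands in for argopy's actual prototype class, and the method bodies are placeholders for illustration, not the real implementation:

```python
class ArgoDataFetcherProto:
    """Stand-in stub for argopy's fetcher prototype class (hypothetical)."""

    def to_xarray(self):
        raise NotImplementedError


class Fetch_wmo(ArgoDataFetcherProto):
    # Capability properties auto-discovered by the facade:
    access_points = ['wmo', 'box']
    exit_formats = ['xarray']
    dataset_ids = ['phy']  # First is default

    def __init__(self, WMO, CYC=None, ds='phy'):
        # WMO is always passed; CYC is optional
        # (the facade does not pass CYC when a float is requested).
        self.WMO = WMO if isinstance(WMO, list) else [WMO]
        self.CYC = CYC
        self.dataset_id = ds

    def to_xarray(self):
        # Placeholder: a real fetcher would download data from its
        # source here and format it to Argo standards.
        raise NotImplementedError("fetching not implemented in this sketch")
```

The facade would inspect the class-level properties to discover what this fetcher supports, then instantiate it with the user's request, e.g. `Fetch_wmo(5903248)`.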
Note that the class names Fetch_wmo and Fetch_box must not change; they are also used by the facade to auto-discover the fetcher capabilities.

Fetch_wmo is used to retrieve platforms and eventually profiles data. Its __init__() method must take a WMO and a CYC as first and second options. WMO is always passed, CYC is optional. These are passed by the facade to implement the fetcher.float and fetcher.profile methods. When a float is requested, the CYC option is not passed by the facade. Last, WMO and CYC are either a single integer or a list of integers: this means that Fetch_wmo must be able to handle more than one float/platform retrieval.
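Since WMO and CYC may each arrive as a single integer or a list of integers, a small normalization helper (hypothetical, not argopy's actual code) keeps the retrieval loop uniform:

```python
def to_list(value):
    """Return ``value`` as a list of integers.

    Accepts a single integer, a list/tuple of integers, or None,
    so a fetcher can always iterate over floats or cycles the same way.
    """
    if value is None:
        return []  # e.g. CYC was not passed by the facade
    if isinstance(value, (list, tuple)):
        return [int(v) for v in value]
    return [int(value)]
```

With this, `to_list(5903248)` and `to_list([5903248, 6902746])` both yield lists the fetcher can loop over.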
Fetch_box is used to retrieve a rectangular domain in space and time. Its __init__() method must take a BOX as first option, which is passed as a list(lon_min: float, lon_max: float, lat_min: float, lat_max: float, pres_min: float, pres_max: float, date_min: str, date_max: str) from the facade. The two bounding dates [date_min and date_max] should be optional (if not specified, the entire time series is requested by the user).
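As a hedged sketch, a Fetch_box constructor might validate such a BOX argument as follows (the function name and error handling are illustrative only, not argopy's actual code):

```python
def parse_box(box):
    """Split a BOX list into space and time constraints.

    ``box`` must contain 6 numbers (lon_min, lon_max, lat_min, lat_max,
    pres_min, pres_max), optionally followed by one or two date strings.
    Missing dates mean the entire time series is requested.
    """
    if len(box) not in (6, 7, 8):
        raise ValueError("BOX must have 6 numbers plus up to 2 date strings")
    space = [float(v) for v in box[:6]]   # spatial + pressure bounds
    dates = [str(d) for d in box[6:]]     # possibly empty
    return space, dates
```

For example, `parse_box([-75, -45, 20, 30, 0, 100, '2011-01', '2011-06'])` separates the six numeric bounds from the two optional date strings.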
All http requests must go through the internal httpstore, a wrapper around fsspec that makes request caching easy to manage. For json requests, you can simply use it this way:

import json
from argopy.stores import httpstore

with httpstore(timeout=120).open("https://argovis.colorado.edu/catalog/profiles/5904797_12") as of:
    profile = json.load(of)
Last but not least, about the output data: in argopy, we want to provide data for both expert and standard users. This is explained and illustrated in the documentation. For a new data fetcher, this means that the data content should be curated and cleaned of any internal/jargon variables that are not part of the Argo ADMT vocabulary. For instance, variables like geoLocation are not allowed. This will ensure that whatever data source is set by users, the output xarray or dataframe will be formatted consistently and contain the same variables. This will also ensure that other argopy features, like plotting or xarray data manipulation, can be used on the new fetcher output.