Serialization and IO

xray supports direct serialization and IO to several file formats. For more options, consider exporting your objects to pandas (see the preceding section) and using its broad range of IO tools.

Pickle

The simplest way to serialize an xray object is to use Python’s built-in pickle module:

In [1]: import cPickle as pickle

In [2]: ds = xray.Dataset({'foo': (('x', 'y'), np.random.rand(4, 5))},
   ...:                   coords={'x': [10, 20, 30, 40],
   ...:                           'y': pd.date_range('2000-01-01', periods=5),
   ...:                           'z': ('x', list('abcd'))})
   ...: 

# use the highest protocol (-1) because it is way faster than the default
# text-based pickle format
In [3]: pkl = pickle.dumps(ds, protocol=-1)

In [4]: pickle.loads(pkl)
Out[4]: 
<xray.Dataset>
Dimensions:  (x: 4, y: 5)
Coordinates:
  * y        (y) datetime64[ns] 2000-01-01 2000-01-02 2000-01-03 2000-01-04 ...
  * x        (x) int64 10 20 30 40
    z        (x) |S1 'a' 'b' 'c' 'd'
Data variables:
    foo      (x, y) float64 0.127 0.9667 0.2605 0.8972 0.3767 0.3362 0.4514 ...

Pickle support is important because it doesn’t require any external libraries and lets you use xray objects with Python modules like multiprocessing. However, there are two important caveats:

  1. To simplify serialization, xray’s support for pickle currently loads all array values into memory before dumping an object. This means it is not suitable for serializing datasets too big to load into memory (e.g., from netCDF or OPeNDAP).
  2. Pickle will only work as long as the internal data structure of xray objects remains unchanged. Because the internal design of xray is still being refined, we make no guarantees (at this point) that objects pickled with this version of xray will work in future versions.
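The same pattern extends to saving objects on disk through an ordinary file object; a minimal sketch (the filename 'ds.pkl' is arbitrary):

# write the dataset to disk using the highest pickle protocol
with open('ds.pkl', 'wb') as f:
    pickle.dump(ds, f, protocol=-1)

# read it back
with open('ds.pkl', 'rb') as f:
    ds_restored = pickle.load(f)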

netCDF

Currently, the only disk based serialization format that xray directly supports is netCDF. netCDF is a file format for fully self-described datasets that is widely used in the geosciences and supported on almost all platforms. We use netCDF because xray was based on the netCDF data model, so netCDF files on disk directly correspond to Dataset objects. Recent versions of netCDF are based on the even more widely used HDF5 file format.

Reading and writing netCDF files with xray requires the netCDF4-Python library or scipy to be installed.

We can save a Dataset to disk using the Dataset.to_netcdf method:

In [5]: ds.to_netcdf('saved_on_disk.nc')

By default, the file is saved as netCDF4 (assuming netCDF4-Python is installed). You can control the format and engine used to write the file with the format and engine arguments.
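For example, to write a netCDF3 file that scipy can read without netCDF4-Python installed, something like the following sketch should work (the filename is arbitrary, and the exact format string spellings may vary by version; 'NETCDF3_64BIT' is the usual name for the 64-bit offset variant):

# a sketch: write a netCDF3 (64-bit offset) file via the scipy engine
ds.to_netcdf('saved_on_disk3.nc', format='NETCDF3_64BIT', engine='scipy')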

We can load netCDF files to create a new Dataset using open_dataset():

In [6]: ds_disk = xray.open_dataset('saved_on_disk.nc')

In [7]: ds_disk
Out[7]: 
<xray.Dataset>
Dimensions:  (x: 4, y: 5)
Coordinates:
  * y        (y) datetime64[ns] 2000-01-01 2000-01-02 2000-01-03 2000-01-04 ...
  * x        (x) int64 10 20 30 40
    z        (x) |S1 'a' 'b' 'c' 'd'
Data variables:
    foo      (x, y) float64 0.127 0.9667 0.2605 0.8972 0.3767 0.3362 0.4514 ...

A dataset can also be loaded or written to a specific group within a netCDF file. To load from a group, pass a group keyword argument to the open_dataset function. The group can be specified as a path-like string, e.g., to access subgroup ‘bar’ within group ‘foo’ pass ‘/foo/bar’ as the group argument. When writing multiple groups in one file, pass mode='a' to to_netcdf to ensure that each call does not delete the file.
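As a sketch (the filename 'groups.nc' and the group paths here are hypothetical, and we assume to_netcdf accepts the same group keyword):

# write two datasets into separate groups of one file; the second call
# passes mode='a' so it appends to the file rather than overwriting it
ds.to_netcdf('groups.nc', group='/foo/bar')
ds.to_netcdf('groups.nc', group='/foo/baz', mode='a')

# read a single group back
ds_bar = xray.open_dataset('groups.nc', group='/foo/bar')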

Data is loaded lazily from netCDF files. You can manipulate, slice and subset Dataset and DataArray objects, and no array values are loaded into memory until you try to perform some sort of actual computation. For an example of how these lazy arrays work, see the OPeNDAP section below.

It is important to note that when you modify values of a Dataset, even one linked to files on disk, only the in-memory copy you are manipulating in xray is modified: the original file on disk is never touched.

Tip

xray’s lazy loading of remote or on-disk datasets is often but not always desirable. Before performing computationally intense operations, it is often a good idea to load a dataset entirely into memory by invoking the load() method.
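For example, a minimal sketch:

# open lazily, then load every array into memory before heavy computation
ds_disk = xray.open_dataset('saved_on_disk.nc')
ds_disk.load()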

Datasets have a close() method to close the associated netCDF file. However, it’s often cleaner to use a with statement:

# this automatically closes the dataset after use
In [8]: with xray.open_dataset('saved_on_disk.nc') as ds:
   ...:     print(ds.keys())
   ...: 

Reading encoded data

NetCDF files follow some conventions for encoding datetime arrays (as numbers with a “units” attribute) and for packing and unpacking data (as described by the “scale_factor” and “add_offset” attributes). If the argument decode_cf=True (default) is given to open_dataset, xray will attempt to automatically decode the values in the netCDF objects according to CF conventions. Sometimes this will fail, for example, if a variable has an invalid “units” or “calendar” attribute. For these cases, you can turn this decoding off manually.
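For example, either of these opens the file without attempting the problematic decoding (ds_raw is just an illustrative name):

# skip only the datetime decoding; times stay as raw numbers with
# their 'units' attribute intact
ds_raw = xray.open_dataset('saved_on_disk.nc', decode_times=False)

# or skip CF decoding entirely
ds_raw = xray.open_dataset('saved_on_disk.nc', decode_cf=False)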

You can view this encoding information (among others) in the DataArray.encoding attribute:

In [9]: ds_disk['y'].encoding
Out[9]: 
{'calendar': u'proleptic_gregorian',
 'chunksizes': None,
 'complevel': 0,
 'contiguous': True,
 'dtype': dtype('float64'),
 'fletcher32': False,
 'least_significant_digit': None,
 'shuffle': False,
 'source': 'saved_on_disk.nc',
 'units': u'days since 2000-01-01 00:00:00',
 'zlib': False}

Note that all operations that manipulate variables (other than indexing) will remove encoding information.

Writing encoded data

Conversely, you can customize how xray writes netCDF files on disk by providing explicit encodings for each dataset variable. The encoding argument takes a dictionary with variable names as keys and variable-specific encodings as values. These encodings are saved as attributes on the netCDF variables on disk, which allows xray to faithfully read encoded data back into memory.

It is important to note that using encodings is entirely optional: if you do not supply any of these encoding options, xray will write data to disk using a default encoding, or the options in the encoding attribute, if set. This works perfectly fine in most cases, but encoding can be useful for additional control, especially for enabling compression.

In the file on disk, these encodings are saved as attributes on each variable, which allows xray and other CF-compliant tools for working with netCDF files to read the data correctly.

Scaling and type conversions

These encoding options work on any version of the netCDF file format:

  • dtype: Any valid NumPy dtype or string convertible to a dtype, e.g., 'int16' or 'float32'. This controls the type of the data written on disk.
  • _FillValue: Values of NaN in xray variables are remapped to this value when saved on disk. This is important when converting floating point data with missing values to integers on disk, because NaN is not a valid value for integer dtypes.
  • scale_factor and add_offset: Used to convert from encoded data on disk to the decoded data in memory, according to the formula decoded = scale_factor * encoded + add_offset.

These parameters can be fruitfully combined to compress discretized data on disk. For example, to save the variable foo with a precision of 0.1 in 16-bit integers while converting NaN to -9999, we would use encoding={'foo': {'dtype': 'int16', 'scale_factor': 0.1, '_FillValue': -9999}}. Compression and decompression with such discretization is extremely fast.
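In code, that example looks like this (the output filename is arbitrary):

# store foo as 16-bit integers with a precision of 0.1; NaN becomes -9999
ds.to_netcdf('discretized.nc',
             encoding={'foo': {'dtype': 'int16',
                               'scale_factor': 0.1,
                               '_FillValue': -9999}})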

Chunk-based compression

zlib, complevel, fletcher32, contiguous and chunksizes can be used for enabling netCDF4/HDF5’s chunk-based compression, as described in the documentation for createVariable for netCDF4-Python. This only works for netCDF4 files and thus requires using format='netCDF4' and either engine='netcdf4' or engine='h5netcdf'.
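For example, a sketch of enabling gzip compression for the foo variable (the filename and compression level are arbitrary choices):

# zlib=True turns on gzip compression; complevel ranges from 1 to 9
ds.to_netcdf('compressed.nc', format='netCDF4', engine='netcdf4',
             encoding={'foo': {'zlib': True, 'complevel': 4}})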

Chunk-based gzip compression can yield impressive space savings, especially for sparse data, but it comes with significant performance overhead. HDF5 libraries can only read complete chunks back into memory, and maximum decompression speed is in the range of 50-100 MB/s. Worse, HDF5’s compression and decompression currently cannot be parallelized with dask. For these reasons, we recommend trying discretization-based compression (described above) first.

Time units

The units and calendar attributes control how xray serializes datetime64 and timedelta64 arrays to datasets on disk as numeric values. The units encoding should be a string like 'days since 1900-01-01' for datetime64 data or a string like 'days' for timedelta64 data. calendar should be one of the calendar types supported by netCDF4-python: 'standard', 'gregorian', 'proleptic_gregorian', 'noleap', '365_day', '360_day', 'julian', 'all_leap', '366_day'.

By default, xray uses the ‘proleptic_gregorian’ calendar and units of the smallest time difference between values, with a reference time of the first time value.
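For example, to set the time encoding for the 'y' coordinate of the dataset above explicitly (the filename is arbitrary):

# encode times as days relative to an explicit reference date
ds.to_netcdf('with_times.nc',
             encoding={'y': {'units': 'days since 2000-01-01',
                             'calendar': 'gregorian'}})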

OPeNDAP

xray includes support for OPeNDAP (via the netCDF4 library or Pydap), which lets us access large datasets over HTTP.

For example, we can open a connection to GBs of weather data produced by the PRISM project, and hosted by IRI at Columbia:

In [10]: remote_data = xray.open_dataset(
   ....:     'http://iridl.ldeo.columbia.edu/SOURCES/.OSU/.PRISM/.monthly/dods',
   ....:     decode_times=False)
   ....: 

In [11]: remote_data
Out[11]: 
<xray.Dataset>
Dimensions:  (T: 1422, X: 1405, Y: 621)
Coordinates:
  * X        (X) float32 -125.0 -124.958 -124.917 -124.875 -124.833 -124.792 -124.75 ...
  * T        (T) float32 -779.5 -778.5 -777.5 -776.5 -775.5 -774.5 -773.5 -772.5 -771.5 ...
  * Y        (Y) float32 49.9167 49.875 49.8333 49.7917 49.75 49.7083 49.6667 49.625 ...
Data variables:
    ppt      (T, Y, X) float64 ...
    tdmean   (T, Y, X) float64 ...
    tmax     (T, Y, X) float64 ...
    tmin     (T, Y, X) float64 ...
Attributes:
    Conventions: IRIDL
    expires: 1375315200

Note

Like many real-world datasets, this dataset does not entirely follow CF conventions. Unexpected formats will usually cause xray’s automatic decoding to fail. The way to work around this is either to set decode_cf=False in open_dataset to turn off all use of CF conventions, or to disable only the troublesome parser. In this case, we set decode_times=False because the time axis here provides the calendar attribute in a format that xray does not expect (the integer 360 instead of a string like '360_day').

We can select and slice this data any number of times, and nothing is loaded over the network until we look at particular values:

In [12]: tmax = remote_data['tmax'][:500, ::3, ::3]

In [13]: tmax
Out[13]: 
<xray.DataArray 'tmax' (T: 500, Y: 207, X: 469)>
[48541500 values with dtype=float64]
Coordinates:
  * Y        (Y) float32 49.9167 49.7917 49.6667 49.5417 49.4167 49.2917 ...
  * X        (X) float32 -125.0 -124.875 -124.75 -124.625 -124.5 -124.375 ...
  * T        (T) float32 -779.5 -778.5 -777.5 -776.5 -775.5 -774.5 -773.5 ...
Attributes:
    pointwidth: 120
    standard_name: air_temperature
    units: Celsius_scale
    expires: 1443657600

# the data is downloaded automatically when we make the plot
In [14]: tmax[0].plot()
[figure opendap-prism-tmax: map of tmax at the first time step]

Combining multiple files

NetCDF files are often encountered in collections, e.g., with different files corresponding to different model runs. xray can straightforwardly combine such files into a single Dataset by making use of concat().

Note

Version 0.5 includes experimental support for manipulating datasets that don’t fit into memory with dask. If you have dask installed, you can open multiple files simultaneously using open_mfdataset():

xray.open_mfdataset('my/files/*.nc')

This function automatically concatenates and merges multiple files into a single xray Dataset. For more details, see Reading and writing data.

For example, here’s how we could approximate MFDataset from the netCDF4 library:

from glob import glob
import xray

def read_netcdfs(files, dim):
    # glob expands paths with * to a list of files, like the unix shell
    paths = sorted(glob(files))
    datasets = [xray.open_dataset(p) for p in paths]
    combined = xray.concat(datasets, dim)
    return combined

read_netcdfs('/all/my/files/*.nc', dim='time')

This function will work in many cases, but it’s not very robust. First, it never closes files, which means it will fail once you need to open more than a few thousand files. Second, it assumes that you want all the data from each file and that it can all fit into memory. In many situations, you only need a small subset or an aggregated summary of the data from each file.

Here’s a slightly more sophisticated example of how to remedy these deficiencies:

def read_netcdfs(files, dim, transform_func=None):
    def process_one_path(path):
        # use a context manager, to ensure the file gets closed after use
        with xray.open_dataset(path) as ds:
            # transform_func should do some sort of selection or
            # aggregation
            if transform_func is not None:
                ds = transform_func(ds)
            # load all data from the transformed dataset, to ensure we can
            # use it after closing each original file
            ds.load()
            return ds

    paths = sorted(glob(files))
    datasets = [process_one_path(p) for p in paths]
    combined = xray.concat(datasets, dim)
    return combined

# here we suppose we only care about the combined mean of each file;
# you might also use indexing operations like .sel to subset datasets
read_netcdfs('/all/my/files/*.nc', dim='time',
             transform_func=lambda ds: ds.mean())

This pattern works well and is very robust. We’ve used similar code to process tens of thousands of files constituting hundreds of GB of data.