API reference¶
This page provides an auto-generated summary of xarray’s API. For more details and examples, refer to the relevant chapters in the main part of the documentation.
Top-level functions¶
apply_ufunc (func, *args, input_core_dims, …) |
Apply a vectorized function for unlabeled arrays on xarray objects. |
align (*objects[, join, copy, indexes, exclude]) |
Given any number of Dataset and/or DataArray objects, returns new objects with aligned indexes and dimension sizes. |
broadcast (*args, **kwargs) |
Explicitly broadcast any number of DataArray or Dataset objects against one another. |
concat (objs[, dim, data_vars, coords, …]) |
Concatenate xarray objects along a new or existing dimension. |
merge (objects[, compat, join]) |
Merge any number of xarray objects into a single Dataset as variables. |
where (cond, x, y) |
Return elements from x or y depending on cond. |
set_options (**kwargs) |
Set options for xarray in a controlled context. |
full_like (other, fill_value[, dtype]) |
Return a new object with the same shape and type as a given object. |
zeros_like (other[, dtype]) |
Shorthand for full_like(other, 0, dtype) |
ones_like (other[, dtype]) |
Shorthand for full_like(other, 1, dtype) |
dot (*arrays[, dims]) |
Generalized dot product for xarray objects. |
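A minimal sketch of a few of these top-level functions in combination; the coordinate values and names are illustrative, not taken from this reference:

```python
import xarray as xr

# Two DataArrays with overlapping but unequal coordinate labels
a = xr.DataArray([1, 2, 3], dims="x", coords={"x": [0, 1, 2]})
b = xr.DataArray([10, 20, 30], dims="x", coords={"x": [1, 2, 3]})

# align keeps only the shared labels by default (join="inner")
a2, b2 = xr.align(a, b)

# concat joins objects along a new or existing dimension
c = xr.concat([a2, b2], dim="case")

# where selects elementwise from x or y depending on cond
masked = xr.where(a > 1, a, 0)

# zeros_like returns a matching-shape array filled with zeros
z = xr.zeros_like(a)
```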
Dataset¶
Creating a dataset¶
Dataset ([data_vars, coords, attrs, compat]) |
A multi-dimensional, in-memory array database. |
decode_cf (obj[, concat_characters, …]) |
Decode the given Dataset or Datastore according to CF conventions into a new Dataset. |
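For illustration, a small in-memory Dataset built from the constructor above; the variable and coordinate names here are made up:

```python
import numpy as np
import xarray as xr

# data_vars maps names to (dims, data); coords label the dimensions
ds = xr.Dataset(
    data_vars={"temperature": (("time", "site"), np.arange(6.0).reshape(2, 3))},
    coords={"time": [10, 20], "site": ["a", "b", "c"]},
    attrs={"title": "toy example"},
)
```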
Attributes¶
Dataset.dims |
Mapping from dimension names to lengths. |
Dataset.sizes |
Mapping from dimension names to lengths. |
Dataset.data_vars |
Dictionary of xarray.DataArray objects corresponding to data variables |
Dataset.coords |
Dictionary of xarray.DataArray objects corresponding to coordinate variables |
Dataset.attrs |
Dictionary of global attributes on this dataset |
Dataset.encoding |
Dictionary of global encoding attributes on this dataset |
Dataset.indexes |
OrderedDict of pandas.Index objects used for label based indexing |
Dataset.get_index (key) |
Get an index for a dimension, with fall-back to a default RangeIndex |
Dataset.chunks |
Block dimensions for this dataset’s data or None if it’s not a dask array. |
Dataset.nbytes |
Total bytes of all variables in this dataset. |
Dictionary interface¶
Datasets implement the mapping interface with keys given by variable names and values given by DataArray objects.
Dataset.__getitem__ (key) |
Access variables or coordinates of this dataset as a DataArray . |
Dataset.__setitem__ (key, value) |
Add an array to this dataset. |
Dataset.__delitem__ (key) |
Remove a variable from this dataset. |
Dataset.update (other[, inplace]) |
Update this dataset’s variables with those from another dataset. |
Dataset.items () |
Return (name, DataArray) pairs for this dataset's mapping interface. |
Dataset.values () |
Return the DataArray values of this dataset's mapping interface. |
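A short sketch of the mapping interface described above; the variable names are illustrative:

```python
import xarray as xr

ds = xr.Dataset({"a": ("x", [1, 2, 3])}, coords={"x": [0, 1, 2]})

da = ds["a"]                # __getitem__ returns a DataArray
ds["b"] = ("x", [4, 5, 6])  # __setitem__ adds a new variable
del ds["b"]                 # __delitem__ removes it again
```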
Dataset contents¶
Dataset.copy ([deep]) |
Returns a copy of this dataset. |
Dataset.assign (**kwargs) |
Assign new data variables to a Dataset, returning a new object with all the original variables in addition to the new ones. |
Dataset.assign_coords (**kwargs) |
Assign new coordinates to this object. |
Dataset.assign_attrs (*args, **kwargs) |
Assign new attrs to this object. |
Dataset.pipe (func, *args, **kwargs) |
Apply func(self, *args, **kwargs) |
Dataset.merge (other[, inplace, …]) |
Merge the arrays of two datasets into a single dataset. |
Dataset.rename ([name_dict, inplace]) |
Returns a new object with renamed variables and dimensions. |
Dataset.swap_dims (dims_dict[, inplace]) |
Returns a new object with swapped dimensions. |
Dataset.expand_dims (dim[, axis]) |
Return a new object with an additional axis (or axes) inserted at the corresponding position in the array shape. |
Dataset.drop (labels[, dim]) |
Drop variables or index labels from this dataset. |
Dataset.set_coords (names[, inplace]) |
Given names of one or more variables, set them as coordinates |
Dataset.reset_coords ([names, drop, inplace]) |
Given names of coordinates, reset them to become variables |
Comparisons¶
Dataset.equals (other) |
Two Datasets are equal if they have matching variables and coordinates, all of which are equal. |
Dataset.identical (other) |
Like equals, but also checks all dataset attributes and the attributes on all variables and coordinates. |
Dataset.broadcast_equals (other) |
Two Datasets are broadcast equal if they are equal after broadcasting all variables against each other. |
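The difference between equals and identical can be seen with two datasets that share data but not attributes (a toy sketch, names illustrative):

```python
import xarray as xr

da = xr.DataArray([1, 2], dims="t", coords={"t": [0, 1]})
ds1 = xr.Dataset({"v": da}, attrs={"source": "one"})
ds2 = xr.Dataset({"v": da}, attrs={"source": "two"})

same_data = ds1.equals(ds2)           # attributes are ignored
same_everything = ds1.identical(ds2)  # attributes are compared too
```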
Indexing¶
Dataset.loc |
Attribute for location based indexing. |
Dataset.isel ([indexers, drop]) |
Returns a new dataset with each array indexed along the specified dimension(s). |
Dataset.sel ([indexers, method, tolerance, drop]) |
Returns a new dataset with each array indexed by tick labels along the specified dimension(s). |
Dataset.squeeze ([dim, drop, axis]) |
Return a new object with squeezed data. |
Dataset.reindex ([indexers, method, …]) |
Conform this object onto a new set of indexes, filling in missing values with NaN. |
Dataset.reindex_like (other[, method, …]) |
Conform this object onto the indexes of another object, filling in missing values with NaN. |
Dataset.set_index ([append, inplace]) |
Set Dataset (multi-)indexes using one or more existing coordinates or variables. |
Dataset.reset_index (dims_or_levels[, drop, …]) |
Reset the specified index(es) or multi-index level(s). |
Dataset.reorder_levels ([inplace]) |
Rearrange index levels using input order. |
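A minimal sketch contrasting positional (isel), label-based (sel) and conforming (reindex) indexing; coordinate values are made up for illustration:

```python
import xarray as xr

ds = xr.Dataset(
    {"v": (("x", "y"), [[1, 2, 3], [4, 5, 6]])},
    coords={"x": [10, 20], "y": ["a", "b", "c"]},
)

by_position = ds.isel(x=0)        # integer indexing: first row
by_label = ds.sel(x=20, y="b")    # label indexing: v == 5

# reindex conforms onto new labels, filling missing ones with NaN
wider = ds.reindex(x=[10, 20, 30])
```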
Missing value handling¶
Dataset.isnull (*args, **kwargs) |
Test each value in the dataset for whether it is a missing value. |
Dataset.notnull (*args, **kwargs) |
Test each value in the dataset for whether it is not a missing value. |
Dataset.combine_first (other) |
Combine two Datasets, default to data_vars of self. |
Dataset.count ([dim, keep_attrs]) |
Reduce this Dataset’s data by applying count along some dimension(s). |
Dataset.dropna (dim[, how, thresh, subset]) |
Returns a new dataset with dropped labels for missing values along the provided dimension. |
Dataset.fillna (value) |
Fill missing values in this object. |
Dataset.ffill (dim[, limit]) |
Fill NaN values by propagating values forward. |
Dataset.bfill (dim[, limit]) |
Fill NaN values by propagating values backward. |
Dataset.interpolate_na ([dim, method, limit, …]) |
Interpolate values according to different methods. |
Dataset.where (cond[, other, drop]) |
Filter elements from this object according to a condition. |
Dataset.isin (test_elements) |
Tests each value in the array for whether it is in the supplied list. |
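A short sketch of the missing-value methods above on a one-variable dataset (names illustrative):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"v": ("x", [1.0, np.nan, 3.0])}, coords={"x": [0, 1, 2]})

mask = ds["v"].notnull()  # True where data is present
filled = ds.fillna(0.0)   # replace missing values with a constant
dropped = ds.dropna("x")  # drop index labels with missing values
```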
Computation¶
Dataset.apply (func[, keep_attrs, args]) |
Apply a function over the data variables in this dataset. |
Dataset.reduce (func[, dim, keep_attrs, …]) |
Reduce this dataset by applying func along some dimension(s). |
Dataset.groupby (group[, squeeze]) |
Returns a GroupBy object for performing grouped operations. |
Dataset.groupby_bins (group, bins[, right, …]) |
Returns a GroupBy object for performing grouped operations. |
Dataset.rolling ([min_periods, center]) |
Rolling window object. |
Dataset.resample ([freq, dim, how, skipna, …]) |
Returns a Resample object for performing resampling operations. |
Dataset.diff (dim[, n, label]) |
Calculate the n-th order discrete difference along given axis. |
Dataset.quantile (q[, dim, interpolation, …]) |
Compute the qth quantile of the data along the specified dimension. |
Aggregation: all, any, argmax, argmin, max, mean, median, min, prod, sum, std, var
ndarray methods: astype, argsort, clip, conj, conjugate, imag, round, real, cumsum, cumprod, rank
Grouped operations: assign, assign_coords, first, last, fillna, where
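The computation methods above can be sketched on a toy dataset; the "label" coordinate used for grouping is made up for illustration:

```python
import xarray as xr

ds = xr.Dataset(
    {"v": ("x", [1.0, 2.0, 3.0, 4.0])},
    coords={"x": [0, 1, 2, 3], "label": ("x", ["a", "a", "b", "b"])},
)

total = ds.sum()                    # aggregate over all dimensions
means = ds.groupby("label").mean()  # grouped reduction per label
step = ds.diff("x")                 # first-order discrete difference
```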
Reshaping and reorganizing¶
Dataset.transpose (*dims) |
Return a new Dataset object with all array dimensions transposed. |
Dataset.stack (**dimensions) |
Stack any number of existing dimensions into a single new dimension. |
Dataset.unstack (dim) |
Unstack an existing dimension corresponding to a MultiIndex into multiple new dimensions. |
Dataset.shift (**shifts) |
Shift this dataset by an offset along one or more dimensions. |
Dataset.roll (**shifts) |
Roll this dataset by an offset along one or more dimensions. |
Dataset.sortby (variables[, ascending]) |
Sort object by labels or values (along an axis). |
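A minimal stack/unstack/transpose round-trip (dimension names illustrative):

```python
import xarray as xr

ds = xr.Dataset(
    {"v": (("x", "y"), [[1, 2], [3, 4]])},
    coords={"x": [0, 1], "y": ["a", "b"]},
)

stacked = ds.stack(z=("x", "y"))  # combine x and y into a MultiIndex dim
restored = stacked.unstack("z")   # invert the stacking
flipped = ds.transpose("y", "x")  # reorder dimensions
```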
DataArray¶
DataArray (data[, coords, dims, name, attrs, …]) |
N-dimensional array with labeled coordinates and dimensions. |
Attributes¶
DataArray.values |
The array’s data as a numpy.ndarray |
DataArray.data |
The array’s data as a dask or numpy array |
DataArray.coords |
Dictionary-like container of coordinate arrays. |
DataArray.dims |
Tuple of dimension names associated with this array. |
DataArray.sizes |
Ordered mapping from dimension names to lengths. |
DataArray.name |
The name of this array. |
DataArray.attrs |
Dictionary storing arbitrary metadata with this array. |
DataArray.encoding |
Dictionary of format-specific settings for how this array should be serialized. |
DataArray.indexes |
OrderedDict of pandas.Index objects used for label based indexing |
DataArray.get_index (key) |
Get an index for a dimension, with fall-back to a default RangeIndex |
DataArray contents¶
DataArray.assign_coords (**kwargs) |
Assign new coordinates to this object. |
DataArray.assign_attrs (*args, **kwargs) |
Assign new attrs to this object. |
DataArray.pipe (func, *args, **kwargs) |
Apply func(self, *args, **kwargs) |
DataArray.rename ([new_name_or_name_dict]) |
Returns a new DataArray with renamed coordinates or a new name. |
DataArray.swap_dims (dims_dict) |
Returns a new DataArray with swapped dimensions. |
DataArray.expand_dims (dim[, axis]) |
Return a new object with an additional axis (or axes) inserted at the corresponding position in the array shape. |
DataArray.drop (labels[, dim]) |
Drop coordinates or index labels from this DataArray. |
DataArray.reset_coords ([names, drop, inplace]) |
Given names of coordinates, reset them to become variables. |
DataArray.copy ([deep]) |
Returns a copy of this array. |
Indexing¶
DataArray.__getitem__ (key) |
Access this array's data by integer, label or boolean indexing. |
DataArray.__setitem__ (key, value) |
Assign values to this array by integer, label or boolean indexing. |
DataArray.loc |
Attribute for location based indexing like pandas. |
DataArray.isel ([indexers, drop]) |
Return a new DataArray whose dataset is given by integer indexing along the specified dimension(s). |
DataArray.sel ([indexers, method, tolerance, …]) |
Return a new DataArray whose dataset is given by selecting index labels along the specified dimension(s). |
DataArray.squeeze ([dim, drop, axis]) |
Return a new object with squeezed data. |
DataArray.reindex ([indexers, method, …]) |
Conform this object onto a new set of indexes, filling in missing values with NaN. |
DataArray.reindex_like (other[, method, …]) |
Conform this object onto the indexes of another object, filling in missing values with NaN. |
DataArray.set_index ([append, inplace]) |
Set DataArray (multi-)indexes using one or more existing coordinates. |
DataArray.reset_index (dims_or_levels[, …]) |
Reset the specified index(es) or multi-index level(s). |
DataArray.reorder_levels ([inplace]) |
Rearrange index levels using input order. |
Missing value handling¶
DataArray.isnull (*args, **kwargs) |
Test each value in the array for whether it is a missing value. |
DataArray.notnull (*args, **kwargs) |
Test each value in the array for whether it is not a missing value. |
DataArray.combine_first (other) |
Combine two DataArray objects, with union of coordinates. |
DataArray.count ([dim, axis, keep_attrs]) |
Reduce this DataArray’s data by applying count along some dimension(s). |
DataArray.dropna (dim[, how, thresh]) |
Returns a new array with dropped labels for missing values along the provided dimension. |
DataArray.fillna (value) |
Fill missing values in this object. |
DataArray.ffill (dim[, limit]) |
Fill NaN values by propagating values forward. |
DataArray.bfill (dim[, limit]) |
Fill NaN values by propagating values backward. |
DataArray.interpolate_na ([dim, method, …]) |
Interpolate values according to different methods. |
DataArray.where (cond[, other, drop]) |
Filter elements from this object according to a condition. |
DataArray.isin (test_elements) |
Tests each value in the array for whether it is in the supplied list. |
Comparisons¶
DataArray.equals (other) |
True if two DataArrays have the same dimensions, coordinates and values; otherwise False. |
DataArray.identical (other) |
Like equals, but also checks the array name and attributes, and attributes on all coordinates. |
DataArray.broadcast_equals (other) |
Two DataArrays are broadcast equal if they are equal after broadcasting them against each other such that they have the same dimensions. |
Computation¶
DataArray.reduce (func[, dim, axis, keep_attrs]) |
Reduce this array by applying func along some dimension(s). |
DataArray.groupby (group[, squeeze]) |
Returns a GroupBy object for performing grouped operations. |
DataArray.groupby_bins (group, bins[, right, …]) |
Returns a GroupBy object for performing grouped operations. |
DataArray.rolling ([min_periods, center]) |
Rolling window object. |
DataArray.dt |
Access datetime fields for DataArrays with datetime-like dtypes. |
DataArray.resample ([freq, dim, how, skipna, …]) |
Returns a Resample object for performing resampling operations. |
DataArray.get_axis_num (dim) |
Return axis number(s) corresponding to dimension(s) in this array. |
DataArray.diff (dim[, n, label]) |
Calculate the n-th order discrete difference along given axis. |
DataArray.dot (other[, dims]) |
Perform dot product of two DataArrays along their shared dims. |
DataArray.quantile (q[, dim, interpolation, …]) |
Compute the qth quantile of the data along the specified dimension. |
Aggregation: all, any, argmax, argmin, max, mean, median, min, prod, sum, std, var
ndarray methods: argsort, clip, conj, conjugate, imag, searchsorted, round, real, T, cumsum, cumprod, rank
Grouped operations: assign_coords, first, last, fillna, where
Reshaping and reorganizing¶
DataArray.transpose (*dims) |
Return a new DataArray object with transposed dimensions. |
DataArray.stack (**dimensions) |
Stack any number of existing dimensions into a single new dimension. |
DataArray.unstack (dim) |
Unstack an existing dimension corresponding to a MultiIndex into multiple new dimensions. |
DataArray.shift (**shifts) |
Shift this array by an offset along one or more dimensions. |
DataArray.roll (**shifts) |
Roll this array by an offset along one or more dimensions. |
DataArray.sortby (variables[, ascending]) |
Sort object by labels or values (along an axis). |
Universal functions¶
Warning
With recent versions of NumPy, dask and xarray, NumPy ufuncs are now supported directly on all xarray and dask objects. This obviates the need for the xarray.ufuncs module, which should not be used for new code unless compatibility with versions of NumPy prior to v1.13 is required.
These functions are copied from NumPy, but extended to work on NumPy arrays, dask arrays and all xarray objects. You can find them in the xarray.ufuncs module:
angle
arccos
arccosh
arcsin
arcsinh
arctan
arctan2
arctanh
ceil
conj
copysign
cos
cosh
deg2rad
degrees
exp
expm1
fabs
fix
floor
fmax
fmin
fmod
frexp
hypot
imag
iscomplex
isfinite
isinf
isnan
isreal
ldexp
log
log10
log1p
log2
logaddexp
logaddexp2
logical_and
logical_not
logical_or
logical_xor
maximum
minimum
nextafter
rad2deg
radians
real
rint
sign
signbit
sin
sinh
sqrt
square
tan
tanh
trunc
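Per the warning above, with NumPy >= 1.13 the plain NumPy ufuncs can be applied directly to xarray objects, so xarray.ufuncs is rarely needed. A small sketch:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([0.0, np.pi / 2], dims="x")

# The ufunc dispatches through xarray and returns a DataArray
# with dimensions and coordinates preserved.
result = np.sin(da)
```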
IO / Conversion¶
Dataset methods¶
open_dataset (filename_or_obj[, group, …]) |
Load and decode a dataset from a file or file-like object. |
open_mfdataset (paths[, chunks, concat_dim, …]) |
Open multiple files as a single dataset. |
open_rasterio (filename[, parse_coordinates, …]) |
Open a file with rasterio (experimental). |
open_zarr (store[, group, synchronizer, …]) |
Load and decode a dataset from a Zarr store. |
Dataset.to_netcdf ([path, mode, format, …]) |
Write dataset contents to a netCDF file. |
Dataset.to_zarr ([store, mode, synchronizer, …]) |
Write dataset contents to a zarr group. |
save_mfdataset (datasets, paths[, mode, …]) |
Write multiple datasets to disk as netCDF files simultaneously. |
Dataset.to_array ([dim, name]) |
Convert this dataset into an xarray.DataArray |
Dataset.to_dataframe () |
Convert this dataset into a pandas.DataFrame. |
Dataset.to_dask_dataframe ([dim_order, set_index]) |
Convert this dataset into a dask.dataframe.DataFrame. |
Dataset.to_dict () |
Convert this dataset to a dictionary following xarray naming conventions. |
Dataset.from_dataframe (dataframe) |
Convert a pandas.DataFrame into an xarray.Dataset |
Dataset.from_dict (d) |
Convert a dictionary into an xarray.Dataset. |
Dataset.close () |
Close any files linked to this object |
Dataset.compute (**kwargs) |
Manually trigger loading of this dataset’s data from disk or a remote source into memory and return a new dataset. |
Dataset.persist (**kwargs) |
Trigger computation, keeping data as dask arrays |
Dataset.load (**kwargs) |
Manually trigger loading of this dataset’s data from disk or a remote source into memory and return this dataset. |
Dataset.chunk ([chunks, name_prefix, token, lock]) |
Coerce all arrays in this dataset into dask arrays with the given chunks. |
Dataset.filter_by_attrs (**kwargs) |
Returns a Dataset with variables that match specific conditions. |
Dataset.info ([buf]) |
Concise summary of a Dataset's variables and attributes. |
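Among the conversion methods above, the dictionary and DataFrame forms need no external file backend; a small round-trip sketch (names illustrative):

```python
import xarray as xr

ds = xr.Dataset({"v": ("x", [1.0, 2.0, 3.0])}, coords={"x": [0, 1, 2]})

df = ds.to_dataframe()              # pandas.DataFrame indexed by x
d = ds.to_dict()                    # nested-dict form of the dataset
restored = xr.Dataset.from_dict(d)  # round-trip back to a Dataset
```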
DataArray methods¶
open_dataarray (filename_or_obj[, group, …]) |
Open a DataArray from a netCDF file containing a single data variable. |
DataArray.to_dataset ([dim, name]) |
Convert a DataArray to a Dataset. |
DataArray.to_netcdf (*args, **kwargs) |
Write DataArray contents to a netCDF file. |
DataArray.to_pandas () |
Convert this array into a pandas object with the same shape. |
DataArray.to_series () |
Convert this array into a pandas.Series. |
DataArray.to_dataframe ([name]) |
Convert this array and its coordinates into a tidy pandas.DataFrame. |
DataArray.to_index () |
Convert this variable to a pandas.Index. |
DataArray.to_masked_array ([copy]) |
Convert this array into a numpy.ma.MaskedArray |
DataArray.to_cdms2 () |
Convert this array into a cdms2.Variable |
DataArray.to_iris () |
Convert this array into an iris.cube.Cube |
DataArray.from_iris (cube) |
Convert an iris.cube.Cube into an xarray.DataArray |
DataArray.to_dict () |
Convert this xarray.DataArray into a dictionary following xarray naming conventions. |
DataArray.from_series (series) |
Convert a pandas.Series into an xarray.DataArray. |
DataArray.from_cdms2 (variable) |
Convert a cdms2.Variable into an xarray.DataArray |
DataArray.from_dict (d) |
Convert a dictionary into an xarray.DataArray |
DataArray.close () |
Close any files linked to this object |
DataArray.compute (**kwargs) |
Manually trigger loading of this array’s data from disk or a remote source into memory and return a new array. |
DataArray.persist (**kwargs) |
Trigger computation in constituent dask arrays |
DataArray.load (**kwargs) |
Manually trigger loading of this array’s data from disk or a remote source into memory and return this array. |
DataArray.chunk ([chunks, name_prefix, …]) |
Coerce this array's data into a dask array with the given chunks. |
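A sketch of converting a DataArray to and from a pandas.Series (names and labels illustrative):

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x", coords={"x": [10, 20, 30]}, name="v")

s = da.to_series()                  # pandas.Series with index named "x"
back = xr.DataArray.from_series(s)  # round-trip back to a DataArray
```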
GroupBy objects¶
core.groupby.DataArrayGroupBy (obj, group[, …]) |
GroupBy object specialized to grouping DataArray objects |
core.groupby.DataArrayGroupBy.apply (func[, …]) |
Apply a function over each array in the group and concatenate them together into a new array. |
core.groupby.DataArrayGroupBy.reduce (func[, …]) |
Reduce the items in this group by applying func along some dimension(s). |
core.groupby.DatasetGroupBy (obj, group[, …]) |
GroupBy object specialized to grouping Dataset objects |
core.groupby.DatasetGroupBy.apply (func, **kwargs) |
Apply a function over each Dataset in the group and concatenate them together into a new Dataset. |
core.groupby.DatasetGroupBy.reduce (func[, …]) |
Reduce the items in this group by applying func along some dimension(s). |
Rolling objects¶
core.rolling.DataArrayRolling (obj[, …]) |
Rolling window object for DataArray objects |
core.rolling.DataArrayRolling.construct (…) |
Convert this rolling object to xr.DataArray, where the window dimension is stacked as a new dimension |
core.rolling.DataArrayRolling.reduce (func, …) |
Reduce the items in this group by applying func along some dimension(s). |
core.rolling.DatasetRolling (obj[, …]) |
Rolling window object for Dataset objects |
core.rolling.DatasetRolling.construct (window_dim) |
Convert this rolling object to xr.Dataset, where the window dimension is stacked as a new dimension |
core.rolling.DatasetRolling.reduce (func, …) |
Reduce the items in this group by applying func along some dimension(s). |
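Rolling objects are normally created via the rolling() method shown earlier; a small sketch of a window reduction and of construct():

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1.0, 2.0, 3.0, 4.0], dims="time")

# Window of 3 along "time"; the leading entries are NaN because
# min_periods defaults to the window size.
rolled = da.rolling(time=3).mean()

# construct exposes the windows explicitly as a new dimension
windows = da.rolling(time=3).construct("window")
```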
Resample objects¶
Resample objects also implement the GroupBy interface (methods like apply(), reduce(), mean(), sum(), etc.).
core.resample.DataArrayResample (*args, **kwargs) |
DataArrayGroupBy object specialized to time resampling operations over a specified dimension |
core.resample.DataArrayResample.asfreq () |
Return values of original object at the new up-sampling frequency; essentially a re-index with new times set to NaN. |
core.resample.DataArrayResample.backfill () |
Backward fill new values at up-sampled frequency. |
core.resample.DataArrayResample.interpolate ([kind]) |
Interpolate up-sampled data using the original data as knots. |
core.resample.DataArrayResample.nearest () |
Take new values from nearest original coordinate to up-sampled frequency coordinates. |
core.resample.DataArrayResample.pad () |
Forward fill new values at up-sampled frequency. |
core.resample.DatasetResample (*args, **kwargs) |
DatasetGroupBy object specialized to resampling a specified dimension |
core.resample.DatasetResample.asfreq () |
Return values of original object at the new up-sampling frequency; essentially a re-index with new times set to NaN. |
core.resample.DatasetResample.backfill () |
Backward fill new values at up-sampled frequency. |
core.resample.DatasetResample.interpolate ([kind]) |
Interpolate up-sampled data using the original data as knots. |
core.resample.DatasetResample.nearest () |
Take new values from nearest original coordinate to up-sampled frequency coordinates. |
core.resample.DatasetResample.pad () |
Forward fill new values at up-sampled frequency. |
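Resample objects are created via the resample() method; a small downsampling sketch with made-up dates:

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2000-01-01", periods=6, freq="D")
da = xr.DataArray(np.arange(6.0), dims="time", coords={"time": times})

# Downsample to 3-day bins and take the mean of each bin
coarse = da.resample(time="3D").mean()
```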
Custom Indexes¶
CFTimeIndex |
Custom Index for working with CF calendars and dates |
Plotting¶
DataArray.plot |
Access plotting functions |
plot.plot (darray[, row, col, col_wrap, ax, …]) |
Default plot of DataArray using matplotlib.pyplot. |
plot.contourf (x, y, z, ax, **kwargs) |
Filled contour plot of 2d DataArray |
plot.contour (x, y, z, ax, **kwargs) |
Contour plot of 2d DataArray |
plot.hist (darray[, figsize, size, aspect, ax]) |
Histogram of DataArray |
plot.imshow (x, y, z, ax, **kwargs) |
Image plot of 2d DataArray using matplotlib.pyplot |
plot.line (darray, *args, **kwargs) |
Line plot of DataArray index against values |
plot.pcolormesh (x, y, z, ax[, infer_intervals]) |
Pseudocolor plot of 2d DataArray |
plot.FacetGrid (data[, col, row, col_wrap, …]) |
Initialize the matplotlib figure and FacetGrid object. |
Testing¶
testing.assert_equal (a, b) |
Like numpy.testing.assert_array_equal(), but for xarray objects. |
testing.assert_identical (a, b) |
Like xarray.testing.assert_equal(), but also matches the objects' names and attributes. |
testing.assert_allclose (a, b[, rtol, atol, …]) |
Like numpy.testing.assert_allclose(), but for xarray objects. |
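The distinction between the assertion helpers can be sketched with two arrays that differ by a tiny floating-point amount:

```python
import xarray as xr

a = xr.DataArray([1.0, 2.0], dims="x")
b = xr.DataArray([1.0, 2.0 + 1e-10], dims="x")

# assert_allclose tolerates small floating-point differences;
# assert_equal requires exact equality and raises otherwise.
xr.testing.assert_allclose(a, b)
try:
    xr.testing.assert_equal(a, b)
    exact = True
except AssertionError:
    exact = False
```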
Exceptions¶
MergeError |
Error class for merge failures due to incompatible arguments. |
SerializationWarning |
Warnings about encoding/decoding issues in serialization. |
Advanced API¶
Dataset.variables |
Low level interface to Dataset contents as dict of Variable objects. |
DataArray.variable |
Low level interface to the Variable object for this DataArray. |
Variable (dims, data[, attrs, encoding, fastpath]) |
A netcdf-like variable consisting of dimensions, data and attributes which describe a single Array. |
IndexVariable (dims, data[, attrs, encoding, …]) |
Wrapper for accommodating a pandas.Index in an xarray.Variable. |
as_variable (obj[, name]) |
Convert an object into a Variable. |
register_dataset_accessor (name) |
Register a custom property on xarray.Dataset objects. |
register_dataarray_accessor (name) |
Register a custom accessor on xarray.DataArray objects. |
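A sketch of the accessor-registration decorators; the accessor name "geo" and the coordinate names are made up for illustration:

```python
import xarray as xr

@xr.register_dataset_accessor("geo")
class GeoAccessor:
    """Hypothetical accessor exposing geographic helpers as ds.geo."""

    def __init__(self, ds):
        self._ds = ds

    @property
    def center(self):
        # Mean (lon, lat) of the dataset's coordinates
        return float(self._ds["lon"].mean()), float(self._ds["lat"].mean())

ds = xr.Dataset(coords={"lon": [0.0, 10.0], "lat": [40.0, 50.0]})
center = ds.geo.center
```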
These backends provide a low-level interface for lazily loading data from external file-formats or protocols, and can be manually invoked to create arguments for the from_store and dump_to_store Dataset methods:
backends.NetCDF4DataStore (netcdf4_dataset[, …]) |
Store for reading and writing data via the Python-NetCDF4 library. |
backends.H5NetCDFStore (filename[, mode, …]) |
Store for reading and writing data via h5netcdf |
backends.PydapDataStore (ds) |
Store for accessing OpenDAP datasets with pydap. |
backends.ScipyDataStore (filename_or_obj[, …]) |
Store for reading and writing data via scipy.io.netcdf. |