# Combining data¶

• For combining datasets or data arrays along a dimension, see concatenate.
• For combining datasets with different variables, see merge.
• For combining datasets or data arrays with different indexes or missing values, see combine.

## Concatenate¶

To combine arrays along an existing or new dimension into a larger array, you can use concat(). concat takes an iterable of DataArray or Dataset objects, as well as a dimension name, and concatenates along that dimension:

In [1]: arr = xr.DataArray(np.random.randn(2, 3),
   ...:                    [('x', ['a', 'b']), ('y', [10, 20, 30])])
   ...:

In [2]: arr[:, :1]
Out[2]:
<xarray.DataArray (x: 2, y: 1)>
array([[ 0.469112],
       [-1.135632]])
Coordinates:
  * x        (x) <U1 'a' 'b'
  * y        (y) int64 10

# this resembles how you would use np.concatenate
In [3]: xr.concat([arr[:, :1], arr[:, 1:]], dim='y')
Out[3]:
<xarray.DataArray (x: 2, y: 3)>
array([[ 0.469112, -0.282863, -1.509059],
       [-1.135632,  1.212112, -0.173215]])
Coordinates:
  * x        (x) <U1 'a' 'b'
  * y        (y) int64 10 20 30


In addition to combining along an existing dimension, concat can create a new dimension by stacking lower dimensional arrays together:

In [4]: arr[0]
Out[4]:
<xarray.DataArray (y: 3)>
array([ 0.469112, -0.282863, -1.509059])
Coordinates:
    x        <U1 'a'
  * y        (y) int64 10 20 30

# to combine these 1d arrays into a 2d array in numpy, you would use np.array
In [5]: xr.concat([arr[0], arr[1]], 'x')
Out[5]:
<xarray.DataArray (x: 2, y: 3)>
array([[ 0.469112, -0.282863, -1.509059],
       [-1.135632,  1.212112, -0.173215]])
Coordinates:
  * y        (y) int64 10 20 30
  * x        (x) <U1 'a' 'b'


If the second argument to concat is a new dimension name, the arrays will be concatenated along that new dimension, which is always inserted as the first dimension:

In [6]: xr.concat([arr[0], arr[1]], 'new_dim')
Out[6]:
<xarray.DataArray (new_dim: 2, y: 3)>
array([[ 0.469112, -0.282863, -1.509059],
       [-1.135632,  1.212112, -0.173215]])
Coordinates:
  * y        (y) int64 10 20 30
    x        (new_dim) <U1 'a' 'b'
Dimensions without coordinates: new_dim


The second argument to concat can also be an Index or DataArray object as well as a string, in which case it is used to label the values along the new dimension:

In [7]: xr.concat([arr[0], arr[1]], pd.Index([-90, -100], name='new_dim'))
Out[7]:
<xarray.DataArray (new_dim: 2, y: 3)>
array([[ 0.469112, -0.282863, -1.509059],
       [-1.135632,  1.212112, -0.173215]])
Coordinates:
  * y        (y) int64 10 20 30
    x        (new_dim) <U1 'a' 'b'
  * new_dim  (new_dim) int64 -90 -100


Of course, concat also works on Dataset objects:

In [8]: ds = arr.to_dataset(name='foo')

In [9]: xr.concat([ds.sel(x='a'), ds.sel(x='b')], 'x')
Out[9]:
<xarray.Dataset>
Dimensions:  (x: 2, y: 3)
Coordinates:
  * y        (y) int64 10 20 30
  * x        (x) <U1 'a' 'b'
Data variables:
    foo      (x, y) float64 0.4691 -0.2829 -1.509 -1.136 1.212 -0.1732


concat() has a number of options which provide deeper control over which variables are concatenated and how it handles conflicting variables between datasets. With the default parameters, xarray will load some coordinate variables into memory to compare them between datasets. This may be prohibitively expensive if you are manipulating your dataset lazily using Parallel computing with Dask.
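As a sketch of these options, the example below reassembles a dataset from slices using the `coords` and `compat` parameters of `xr.concat`; `coords='minimal'` concatenates only coordinates that actually vary along the concatenation dimension, while `compat='identical'` requires the remaining variables to match exactly (exact defaults and accepted values vary between xarray versions):

```python
import numpy as np
import xarray as xr

arr = xr.DataArray(np.random.randn(2, 3),
                   [('x', ['a', 'b']), ('y', [10, 20, 30])])
ds = arr.to_dataset(name='foo')

# coords='minimal' concatenates only coordinates that vary along 'y';
# the 'x' coordinate is identical in both pieces and is kept once.
# compat='identical' additionally checks names and attributes.
combined = xr.concat([ds.isel(y=slice(0, 2)), ds.isel(y=slice(2, None))],
                     dim='y', coords='minimal', compat='identical')
print(combined.equals(ds))  # True: the slices reassemble the original
```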

## Merge¶

To combine variables and coordinates between multiple DataArray and/or Dataset objects, use merge(). It can merge a list of Dataset, DataArray or dictionaries of objects convertible to DataArray objects:

In [10]: xr.merge([ds, ds.rename({'foo': 'bar'})])
Out[10]:
<xarray.Dataset>
Dimensions:  (x: 2, y: 3)
Coordinates:
  * x        (x) <U1 'a' 'b'
  * y        (y) int64 10 20 30
Data variables:
    foo      (x, y) float64 0.4691 -0.2829 -1.509 -1.136 1.212 -0.1732
    bar      (x, y) float64 0.4691 -0.2829 -1.509 -1.136 1.212 -0.1732

In [11]: xr.merge([xr.DataArray(n, name='var%d' % n) for n in range(5)])
Out[11]:
<xarray.Dataset>
Dimensions:  ()
Data variables:
    var0     int64 0
    var1     int64 1
    var2     int64 2
    var3     int64 3
    var4     int64 4


If you merge another dataset (or a dictionary including data array objects), by default the resulting dataset will be aligned on the union of all index coordinates:

In [12]: other = xr.Dataset({'bar': ('x', [1, 2, 3, 4]), 'x': list('abcd')})

In [13]: xr.merge([ds, other])
Out[13]:
<xarray.Dataset>
Dimensions:  (x: 4, y: 3)
Coordinates:
  * x        (x) object 'a' 'b' 'c' 'd'
  * y        (y) int64 10 20 30
Data variables:
    foo      (x, y) float64 0.4691 -0.2829 -1.509 -1.136 ... nan nan nan nan
    bar      (x) int64 1 2 3 4


This ensures that merge is non-destructive. xarray.MergeError is raised if you attempt to merge two variables with the same name but different values:

In [14]: xr.merge([ds, ds + 1])
MergeError: conflicting values for variable 'foo' on objects to be combined:
first value: <xarray.Variable (x: 2, y: 3)>
array([[ 0.4691123 , -0.28286334, -1.5090585 ],
       [-1.13563237,  1.21211203, -0.17321465]])
second value: <xarray.Variable (x: 2, y: 3)>
array([[ 1.4691123 ,  0.71713666, -0.5090585 ],
       [-0.13563237,  2.21211203,  0.82678535]])


The same non-destructive merging between DataArray index coordinates is used in the Dataset constructor:

In [15]: xr.Dataset({'a': arr[:-1], 'b': arr[1:]})
Out[15]:
<xarray.Dataset>
Dimensions:  (x: 2, y: 3)
Coordinates:
  * x        (x) object 'a' 'b'
  * y        (y) int64 10 20 30
Data variables:
    a        (x, y) float64 0.4691 -0.2829 -1.509 nan nan nan
    b        (x, y) float64 nan nan nan -1.136 1.212 -0.1732


## Combine¶

The instance method combine_first() combines two datasets or data arrays, preferring non-null values in the calling object and filling its holes with values from the object passed in. The resulting coordinates are the union of the two sets of coordinate labels, and cells left vacant by the outer join are filled with NaN. For example:

In [16]: ar0 = xr.DataArray([[0, 0], [0, 0]], [('x', ['a', 'b']), ('y', [-1, 0])])

In [17]: ar1 = xr.DataArray([[1, 1], [1, 1]], [('x', ['b', 'c']), ('y', [0, 1])])

In [18]: ar0.combine_first(ar1)
Out[18]:
<xarray.DataArray (x: 3, y: 3)>
array([[ 0.,  0., nan],
       [ 0.,  0.,  1.],
       [nan,  1.,  1.]])
Coordinates:
  * x        (x) object 'a' 'b' 'c'
  * y        (y) int64 -1 0 1

In [19]: ar1.combine_first(ar0)
Out[19]:
<xarray.DataArray (x: 3, y: 3)>
array([[ 0.,  0., nan],
       [ 0.,  1.,  1.],
       [nan,  1.,  1.]])
Coordinates:
  * x        (x) object 'a' 'b' 'c'
  * y        (y) int64 -1 0 1


For datasets, ds0.combine_first(ds1) works similarly to xr.merge([ds0, ds1]), except that xr.merge raises MergeError when there are conflicting values in variables to be merged, whereas .combine_first defaults to the calling object’s values.
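A small sketch of that difference (the datasets here are illustrative):

```python
import numpy as np
import xarray as xr

ds0 = xr.Dataset({'a': ('x', [1.0, 2.0, np.nan])}, {'x': [0, 1, 2]})
ds1 = xr.Dataset({'a': ('x', [10.0, 20.0, 30.0])}, {'x': [1, 2, 3]})

# xr.merge([ds0, ds1]) would raise MergeError: 'a' disagrees at x=1
# (2.0 vs 10.0).  combine_first keeps the calling object's values and
# only fills its missing entries from the argument:
result = ds0.combine_first(ds1)
print(result['a'].values)  # [ 1.  2. 20. 30.]
```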

## Update¶

In contrast to merge, update() modifies a dataset in-place without checking for conflicts, and will overwrite any existing variables with new values:

In [20]: ds.update({'space': ('space', [10.2, 9.4, 3.9])})
Out[20]:
<xarray.Dataset>
Dimensions:  (space: 3, x: 2, y: 3)
Coordinates:
  * x        (x) <U1 'a' 'b'
  * y        (y) int64 10 20 30
  * space    (space) float64 10.2 9.4 3.9
Data variables:
    foo      (x, y) float64 0.4691 -0.2829 -1.509 -1.136 1.212 -0.1732


However, dimensions are still required to be consistent between different Dataset variables, so you cannot change the size of a dimension unless you replace all dataset variables that use it.
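A sketch of both cases (the variable names are illustrative, and the exact error message varies between versions):

```python
import xarray as xr

ds = xr.Dataset({'foo': ('x', [1, 2]), 'bar': ('x', [3, 4])})

# 'foo' and 'bar' share the dimension 'x', so resizing only one of them
# would leave the dataset inconsistent, and update raises an error:
try:
    ds.update({'foo': ('x', [1, 2, 3])})
except ValueError as err:
    print('update failed:', err)

# Replacing every variable that uses 'x' in a single update works:
ds.update({'foo': ('x', [1, 2, 3]), 'bar': ('x', [4, 5, 6])})
print(ds.sizes['x'])  # 3
```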

update also performs automatic alignment if necessary. Unlike merge, it maintains the alignment of the original array instead of merging indexes:

In [21]: ds.update(other)
Out[21]:
<xarray.Dataset>
Dimensions:  (space: 3, x: 2, y: 3)
Coordinates:
  * x        (x) object 'a' 'b'
  * y        (y) int64 10 20 30
  * space    (space) float64 10.2 9.4 3.9
Data variables:
    foo      (x, y) float64 0.4691 -0.2829 -1.509 -1.136 1.212 -0.1732
    bar      (x) int64 1 2


The exact same alignment logic is used when setting a variable with __setitem__ syntax:

In [22]: ds['baz'] = xr.DataArray([9, 9, 9, 9, 9], coords=[('x', list('abcde'))])

In [23]: ds.baz
Out[23]:
<xarray.DataArray 'baz' (x: 2)>
array([9, 9])
Coordinates:
  * x        (x) object 'a' 'b'


## Equals and identical¶

xarray objects can be compared by using the equals(), identical() and broadcast_equals() methods. These methods are used by the optional compat argument on concat and merge.

equals checks dimension names, indexes and array values:

In [24]: arr.equals(arr.copy())
Out[24]: True


identical also checks attributes, and the name of each object:

In [25]: arr.identical(arr.rename('bar'))
Out[25]: False


broadcast_equals does a more relaxed form of equality check that allows variables to have different dimensions, as long as values are constant along those new dimensions:

In [26]: left = xr.Dataset(coords={'x': 0})

In [27]: right = xr.Dataset({'x': [0, 0, 0]})

In [28]: left.broadcast_equals(right)
Out[28]: True


Like pandas objects, two xarray objects are still equal or identical if they have missing values marked by NaN in the same locations.
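A minimal illustration of this NaN handling:

```python
import numpy as np
import xarray as xr

a = xr.DataArray([1.0, np.nan, 3.0], dims='x')
b = xr.DataArray([1.0, np.nan, 3.0], dims='x')

print(a.equals(b))  # True: NaNs in matching locations count as equal
```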

In contrast, the == operation performs element-wise comparison (like numpy):

In [29]: arr == arr.copy()
Out[29]:
<xarray.DataArray (x: 2, y: 3)>
array([[ True,  True,  True],
       [ True,  True,  True]])
Coordinates:
  * x        (x) <U1 'a' 'b'
  * y        (y) int64 10 20 30


Note that NaN does not compare equal to NaN in element-wise comparison; you may need to deal with missing values explicitly.
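One common workaround (a sketch) is to treat positions where both objects are missing as equal:

```python
import numpy as np
import xarray as xr

a = xr.DataArray([1.0, np.nan, 3.0], dims='x')
b = xr.DataArray([1.0, np.nan, 4.0], dims='x')

# NaN != NaN under ==, so also accept positions where both sides are null:
same = (a == b) | (a.isnull() & b.isnull())
print(same.values)  # [ True  True False]
```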

## Merging with ‘no_conflicts’¶

The compat argument 'no_conflicts' is only available when combining xarray objects with merge. In addition to the comparison methods above, it allows merging xarray objects that have NaN values in some locations. This can be used to combine data with overlapping coordinates, as long as any non-missing values agree or are disjoint:

In [30]: ds1 = xr.Dataset({'a': ('x', [10, 20, 30, np.nan])}, {'x': [1, 2, 3, 4]})

In [31]: ds2 = xr.Dataset({'a': ('x', [np.nan, 30, 40, 50])}, {'x': [2, 3, 4, 5]})

In [32]: xr.merge([ds1, ds2], compat='no_conflicts')
Out[32]:
<xarray.Dataset>
Dimensions:  (x: 5)
Coordinates:
  * x        (x) int64 1 2 3 4 5
Data variables:
    a        (x) float64 10.0 20.0 30.0 40.0 50.0


Note that due to the underlying representation of missing values as floating point numbers (NaN), variable data type is not always preserved when merging in this manner.
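For example, two integer variables that agree on their overlap can come back as floats after such a merge (a sketch):

```python
import numpy as np
import xarray as xr

ds1 = xr.Dataset({'a': ('x', [1, 2])}, {'x': [1, 2]})
ds2 = xr.Dataset({'a': ('x', [2, 3])}, {'x': [2, 3]})

# The integer variables agree at the overlapping point x=2, but outer
# alignment inserts NaN before combining, which upcasts them to float:
merged = xr.merge([ds1, ds2], compat='no_conflicts')
print(merged['a'].dtype)   # float64
print(merged['a'].values)  # [1. 2. 3.]
```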