Merge, join, and concatenate

Original: http://pandas.pydata.org/pandas-docs/stable/merging.html

Translator: 飞龙 UsyiyiCN

Proofreader: (vacant)

pandas provides various facilities for easily combining Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.

Concatenating objects

The concat function (in the main pandas namespace) does all of the heavy lifting of performing concatenation operations along an axis while performing optional set logic (union or intersection) of the indexes (if any) on the other axes. Note that I say "if any" because there is only a single possible axis of concatenation for Series.

Before diving into all of the details of concat and what it can do, here is a simple example:

In [1]: df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
   ...:                     'B': ['B0', 'B1', 'B2', 'B3'],
   ...:                     'C': ['C0', 'C1', 'C2', 'C3'],
   ...:                     'D': ['D0', 'D1', 'D2', 'D3']},
   ...:                     index=[0, 1, 2, 3])
   ...: 

In [2]: df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
   ...:                     'B': ['B4', 'B5', 'B6', 'B7'],
   ...:                     'C': ['C4', 'C5', 'C6', 'C7'],
   ...:                     'D': ['D4', 'D5', 'D6', 'D7']},
   ...:                      index=[4, 5, 6, 7])
   ...: 

In [3]: df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
   ...:                     'B': ['B8', 'B9', 'B10', 'B11'],
   ...:                     'C': ['C8', 'C9', 'C10', 'C11'],
   ...:                     'D': ['D8', 'D9', 'D10', 'D11']},
   ...:                     index=[8, 9, 10, 11])
   ...: 

In [4]: frames = [df1, df2, df3]

In [5]: result = pd.concat(frames)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_basic.png

Like its sibling function on ndarrays, numpy.concatenate, pandas.concat takes a list or dict of homogeneously-typed objects and concatenates them with some configurable handling of "what to do with the other axes":

pd.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
          keys=None, levels=None, names=None, verify_integrity=False,
          copy=True)
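
As a brief aside not in the original walkthrough, verify_integrity is one argument that this section does not otherwise demonstrate. A minimal sketch, with illustrative frames a and b that share the index label 0:

a = pd.DataFrame({'A': [1, 2]}, index=[0, 1])
b = pd.DataFrame({'A': [3, 4]}, index=[0, 2])

pd.concat([a, b])                          # silently keeps the duplicate label 0
pd.concat([a, b], verify_integrity=True)   # raises ValueError about the overlapping index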

Without a little bit of context and examples, many of these arguments don't make much sense. Let's take the example from above. Suppose we wanted to associate specific keys with each of the pieces of the chopped-up DataFrame. We can do this using the keys argument:

In [6]: result = pd.concat(frames, keys=['x', 'y', 'z'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_keys.png

As you can see (if you've read the rest of the documentation), the resulting object's index has a hierarchical index. This means that we can now select out each chunk by key:

In [7]: result.ix['y']
Out[7]: 
    A   B   C   D
4  A4  B4  C4  D4
5  A5  B5  C5  D5
6  A6  B6  C6  D6
7  A7  B7  C7  D7

It's not a stretch to see how this can be very useful. More detail on this functionality below.
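
As a side note not in the original text, the same block can also be selected with the label-based .loc accessor, which works here as well and is the recommended spelling in later pandas versions where .ix is deprecated:

result.loc['y']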

Note

It is worth noting, however, that concat (and therefore append) makes a full copy of the data, and that constantly reusing this function can create a significant performance hit. If you need to use the operation over several datasets, use a list comprehension.

frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
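
For contrast, here is a minimal sketch of the repeated-append pattern this note warns against, next to the single-concat approach; files and process_your_file are placeholders standing in for your own inputs:

# Anti-pattern: each append copies every row accumulated so far, so the total
# work grows roughly quadratically with the number of pieces.
result = pd.DataFrame()
for f in files:
    result = result.append(process_your_file(f))

# Preferred: build the list of pieces first, then concatenate once.
frames = [process_your_file(f) for f in files]
result = pd.concat(frames)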

Set logic on the other axes

When gluing together multiple DataFrames (or Panels or...), you have a choice of how to handle the other axes (other than the one being concatenated). This can be done in the following three ways:

- take the (sorted) union of them all, join='outer'. This is the default option, as it results in zero information loss.
- take the intersection, join='inner'.
- use a specific index (in the case of DataFrame) or indexes (in the case of Panel or future higher-dimensional objects), i.e. the join_axes argument.

Here is an example of each of these methods. First, the default join='outer' behavior:

In [8]: df4 = pd.DataFrame({'B': ['B2', 'B3', 'B6', 'B7'],
   ...:                  'D': ['D2', 'D3', 'D6', 'D7'],
   ...:                  'F': ['F2', 'F3', 'F6', 'F7']},
   ...:                 index=[2, 3, 6, 7])
   ...: 

In [9]: result = pd.concat([df1, df4], axis=1)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_axis1.png

Notice that the row indexes have been unioned and sorted. Here is the same thing with join='inner':

In [10]: result = pd.concat([df1, df4], axis=1, join='inner')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_axis1_inner.png

Lastly, suppose we just wanted to reuse the exact index from the original DataFrame:

In [11]: result = pd.concat([df1, df4], axis=1, join_axes=[df1.index])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_axis1_join_axes.png

Concatenating using append

A useful shortcut to concat are the append instance methods on Series and DataFrame. These methods actually predated concat. They concatenate along axis=0, namely the index:

In [12]: result = df1.append(df2)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_append1.png

In the case of DataFrame, the indexes must be disjoint but the columns do not need to be:

In [13]: result = df1.append(df4)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_append2.png

append may take multiple objects to concatenate:

In [14]: result = df1.append([df2, df3])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_append3.png

Note

Unlike the list.append method, which appends to the original list and returns nothing, append here does not modify df1 and returns its copy with df2 appended.
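
A minimal sketch of this point, reusing the df1 and df2 defined at the start of this document:

appended = df1.append(df2)   # a new, 8-row object
len(df1)                     # still 4; df1 itself is unchanged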

Ignoring indexes on the concatenation axis

For DataFrames which don't have a meaningful index, you may wish to append them and ignore the fact that they may have overlapping indexes.

To do this, use the ignore_index argument:

In [15]: result = pd.concat([df1, df4], ignore_index=True)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_ignore_index.png

This is also a valid argument to DataFrame.append:

In [16]: result = df1.append(df4, ignore_index=True)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_append_ignore_index.png

Concatenating with mixed ndims

You can concatenate a mix of Series and DataFrames. The Series will be transformed to DataFrames with the column name set to the name of the Series.

In [17]: s1 = pd.Series(['X0', 'X1', 'X2', 'X3'], name='X')

In [18]: result = pd.concat([df1, s1], axis=1)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_mixed_ndim.png

If unnamed Series are passed, they will be numbered consecutively.

In [19]: s2 = pd.Series(['_0', '_1', '_2', '_3'])

In [20]: result = pd.concat([df1, s2, s2, s2], axis=1)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_unnamed_series.png

Passing ignore_index=True will drop all name references.

In [21]: result = pd.concat([df1, s1], axis=1, ignore_index=True)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_series_ignore_index.png

More concatenating with group keys

A fairly common use of the keys argument is to override the column names when creating a new DataFrame based on existing Series. Notice how the default behaviour lets the resulting DataFrame inherit the parent Series' names, when they exist.

In [22]: s3 = pd.Series([0, 1, 2, 3], name='foo')

In [23]: s4 = pd.Series([0, 1, 2, 3])

In [24]: s5 = pd.Series([0, 1, 4, 5])

In [25]: pd.concat([s3, s4, s5], axis=1)
Out[25]: 
   foo  0  1
0    0  0  0
1    1  1  1
2    2  2  4
3    3  3  5

Through the keys argument we can override the existing column names.

In [26]: pd.concat([s3, s4, s5], axis=1, keys=['red','blue','yellow'])
Out[26]: 
   red  blue  yellow
0    0     0       0
1    1     1       1
2    2     2       4
3    3     3       5

Let's consider now a variation on the very first example presented:

In [27]: result = pd.concat(frames, keys=['x', 'y', 'z'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_group_keys2.png

You can also pass a dict to concat, in which case the dict keys will be used for the keys argument (unless other keys are specified):

In [28]: pieces = {'x': df1, 'y': df2, 'z': df3}

In [29]: result = pd.concat(pieces)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_dict.png
In [30]: result = pd.concat(pieces, keys=['z', 'y'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_dict_keys.png

The MultiIndex created has levels that are constructed from the passed keys and the index of the DataFrame pieces:

In [31]: result.index.levels
Out[31]: FrozenList([[u'z', u'y'], [4, 5, 6, 7, 8, 9, 10, 11]])

If you wish to specify other levels (as will occasionally be the case), you can do so using the levels argument:

In [32]: result = pd.concat(pieces, keys=['x', 'y', 'z'],
   ....:                 levels=[['z', 'y', 'x', 'w']],
   ....:                 names=['group_key'])
   ....: 

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_concat_dict_keys_names.png
In [33]: result.index.levels
Out[33]: FrozenList([[u'z', u'y', u'x', u'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])

Yes, this is fairly esoteric, but it is actually necessary for implementing things like GroupBy, where the order of a categorical variable is meaningful.

Appending rows to a DataFrame

While not especially efficient (since a new object must be created), you can append a single row to a DataFrame by passing a Series or dict to append, which returns a new DataFrame as above.

In [34]: s2 = pd.Series(['X0', 'X1', 'X2', 'X3'], index=['A', 'B', 'C', 'D'])

In [35]: result = df1.append(s2, ignore_index=True)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_append_series_as_row.png

You should use ignore_index with this method to instruct the DataFrame to discard its index. If you wish to preserve the index, you should construct an appropriately-indexed DataFrame and append or concatenate those objects.
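
The index-preserving alternative mentioned above can be sketched as follows; the 'new_row' label is purely illustrative:

row = pd.DataFrame([['X0', 'X1', 'X2', 'X3']],
                   columns=['A', 'B', 'C', 'D'],
                   index=['new_row'])
df1.append(row)   # keeps labels 0-3 from df1 and adds the 'new_row' label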

You can also pass a list of dicts or Series:

In [36]: dicts = [{'A': 1, 'B': 2, 'C': 3, 'X': 4},
   ....:          {'A': 5, 'B': 6, 'C': 7, 'Y': 8}]
   ....: 

In [37]: result = df1.append(dicts, ignore_index=True)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_append_dits.png

Database-style DataFrame joining/merging

pandas has full-featured, high-performance in-memory join operations idiomatically very similar to relational databases like SQL. These methods perform significantly better (in some cases well over an order of magnitude better) than other open source implementations (like base::merge.data.frame in R). The reason for this is careful algorithmic design and the internal layout of the data in DataFrame.

See the cookbook for some advanced strategies.

Users who are familiar with SQL but new to pandas might be interested in a comparison with SQL.

pandas provides a single function, merge, as the entry point for all standard database join operations between DataFrame objects:

pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,
         left_index=False, right_index=False, sort=True,
         suffixes=('_x', '_y'), copy=True, indicator=False)

The return type will be the same as left. If left is a DataFrame and right is a subclass of DataFrame, the return type will still be DataFrame.

merge is a function in the pandas namespace, and it is also available as a DataFrame instance method, with the calling DataFrame being implicitly considered the left object in the join.
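
For example, these two calls express the same join; this is a sketch using frames named left and right such as those defined in the primer below:

pd.merge(left, right, on='key')
left.merge(right, on='key')      # the calling frame is implicitly the left side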

The related DataFrame.join method uses merge internally for the index-on-index (by default) and column(s)-on-index join. If you are joining on index only, you may wish to use DataFrame.join to save yourself some typing.

Brief primer on merge methods (relational algebra)

Experienced users of relational databases like SQL will be familiar with the terminology used to describe join operations between two SQL-table-like structures (DataFrame objects). There are several cases to consider which are very important to understand: one-to-one joins (for example, joining two DataFrame objects on their indexes, which must contain unique values), many-to-one joins (for example, joining an index, which is unique, to one or more columns in a different DataFrame), and many-to-many joins (joining columns on columns).

Note

When joining columns on columns (potentially a many-to-many join), any indexes on the passed DataFrame objects will be discarded.

It is worth spending some time understanding the result of the many-to-many join case. In SQL / standard relational algebra, if a key combination appears more than once in both tables, the resulting table will have the Cartesian product of the associated data. Here is a very basic example with one unique key combination:

In [38]: left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
   ....:                      'A': ['A0', 'A1', 'A2', 'A3'],
   ....:                      'B': ['B0', 'B1', 'B2', 'B3']})
   ....: 

In [39]: right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
   ....:                       'C': ['C0', 'C1', 'C2', 'C3'],
   ....:                       'D': ['D0', 'D1', 'D2', 'D3']})
   ....: 

In [40]: result = pd.merge(left, right, on='key')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_on_key.png
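
To make the many-to-many (Cartesian product) behaviour described above concrete, here is a small sketch with illustrative frames in which the key 'K0' appears twice on each side, so the merged result contains 2 x 2 = 4 rows for that key:

left_dup = pd.DataFrame({'key': ['K0', 'K0'], 'A': ['A0', 'A1']})
right_dup = pd.DataFrame({'key': ['K0', 'K0'], 'C': ['C0', 'C1']})
pd.merge(left_dup, right_dup, on='key')   # 4 rows: every A pairs with every C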

Here is a more complicated example with multiple join keys:

In [41]: left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
   ....:                      'key2': ['K0', 'K1', 'K0', 'K1'],
   ....:                      'A': ['A0', 'A1', 'A2', 'A3'],
   ....:                      'B': ['B0', 'B1', 'B2', 'B3']})
   ....: 

In [42]: right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
   ....:                       'key2': ['K0', 'K0', 'K0', 'K0'],
   ....:                       'C': ['C0', 'C1', 'C2', 'C3'],
   ....:                       'D': ['D0', 'D1', 'D2', 'D3']})
   ....: 

In [43]: result = pd.merge(left, right, on=['key1', 'key2'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_on_key_multiple.png

The how argument to merge specifies how to determine which keys are to be included in the resulting table. If a key combination does not appear in either the left or right tables, the values in the joined table will be NA. Here is a summary of the how options and their SQL equivalent names:

Merge method    SQL Join Name       Description
left            LEFT OUTER JOIN     Use keys from left frame only
right           RIGHT OUTER JOIN    Use keys from right frame only
outer           FULL OUTER JOIN     Use union of keys from both frames
inner           INNER JOIN          Use intersection of keys from both frames

In [44]: result = pd.merge(left, right, how='left', on=['key1', 'key2'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_on_key_left.png
In [45]: result = pd.merge(left, right, how='right', on=['key1', 'key2'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_on_key_right.png
In [46]: result = pd.merge(left, right, how='outer', on=['key1', 'key2'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_on_key_outer.png
In [47]: result = pd.merge(left, right, how='inner', on=['key1', 'key2'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_on_key_inner.png

The merge indicator

New in version 0.17.0.

merge now accepts the argument indicator. If True, a Categorical-type column called _merge will be added to the output object that takes on the following values:

Observation Origin                    _merge value
Merge key only in 'left' frame        left_only
Merge key only in 'right' frame       right_only
Merge key in both frames              both

In [48]: df1 = pd.DataFrame({'col1': [0, 1], 'col_left':['a', 'b']})

In [49]: df2 = pd.DataFrame({'col1': [1, 2, 2],'col_right':[2, 2, 2]})

In [50]: pd.merge(df1, df2, on='col1', how='outer', indicator=True)
Out[50]: 
   col1 col_left  col_right      _merge
0     0        a        NaN   left_only
1     1        b        2.0        both
2     2      NaN        2.0  right_only
3     2      NaN        2.0  right_only

The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.

In [51]: pd.merge(df1, df2, on='col1', how='outer', indicator='indicator_column')
Out[51]: 
   col1 col_left  col_right indicator_column
0     0        a        NaN        left_only
1     1        b        2.0             both
2     2      NaN        2.0       right_only
3     2      NaN        2.0       right_only

Joining on index

DataFrame.join is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame. Here is a very basic example:

In [52]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
   ....:                      'B': ['B0', 'B1', 'B2']},
   ....:                      index=['K0', 'K1', 'K2'])
   ....: 

In [53]: right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
   ....:                       'D': ['D0', 'D2', 'D3']},
   ....:                       index=['K0', 'K2', 'K3'])
   ....: 

In [54]: result = left.join(right)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_join.png
In [55]: result = left.join(right, how='outer')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_join_outer.png
In [56]: result = left.join(right, how='inner')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_join_inner.png

The data alignment here is on the indexes (row labels). This same behavior can be achieved using merge plus additional arguments instructing it to use the indexes:

In [57]: result = pd.merge(left, right, left_index=True, right_index=True, how='outer')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_index_outer.png
In [58]: result = pd.merge(left, right, left_index=True, right_index=True, how='inner');

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_index_inner.png

Joining key columns on an index

join takes an optional on argument which may be a column or multiple column names that the passed DataFrame is to be aligned on. These two function calls are completely equivalent:

left.join(right, on=key_or_keys)
pd.merge(left, right, left_on=key_or_keys, right_index=True,
      how='left', sort=False)

Obviously you can choose whichever form you find more convenient. For many-to-one joins (where one of the DataFrames is already indexed by the join key), using join may be more convenient. Here is a simple example:

In [59]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
   ....:                      'B': ['B0', 'B1', 'B2', 'B3'],
   ....:                      'key': ['K0', 'K1', 'K0', 'K1']})
   ....: 

In [60]: right = pd.DataFrame({'C': ['C0', 'C1'],
   ....:                       'D': ['D0', 'D1']},
   ....:                       index=['K0', 'K1'])
   ....: 

In [61]: result = left.join(right, on='key')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_join_key_columns.png
In [62]: result = pd.merge(left, right, left_on='key', right_index=True,
   ....:                   how='left', sort=False);
   ....: 

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_key_columns.png

To join on multiple keys, the passed DataFrame must have a MultiIndex:

In [63]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
   ....:                      'B': ['B0', 'B1', 'B2', 'B3'],
   ....:                      'key1': ['K0', 'K0', 'K1', 'K2'],
   ....:                      'key2': ['K0', 'K1', 'K0', 'K1']})
   ....: 

In [64]: index = pd.MultiIndex.from_tuples([('K0', 'K0'), ('K1', 'K0'),
   ....:                                   ('K2', 'K0'), ('K2', 'K1')])
   ....: 

In [65]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
   ....:                    'D': ['D0', 'D1', 'D2', 'D3']},
   ....:                   index=index)
   ....: 

Now this can be joined by passing the two key column names:

In [66]: result = left.join(right, on=['key1', 'key2'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_join_multikeys.png

The default for DataFrame.join is to perform a left join (essentially a "VLOOKUP" operation, for Excel users), which uses only the keys found in the calling DataFrame. Other join types, for example an inner join, can be just as easily performed:

In [67]: result = left.join(right, on=['key1', 'key2'], how='inner')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_join_multikeys_inner.png

As you can see, this drops any rows where there was no match.

Joining a single Index to a Multi-index

New in version 0.14.0.

You can join a singly-indexed DataFrame with a level of a multi-indexed DataFrame. The level will match on the name of the index of the singly-indexed frame against a level name of the multi-indexed frame.

In [68]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
   ....:                      'B': ['B0', 'B1', 'B2']},
   ....:                      index=pd.Index(['K0', 'K1', 'K2'], name='key'))
   ....: 

In [69]: index = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
   ....:                                   ('K2', 'Y2'), ('K2', 'Y3')],
   ....:                                    names=['key', 'Y'])
   ....: 

In [70]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
   ....:                       'D': ['D0', 'D1', 'D2', 'D3']},
   ....:                       index=index)
   ....: 

In [71]: result = left.join(right, how='inner')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_join_multiindex_inner.png

This is equivalent but less verbose and more memory efficient / faster than this:

In [72]: result = pd.merge(left.reset_index(), right.reset_index(),
   ....:       on=['key'], how='inner').set_index(['key','Y'])
   ....: 

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_multiindex_alternative.png

Joining with two multi-indexes

This is not implemented via join at the moment, however it can be done using the following method:

In [73]: index = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
   ....:                                    ('K1', 'X2')],
   ....:                                     names=['key', 'X'])
   ....: 

In [74]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
   ....:                      'B': ['B0', 'B1', 'B2']},
   ....:                       index=index)
   ....: 

In [75]: result = pd.merge(left.reset_index(), right.reset_index(),
   ....:                   on=['key'], how='inner').set_index(['key','X','Y'])
   ....: 

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_two_multiindex.png

Overlapping value columns

The merge suffixes argument takes a tuple or list of strings to append to overlapping column names in the input DataFrames to disambiguate the result columns:

In [76]: left = pd.DataFrame({'k': ['K0', 'K1', 'K2'], 'v': [1, 2, 3]})

In [77]: right = pd.DataFrame({'k': ['K0', 'K0', 'K3'], 'v': [4, 5, 6]})

In [78]: result = pd.merge(left, right, on='k')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_overlapped.png
In [79]: result = pd.merge(left, right, on='k', suffixes=['_l', '_r'])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_overlapped_suffix.png

DataFrame.join has lsuffix and rsuffix arguments which behave similarly.

In [80]: left = left.set_index('k')

In [81]: right = right.set_index('k')

In [82]: result = left.join(right, lsuffix='_l', rsuffix='_r')

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_merge_overlapped_multi_suffix.png

Joining multiple DataFrame or Panel objects

A list or tuple of DataFrames can also be passed to DataFrame.join to join them together on their indexes. The same is true for Panel.join.

In [83]: right2 = pd.DataFrame({'v': [7, 8, 9]}, index=['K1', 'K1', 'K2'])

In [84]: result = left.join([right, right2])

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_join_multi_df.png

Merging together values within Series or DataFrame columns

Another fairly common situation is to have two like-indexed (or similarly indexed) Series or DataFrame objects and wanting to "patch" values in one object from values for matching indices in the other. Here is an example:

In [85]: df1 = pd.DataFrame([[np.nan, 3., 5.], [-4.6, np.nan, np.nan],
   ....:                    [np.nan, 7., np.nan]])
   ....: 

In [86]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5., 1.6, 4]],
   ....:                    index=[1, 2])
   ....: 

For this, use the combine_first method:

In [87]: result = df1.combine_first(df2)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_combine_first.png

Note that this method only takes values from the right DataFrame if they are missing in the left DataFrame. A related method, update, alters non-NA values in place:

In [88]: df1.update(df2)

http://pandas.pydata.org/pandas-docs/version/0.19.2/_images/merging_update.png
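
To restate the contrast as a sketch, reusing the df1 and df2 from this example: combine_first returns a new frame and only fills positions that are NaN in the caller, while update modifies the caller in place (and returns None) wherever the other frame has a non-NA value:

patched = df1.combine_first(df2)   # new object; NaN cells in df1 filled from df2
df1.update(df2)                    # df1 changed in place wherever df2 is non-NA; returns None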

Timeseries friendly merging

Merging Ordered Data

A merge_ordered() function allows combining time series and other ordered data. In particular it has an optional fill_method keyword to fill/interpolate missing data:

In [89]: left = pd.DataFrame({'k': ['K0', 'K1', 'K1', 'K2'],
   ....:                      'lv': [1, 2, 3, 4],
   ....:                      's': ['a', 'b', 'c', 'd']})
   ....: 

In [90]: right = pd.DataFrame({'k': ['K1', 'K2', 'K4'],
   ....:                       'rv': [1, 2, 3]})
   ....: 

In [91]: pd.merge_ordered(left, right, fill_method='ffill', left_by='s')
Out[91]: 
     k   lv  s   rv
0   K0  1.0  a  NaN
1   K1  1.0  a  1.0
2   K2  1.0  a  2.0
3   K4  1.0  a  3.0
4   K1  2.0  b  1.0
5   K2  2.0  b  2.0
6   K4  2.0  b  3.0
7   K1  3.0  c  1.0
8   K2  3.0  c  2.0
9   K4  3.0  c  3.0
10  K1  NaN  d  1.0
11  K2  4.0  d  2.0
12  K4  4.0  d  3.0
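
For contrast, a sketch of the same call without the fill_method and left_by keywords; this is a plain ordered outer merge on 'k' that simply leaves unmatched cells as NaN (assuming the left and right frames defined just above):

pd.merge_ordered(left, right, on='k')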

Merging AsOf

New in version 0.19.0.

A merge_asof() is similar to an ordered left-join except that we match on nearest key rather than equal keys. For each row in the left DataFrame, we select the last row in the right DataFrame whose on key is less than the left's key. Both DataFrames must be sorted by the key.
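
Before the grouped trades/quotes example below, here is a minimal sketch of a plain (ungrouped) asof merge on integer keys, with illustrative frames:

left_simple = pd.DataFrame({'a': [1, 5, 10], 'left_val': ['a', 'b', 'c']})
right_simple = pd.DataFrame({'a': [1, 2, 3, 6, 7], 'right_val': [1, 2, 3, 6, 7]})
pd.merge_asof(left_simple, right_simple, on='a')
# a=1  -> right_val 1 (exact matches are allowed by default)
# a=5  -> right_val 3 (the last right key not exceeding 5)
# a=10 -> right_val 7 (the last right key not exceeding 10)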

Optionally, an asof merge can perform a group-wise merge. This matches the by key equally, in addition to the nearest match on the on key.

For example, we might have trades and quotes and we want to asof merge them.

In [92]: trades = pd.DataFrame({
   ....:     'time': pd.to_datetime(['20160525 13:30:00.023',
   ....:                             '20160525 13:30:00.038',
   ....:                             '20160525 13:30:00.048',
   ....:                             '20160525 13:30:00.048',
   ....:                             '20160525 13:30:00.048']),
   ....:     'ticker': ['MSFT', 'MSFT',
   ....:                'GOOG', 'GOOG', 'AAPL'],
   ....:     'price': [51.95, 51.95,
   ....:               720.77, 720.92, 98.00],
   ....:     'quantity': [75, 155,
   ....:                  100, 100, 100]},
   ....:     columns=['time', 'ticker', 'price', 'quantity'])
   ....: 

In [93]: quotes = pd.DataFrame({
   ....:     'time': pd.to_datetime(['20160525 13:30:00.023',
   ....:                             '20160525 13:30:00.023',
   ....:                             '20160525 13:30:00.030',
   ....:                             '20160525 13:30:00.041',
   ....:                             '20160525 13:30:00.048',
   ....:                             '20160525 13:30:00.049',
   ....:                             '20160525 13:30:00.072',
   ....:                             '20160525 13:30:00.075']),
   ....:     'ticker': ['GOOG', 'MSFT', 'MSFT',
   ....:                'MSFT', 'GOOG', 'AAPL', 'GOOG',
   ....:                'MSFT'],
   ....:     'bid': [720.50, 51.95, 51.97, 51.99,
   ....:             720.50, 97.99, 720.50, 52.01],
   ....:     'ask': [720.93, 51.96, 51.98, 52.00,
   ....:             720.93, 98.01, 720.88, 52.03]},
   ....:     columns=['time', 'ticker', 'bid', 'ask'])
   ....: 
In [94]: trades
Out[94]: 
                     time ticker   price  quantity
0 2016-05-25 13:30:00.023   MSFT   51.95        75
1 2016-05-25 13:30:00.038   MSFT   51.95       155
2 2016-05-25 13:30:00.048   GOOG  720.77       100
3 2016-05-25 13:30:00.048   GOOG  720.92       100
4 2016-05-25 13:30:00.048   AAPL   98.00       100

In [95]: quotes
Out[95]: 
                     time ticker     bid     ask
0 2016-05-25 13:30:00.023   GOOG  720.50  720.93
1 2016-05-25 13:30:00.023   MSFT   51.95   51.96
2 2016-05-25 13:30:00.030   MSFT   51.97   51.98
3 2016-05-25 13:30:00.041   MSFT   51.99   52.00
4 2016-05-25 13:30:00.048   GOOG  720.50  720.93
5 2016-05-25 13:30:00.049   AAPL   97.99   98.01
6 2016-05-25 13:30:00.072   GOOG  720.50  720.88
7 2016-05-25 13:30:00.075   MSFT   52.01   52.03

By default we are taking the asof of the quotes.

In [96]: pd.merge_asof(trades, quotes,
   ....:               on='time',
   ....:               by='ticker')
   ....: 
Out[96]: 
                     time ticker   price  quantity     bid     ask
0 2016-05-25 13:30:00.023   MSFT   51.95        75   51.95   51.96
1 2016-05-25 13:30:00.038   MSFT   51.95       155   51.97   51.98
2 2016-05-25 13:30:00.048   GOOG  720.77       100  720.50  720.93
3 2016-05-25 13:30:00.048   GOOG  720.92       100  720.50  720.93
4 2016-05-25 13:30:00.048   AAPL   98.00       100     NaN     NaN

We only asof within 2ms between the quote time and the trade time.

In [97]: pd.merge_asof(trades, quotes,
   ....:               on='time',
   ....:               by='ticker',
   ....:               tolerance=pd.Timedelta('2ms'))
   ....: 
Out[97]: 
                     time ticker   price  quantity     bid     ask
0 2016-05-25 13:30:00.023   MSFT   51.95        75   51.95   51.96
1 2016-05-25 13:30:00.038   MSFT   51.95       155     NaN     NaN
2 2016-05-25 13:30:00.048   GOOG  720.77       100  720.50  720.93
3 2016-05-25 13:30:00.048   GOOG  720.92       100  720.50  720.93
4 2016-05-25 13:30:00.048   AAPL   98.00       100     NaN     NaN

We only asof within 10ms between the quote time and the trade time, and we exclude exact matches on time. Note that though we exclude the exact matches (of the quotes), prior quotes do propagate to that point in time.

In [98]: pd.merge_asof(trades, quotes,
   ....:               on='time',
   ....:               by='ticker',
   ....:               tolerance=pd.Timedelta('10ms'),
   ....:               allow_exact_matches=False)
   ....: 
Out[98]: 
                     time ticker   price  quantity    bid    ask
0 2016-05-25 13:30:00.023   MSFT   51.95        75    NaN    NaN
1 2016-05-25 13:30:00.038   MSFT   51.95       155  51.97  51.98
2 2016-05-25 13:30:00.048   GOOG  720.77       100    NaN    NaN
3 2016-05-25 13:30:00.048   GOOG  720.92       100    NaN    NaN
4 2016-05-25 13:30:00.048   AAPL   98.00       100    NaN    NaN