Data deduplication

Introduction

This example shows how to find records in a dataset that belong to the same entity. In our case, we try to deduplicate a dataset with records of persons. Records are linked within the dataset based on attributes such as first name, surname, sex, date of birth, place and address. The data used in this example is part of Febrl and is fictitious.

First, import the recordlinkage module. The submodule recordlinkage.datasets contains several datasets that can be used for testing. For this example, we use Febrl dataset 1. This dataset contains 1000 records, of which 500 are originals and 500 are duplicates, with exactly one duplicate per original record. The dataset can be loaded with the function load_febrl1.

In [1]:
import recordlinkage
from recordlinkage.datasets import load_febrl1

The dataset is loaded with the following code. The returned dataset is of type pandas.DataFrame. This makes it easy to manipulate the data if desired. For details about data manipulation with pandas, see the comprehensive documentation at http://pandas.pydata.org/.

In [2]:
dfA = load_febrl1()

dfA.head()
Out[2]:
given_name surname street_number address_1 address_2 suburb postcode state date_of_birth soc_sec_id
rec_id
rec-223-org NaN waller 6 tullaroop street willaroo st james 4011 wa 19081209 6988048
rec-122-org lachlan berry 69 giblin street killarney bittern 4814 qld 19990219 7364009
rec-373-org deakin sondergeld 48 goldfinch circuit kooltuo canterbury 2776 vic 19600210 2635962
rec-10-dup-0 kayla harrington NaN maltby circuit coaling coolaroo 3465 nsw 19150612 9004242
rec-227-org luke purdon 23 ramsay place mirani garbutt 2260 vic 19831024 8099933

Make record pairs

It is intuitive to start by comparing each record in DataFrame dfA with every other record in dfA. In other words, we want to make record pairs, where each pair contains two different records of dfA. This process of making record pairs is also called 'indexing'. With the recordlinkage module, indexing is easy. First, instantiate the FullIndex class. This class builds a full index when its .index(...) method is called. For deduplication of a single dataframe, one dataframe is sufficient as input argument.

In [3]:
indexer = recordlinkage.FullIndex()
pairs = indexer.index(dfA)

With the method index, all possible (and unique) record pairs are made. The method returns a pandas.MultiIndex. Because we deduplicate a single dataframe, the number of pairs is n(n-1)/2, where n is the number of records in dfA.

In [4]:
print(len(dfA), len(pairs))
# (1000*1000-1000)/2 = 499500
1000 499500

Many of these record pairs do not belong to the same person. The recordlinkage toolkit offers more advanced indexing methods to reduce the number of record pairs. Obvious non-matches are left out of the index. Note that if a matching record pair is not included in the index, it cannot be matched anymore.

One of the best-known indexing methods is blocking. This method includes only record pairs that are identical on one or more attributes of the person (or entity in general). Blocking is available in the recordlinkage module.

In [5]:
indexer = recordlinkage.BlockIndex(on='given_name')
pairs = indexer.index(dfA)

print(len(pairs))
2082

The argument ‘given_name’ is the blocking variable. This variable has to be the name of a column in dfA. It is possible to pass a list of column names to block on multiple variables. Blocking on multiple variables will reduce the number of record pairs even further.
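The effect of blocking on one versus several variables can be sketched without the library: within each block of n identical records there are n(n-1)/2 unique pairs, so summing over the blocks gives the pair count. A minimal pandas-only sketch with made-up toy data (not the library's implementation):

```python
import pandas as pd

# Toy data to illustrate blocking; the values are made up for this sketch.
df = pd.DataFrame({
    "given_name": ["anna", "anna", "anna", "ben", "ben"],
    "surname":    ["smith", "smith", "jones", "cole", "cole"],
})

def count_block_pairs(df, keys):
    # Within each block of n identical records there are n*(n-1)/2 unique pairs.
    sizes = df.groupby(keys).size()
    return int((sizes * (sizes - 1) // 2).sum())

print(count_block_pairs(df, ["given_name"]))             # 4: 3 annas -> 3 pairs, 2 bens -> 1
print(count_block_pairs(df, ["given_name", "surname"]))  # 2: the stricter block key splits the annas
```

Adding `surname` to the block key splits the three annas into two blocks, dropping the pair count from 4 to 2 — the same mechanism that reduced 499500 full-index pairs to 2082 above.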

Another implemented indexing method is Sorted Neighbourhood Indexing (Pairs.sortedneighbourhood). This method is very useful when there are many misspellings in the strings used for indexing. In fact, sorted neighbourhood indexing is a generalisation of blocking. See the documentation for details about sorted neighbourhood indexing.
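The idea behind sorted neighbourhood indexing can be shown in a few lines of pure Python: sort the records on a key, then pair each record with its neighbours inside a sliding window, so that misspelled values that sort close together still form a pair. This is a minimal sketch with made-up values, not the library's implementation:

```python
# Made-up sorting keys; note the misspelled pairs johnson/jonson and smith/smyth.
records = ["johnson", "jonson", "smith", "smyth", "taylor"]

def sorted_neighbourhood_pairs(keys, window=3):
    # Sort record positions by key, then pair each record with the
    # next (window - 1) records in sorted order.
    ordered = sorted(range(len(keys)), key=lambda i: keys[i])
    pairs = set()
    for pos, i in enumerate(ordered):
        for j in ordered[pos + 1 : pos + window]:
            pairs.add((min(i, j), max(i, j)))
    return pairs

pairs = sorted_neighbourhood_pairs(records, window=2)
print(sorted(pairs))  # [(0, 1), (1, 2), (2, 3), (3, 4)]
```

Exact blocking on these strings would put "johnson" and "jonson" in different blocks, but here they are adjacent after sorting and end up paired; a larger window trades more candidate pairs for more tolerance to misspellings.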

Compare records

Each record pair is a candidate match. To classify the candidate record pairs into matches and non-matches, the records are compared on the attributes they have in common. The recordlinkage module has a class named Compare for this purpose. The following code shows how to compare attributes.

In [6]:
# This cell can take some time to compute.
compare_cl = recordlinkage.Compare()

compare_cl.exact('given_name', 'given_name', label='given_name')
compare_cl.string('surname', 'surname', method='jarowinkler', threshold=0.85, label='surname')
compare_cl.exact('date_of_birth', 'date_of_birth', label='date_of_birth')
compare_cl.exact('suburb', 'suburb', label='suburb')
compare_cl.exact('state', 'state', label='state')
compare_cl.string('address_1', 'address_1', threshold=0.85, label='address_1')

features = compare_cl.compute(pairs, dfA)

The comparison of the record pairs starts when the compute method is called. All attribute comparisons are stored in a DataFrame with the features as columns and the record pairs as rows. The first 10 comparison vectors are:

In [7]:
features.head(10)
Out[7]:
given_name surname date_of_birth suburb state address_1
rec_id rec_id
rec-122-org rec-183-dup-0 1 0.0 0 0 0 0.0
rec-248-org 1 0.0 0 0 1 0.0
rec-469-org 1 0.0 0 0 0 0.0
rec-74-org 1 0.0 0 0 0 0.0
rec-183-org 1 0.0 0 0 0 0.0
rec-360-dup-0 1 0.0 0 0 0 0.0
rec-248-dup-0 1 0.0 0 0 1 0.0
rec-469-dup-0 1 0.0 0 0 0 0.0
rec-183-dup-0 rec-248-org 1 0.0 0 0 0 0.0
rec-469-org 1 0.0 0 0 1 0.0
In [8]:
features.describe()
Out[8]:
given_name surname date_of_birth suburb state address_1
count 2082.0 2082.000000 2082.000000 2082.000000 2082.000000 2082.000000
mean 1.0 0.144092 0.139289 0.108549 0.327089 0.133045
std 0.0 0.351268 0.346331 0.311148 0.469263 0.339705
min 1.0 0.000000 0.000000 0.000000 0.000000 0.000000
25% 1.0 0.000000 0.000000 0.000000 0.000000 0.000000
50% 1.0 0.000000 0.000000 0.000000 0.000000 0.000000
75% 1.0 0.000000 0.000000 0.000000 1.000000 0.000000
max 1.0 1.000000 1.000000 1.000000 1.000000 1.000000
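Each string feature in these columns is binary: the comparison returns 1.0 when the similarity of the two values clears the threshold (0.85 above) and 0.0 otherwise. A rough illustration of that thresholding, using difflib's ratio as a stand-in similarity measure (the example above uses Jaro-Winkler):

```python
from difflib import SequenceMatcher

def string_feature(a, b, threshold=0.85):
    # Stand-in similarity; the Compare step above uses Jaro-Winkler instead.
    sim = SequenceMatcher(None, a, b).ratio()
    return 1.0 if sim >= threshold else 0.0

print(string_feature("harrington", "harringtun"))  # 1.0: small typo clears the threshold
print(string_feature("harrington", "purdon"))      # 0.0: clearly different surnames
```

This is why near-identical surnames with a single typo still contribute a full point to the sum in the classification step below, while genuinely different values contribute nothing.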

The last step is to decide which records belong to the same person. In this example, we keep it simple: sum the comparison scores of each pair and treat pairs with a high total as matches.

In [9]:
# Sum the comparison results.
features.sum(axis=1).value_counts().sort_index(ascending=False)
Out[9]:
6.0     142
5.0     145
4.0      30
3.0       9
2.0     376
1.0    1380
dtype: int64
In [10]:
matches = features[features.sum(axis=1) > 3]

print(len(matches))
matches.head(10)
317
Out[10]:
given_name surname date_of_birth suburb state address_1
rec_id rec_id
rec-183-dup-0 rec-183-org 1 1.0 1 1 1 1.0
rec-122-dup-0 rec-122-org 1 1.0 1 1 1 1.0
rec-248-dup-0 rec-248-org 1 1.0 1 1 1 1.0
rec-373-dup-0 rec-373-org 1 1.0 1 1 1 1.0
rec-10-dup-0 rec-10-org 1 1.0 1 1 1 1.0
rec-342-dup-0 rec-342-org 1 1.0 0 1 1 1.0
rec-330-dup-0 rec-330-org 1 0.0 1 1 1 0.0
rec-397-dup-0 rec-397-org 1 1.0 1 1 1 0.0
rec-472-dup-0 rec-472-org 1 1.0 1 1 1 0.0
rec-190-dup-0 rec-190-org 1 1.0 0 1 1 1.0
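The matches DataFrame holds comparison vectors, not the records themselves. To inspect the underlying records, the pair index can be used to look up both sides of every matched pair in the original dataframe. A minimal pandas sketch with made-up stand-in data (on the real data, dfA and matches.index from the steps above would be used instead):

```python
import pandas as pd

# Stand-ins for dfA and matches.index; the values are made up for this sketch.
dfA = pd.DataFrame(
    {"given_name": ["kayla", "kayla", "luke"]},
    index=pd.Index(["rec-10-org", "rec-10-dup-0", "rec-227-org"], name="rec_id"),
)
match_index = pd.MultiIndex.from_tuples(
    [("rec-10-dup-0", "rec-10-org")], names=["rec_id_1", "rec_id_2"]
)

# Look up each side of the matched pairs by record id.
left = dfA.loc[match_index.get_level_values(0)]
right = dfA.loc[match_index.get_level_values(1)]
print(left["given_name"].tolist(), right["given_name"].tolist())
```

Placing the two lookups side by side makes it easy to review the matched records manually or to merge duplicates into a single entity.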

Full code

In [11]:
import recordlinkage
from recordlinkage.datasets import load_febrl1

dfA = load_febrl1()

# Indexation step
pcl = recordlinkage.BlockIndex(on='given_name')
pairs = pcl.index(dfA)

# Comparison step
compare_cl = recordlinkage.Compare()

compare_cl.exact('given_name', 'given_name', label='given_name')
compare_cl.string('surname', 'surname', method='jarowinkler', threshold=0.85, label='surname')
compare_cl.exact('date_of_birth', 'date_of_birth', label='date_of_birth')
compare_cl.exact('suburb', 'suburb', label='suburb')
compare_cl.exact('state', 'state', label='state')
compare_cl.string('address_1', 'address_1', threshold=0.85, label='address_1')

features = compare_cl.compute(pairs, dfA)

# Classification step
matches = features[features.sum(axis=1) > 3]
print(len(matches))

317