
Testing string membership using the `in` keyword in Python is very slow

I have the following text dataset:

4 million paragraphs, each between 10 and 60 words long.

paragraphs = ['yarrow heart lamium bleeding daisy pea', 'sweet shasta sedum daisy yarrow rhododendron',
              'rhododendron pea shasta daisy gladiolus heart', 'gladiolus lamium rhododendron heart pea shasta',
              'heart daisy yarrow gladiolus rhododendron sedum', 'pea sedum sweet shasta yarrow bleeding',
              'yarrow sedum lamium sweet daisy gladiolus', 'heart daisy sweet bleeding pea shasta',
              'daisy sweet lamium rhododendron pea bleeding', 'daisy lamium rhododendron gladiolus yarrow',
              'rhododendron daisy lamium yarrow sedum', 'bleeding sedum pea heart daisy yarrow',
              'pea sweet yarrow gladiolus lamium shasta', 'pea rhododendron sweet daisy bleeding yarrow',
              'gladiolus daisy bleeding lamium sedum shasta', 'bleeding yarrow pea sedum daisy sweet',
              'lamium sweet gladiolus heart rhododendron daisy', 'lamium yarrow sedum pea heart shasta',
              'shasta lamium pea heart sedum yarrow', 'lamium bleeding daisy rhododendron gladiolus pea',
              'lamium yarrow shasta heart sweet gladiolus', 'pea shasta heart sweet yarrow gladiolus',
              'sedum shasta rhododendron daisy pea bleeding', 'sedum rhododendron shasta daisy lamium sweet',
              'sweet rhododendron yarrow heart sedum daisy', 'bleeding sedum heart gladiolus daisy',
              'lamium yarrow gladiolus pea sweet rhododendron', 'pea sedum bleeding daisy rhododendron',
              'shasta pea rhododendron daisy sedum sweet', 'lamium yarrow bleeding pea shasta sedum']

I also have a set of 30,000 unique sentences:

set_sentences = {'bleeding daisy yarrow', 'bleeding lamium shasta', 'bleeding sweet daisy',
                 'daisy lamium', 'daisy shasta yarrow', 'gladiolus lamium daisy',
                 'gladiolus shasta', 'heart daisy lamium', 'heart shasta lamium',
                 'heart sweet daisy', 'heart sweet lamium', 'lamium daisy shasta',
                 'lamium sweet pea', 'lamium yarrow', 'pea daisy rhododendron',
                 'pea shasta sweet', 'pea sweet gladiolus', 'rhododendron bleeding sedum',
                 'rhododendron daisy', 'rhododendron gladiolus shasta', 'sedum bleeding yarrow',
                 'sedum lamium bleeding', 'sweet bleeding pea', 'sweet lamium daisy',
                 'sweet shasta', 'yarrow gladiolus', 'yarrow sedum heart',
                 'yarrow sedum rhododendron', 'yarrow sedum shasta', 'yarrow sedum sweet'}

I want to check whether ANY of the sentences in the set appear in each of those 4 million paragraphs. If a paragraph contains at least one of the 30,000 sentences, I want to keep it; otherwise I discard it.

Here is my implementation, which works, but is very slow for that amount of data:

import pandas as pd

def membership_testing(para, set_item):
    # Worst case: 30,000 substring checks per paragraph.
    for item in set_item:
        if item in para:
            return 'VALID'

df = pd.DataFrame(data={'PARAGRAPH': paragraphs})
df['VALIDITY'] = df['PARAGRAPH'].apply(lambda x: membership_testing(x, set_sentences))
df['VALIDITY'] = df['VALIDITY'].fillna('INVALID')
df = df[df['VALIDITY'] == 'VALID'].reset_index(drop=True)

How could I improve my code? I tried using swifter, and it estimated around 5 hours for the full dataset!

Is there a way to speed things up, for example with Dask? I'm also open to using a different file format (CSV, etc.) if that would help when reading the data from disk.


Answer

from collections import defaultdict

def defaultdict_init():
    return defaultdict(defaultdict_init)

# let's split your sentences into words and build a kind of a tree, so any
# word check is O(1)
def sentences_to_tree(sentences):
    tree = defaultdict_init()

    for sentence in sentences:  # iterate over the argument, not the global set
        tree_ = tree
        for word in sentence.split(" "):
            tree_ = tree_[word]
        # mark the end of a complete sentence at this node with a
        # sentinel key
        tree_["__DOT__"] = True
    return tree

def filter_paragraphs(tree, paragraphs):
    for paragraph in paragraphs:
        words = paragraph.split(" ")
        for i in range(len(words)):
            word = words[i]
            if word in tree:
                # a paragraph word matches a first word of some sentences,
                # let's peek at next words of the paragraph to see if they
                # match the deeper leaves of our tree
                success = False
                tree_ = tree[word]

                if "__DOT__" in tree_:
                    # this single word is a match
                    yield paragraph
                    break

                for j in range(i + 1, len(words)):
                    if words[j] in tree_:
                        tree_ = tree_[words[j]]
                        if "__DOT__" in tree_:
                            # we matched the full sentence
                            yield paragraph
                            success = True
                            break
                    else:
                        break

                if success:
                    break

tree = sentences_to_tree(set_sentences)
result = list(filter_paragraphs(tree, paragraphs))

"""
In [2]: tree
Out[2]:
defaultdict(<function __main__.defaultdict_init()>,
            {'sweet': defaultdict(<function __main__.defaultdict_init()>,
                         {'lamium': defaultdict(<function __main__.defaultdict_init()>,
                                      {'daisy': defaultdict(<function __main__.defaultdict_init()>,
                                                   {'__DOT__': True})}),
                          'shasta': defaultdict(<function __main__.defaultdict_init()>,
                                      {'__DOT__': True}),
                          'bleeding': defaultdict(<function __main__.defaultdict_init()>,
                                      {'pea': defaultdict(<function __main__.defaultdict_init()>,
                                                   {'__DOT__': True})})}),
             'lamium': defaultdict(<function __main__.defaultdict_init()>,
                         {'sweet': defaultdict(<function __main__.defaultdict_init()>,
                                      {'pea': defaultdict(<function __main__.defaultdict_init()>,
                                                   {'__DOT__': True})}),
                          'daisy': defaultdict(<function __main__.defaultdict_init()>,
                                      {'shasta': defaultdict(<function __main__.defaultdict_init()>,
                                                   {'__DOT__': True})}),
                          'yarrow': defaultdict(<function __main__.defaultdict_init()>,
                                      {'__DOT__': True})}),
...

In [5]: result
Out[5]:
['sweet shasta sedum daisy yarrow rhododendron',
 'heart daisy yarrow gladiolus rhododendron sedum',
 'pea sedum sweet shasta yarrow bleeding',
 'heart daisy sweet bleeding pea shasta',
 'daisy lamium rhododendron gladiolus yarrow',
 'rhododendron daisy lamium yarrow sedum',
 'pea sweet yarrow gladiolus lamium shasta',
 'lamium sweet gladiolus heart rhododendron daisy',
 'lamium yarrow sedum pea heart shasta',
 'lamium yarrow shasta heart sweet gladiolus',
 'pea shasta heart sweet yarrow gladiolus',
 'sedum shasta rhododendron daisy pea bleeding',
 'sedum rhododendron shasta daisy lamium sweet',
 'lamium yarrow gladiolus pea sweet rhododendron',
 'shasta pea rhododendron daisy sedum sweet',
 'lamium yarrow bleeding pea shasta sedum']
"""

It’s time to profile:

In [18]: many_paragraphs = paragraphs * 100000

In [19]: len(many_paragraphs)
Out[19]: 3000000

In [20]: %timeit list(filter_paragraphs(tree, many_paragraphs))
3.72 s ± 11.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
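Since every sentence in the set is a short run of whole words, a simpler alternative to the hand-rolled trie is to bucket the sentences by word count and test each paragraph's n-grams against plain sets. This is a sketch under the same whole-word assumption, not a drop-in replacement for the code above; names like `build_ngram_index` are illustrative:

```python
def build_ngram_index(sentences):
    # Bucket sentences by word count; each bucket is a set of word
    # tuples, so each lookup is a single O(1) set membership test.
    index = {}
    for sentence in sentences:
        words = tuple(sentence.split())
        index.setdefault(len(words), set()).add(words)
    return index

def filter_by_ngrams(index, paragraphs):
    # Keep a paragraph if any of its n-grams (for the n values that
    # actually occur in the sentence set) is a known sentence.
    for paragraph in paragraphs:
        words = paragraph.split()
        if any(
            tuple(words[i:i + n]) in grams
            for n, grams in index.items()
            for i in range(len(words) - n + 1)
        ):
            yield paragraph
```

Used as `list(filter_by_ngrams(build_ngram_index(set_sentences), paragraphs))`, this should keep the same paragraphs as `filter_paragraphs` above, since both match consecutive whole-word sequences.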
User contributions licensed under: CC BY-SA