spaCy provides the noun_chunks property for retrieving the noun phrases in a document. The built-in function english_noun_chunks (shown below) selects chunk heads by checking word.pos == NOUN:

from spacy.symbols import NOUN

def english_noun_chunks(doc):
    # Dependency labels that can mark the head of a noun phrase.
    labels = ['nsubj', 'dobj', 'nsubjpass', 'pcomp', 'pobj',
              'attr', 'root']
    np_deps = [doc.vocab.strings[label] for label in labels]
    conj = doc.vocab.strings['conj']
    np_label = doc.vocab.strings['NP']
    for i in range(len(doc)):
        word = doc[i]
        if word.pos == NOUN and word.dep in np_deps:
            # Yield the span from the noun's leftmost descendant to the noun itself.
            yield word.left_edge.i, word.i + 1, np_label
        elif word.pos == NOUN and word.dep == conj:
            # Walk up the conjunction chain to find the phrase head.
            head = word.head
            while head.dep == conj and head.head.i < head.i:
                head = head.head
            # If the head is an NP, and we're coordinated to it, we're an NP
            if head.dep in np_deps:
                yield word.left_edge.i, word.i + 1, np_label

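For reference, a minimal sketch of what the built-in property yields, assuming an English model such as en_core_web_sm is installed (model name and output are illustrative):

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp(u'The quick brown fox jumps over the lazy dog.')
for chunk in doc.noun_chunks:
    # Each chunk is a Span; chunk.root is the noun heading the phrase.
    print(chunk.text, chunk.root.tag_)
# Typical output:
# The quick brown fox NN
# the lazy dog NN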

I would like to extract information from sentences that match certain regular expressions over part-of-speech tags. For example, the phrases I want are zero or more adjectives followed by one or more nouns:

{(<JJ>)*(<NN | NNS | NNP>)+}

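(The pattern above is written in NLTK chunk-grammar notation. Purely for illustration, a minimal sketch of how it could run with NLTK's RegexpParser, assuming nltk and its tokenizer/tagger data are installed:)

import nltk

# NP: zero or more adjectives followed by one or more nouns.
grammar = 'NP: {<JJ>*<NN|NNS|NNP>+}'
parser = nltk.RegexpParser(grammar)
tagged = nltk.pos_tag(nltk.word_tokenize('Great work!'))
print(parser.parse(tagged))  # a Tree whose NP subtrees match the pattern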

Is this possible without rewriting the english_noun_chunks function?

Best answer

You could rewrite this function without losing any performance, since it is implemented in pure Python, but why not just filter the chunks after retrieving them?

import re
import spacy

def filtered_chunks(doc, pattern):
    for chunk in doc.noun_chunks:
        # Build a tag signature such as '<JJ><NN>' for each chunk.
        signature = ''.join('<%s>' % w.tag_ for w in chunk)
        # Use pattern.fullmatch instead if the entire chunk must match.
        if pattern.match(signature) is not None:
            yield chunk

nlp = spacy.load('en_core_web_sm')  # 'en' in older spaCy versions
doc = nlp(u'Great work!')
pattern = re.compile(r'(<JJ>)*(<NN>|<NNS>|<NNP>)+')

print(list(filtered_chunks(doc, pattern)))  # prints [Great work]
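As an alternative sketch, not part of the original answer: newer spaCy versions can express the same tag pattern directly with the rule-based Matcher (assuming spaCy 2.1+, where the IN predicate is available; the spaCy 3.x matcher.add signature is shown):

from spacy.matcher import Matcher

matcher = Matcher(nlp.vocab)
# Zero or more adjectives followed by one or more nouns.
adj_noun = [{'TAG': 'JJ', 'OP': '*'},
            {'TAG': {'IN': ['NN', 'NNS', 'NNP']}, 'OP': '+'}]
matcher.add('ADJ_NOUN', [adj_noun])  # spaCy 2.x: matcher.add('ADJ_NOUN', None, adj_noun)

# The Matcher returns all matching spans, including overlapping ones.
for match_id, start, end in matcher(doc):
    print(doc[start:end])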
