Here is the code snippet:
In [390]: t
Out[390]: ['my', 'phone', 'number', 'is', '1111', '1111', '1111']

In [391]: ner_tagger.tag(t)
Out[391]:
[('my', 'O'),
 ('phone', 'O'),
 ('number', 'O'),
 ('is', 'O'),
 ('1111\xa01111\xa01111', 'NUMBER')]
What I expect is:
Out[391]:
[('my', 'O'),
 ('phone', 'O'),
 ('number', 'O'),
 ('is', 'O'),
 ('1111', 'NUMBER'),
 ('1111', 'NUMBER'),
 ('1111', 'NUMBER')]
As you can see, the (made-up) phone number is joined by \xa0, which is a non-breaking space. Can I configure CoreNLP to keep these tokens apart without changing other default rules?
The ner_tagger is defined as:
ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
Best Answer
TL;DR

NLTK joins the list of tokens into a single string before passing it to the CoreNLP server. CoreNLP then re-tokenizes the input and glues the number-like tokens together with \xa0 (a non-breaking space).
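To make the TL;DR concrete, here is the joining step in isolation (plain Python, no server needed; the tagged output is the one shown in the question):

tokens = ['my', 'phone', 'number', 'is', '1111', '1111', '1111']
text = ' '.join(tokens)  # this is what NLTK actually sends to the server
# text == 'my phone number is 1111 1111 1111'
# CoreNLP's default PTB-style tokenizer then treats the run of digits as a
# single token, gluing the parts together with \xa0 (non-breaking space),
# which is why the tagger returns ('1111\xa01111\xa01111', 'NUMBER').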
In Long
Let's walk through the code. If we look at the tag() function of CoreNLPParser, we see that it calls tag_sents() and converts the input list of strings into a single string before calling raw_tag_sents(), which lets CoreNLP re-tokenize the input; see https://github.com/nltk/nltk/blob/develop/nltk/parse/corenlp.py#L348:
def tag_sents(self, sentences):
    """
    Tag multiple sentences.

    Takes multiple sentences as a list where each sentence is a list of
    tokens.

    :param sentences: Input sentences to tag
    :type sentences: list(list(str))
    :rtype: list(list(tuple(str, str))
    """
    # Converting list(list(str)) -> list(str)
    sentences = (' '.join(words) for words in sentences)
    return [sentences[0] for sentences in self.raw_tag_sents(sentences)]

def tag(self, sentence):
    """
    Tag a list of tokens.

    :rtype: list(tuple(str, str))

    >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
    >>> tokens = 'Rami Eid is studying at Stony Brook University in NY'.split()
    >>> parser.tag(tokens)
    [('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'),
    ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'O')]

    >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='pos')
    >>> tokens = "What is the airspeed of an unladen swallow ?".split()
    >>> parser.tag(tokens)
    [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'),
    ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'),
    ('unladen', 'JJ'), ('swallow', 'VB'), ('?', '.')]
    """
    return self.tag_sents([sentence])[0]
And when called, raw_tag_sents() passes the input on to the server using api_call():

def raw_tag_sents(self, sentences):
    """
    Tag multiple sentences.

    Takes multiple sentences as a list where each sentence is a string.

    :param sentences: Input sentences to tag
    :type sentences: list(str)
    :rtype: list(list(list(tuple(str, str)))
    """
    default_properties = {'ssplit.isOneSentence': 'true',
                          'annotators': 'tokenize,ssplit,'}
    # Supports only 'pos' or 'ner' tags.
    assert self.tagtype in ['pos', 'ner']
    default_properties['annotators'] += self.tagtype
    for sentence in sentences:
        tagged_data = self.api_call(sentence, properties=default_properties)
        yield [[(token['word'], token[self.tagtype]) for token in tagged_sentence['tokens']]
               for tagged_sentence in tagged_data['sentences']]
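api_call() is the piece that actually talks to the CoreNLP server. As a hedged sketch (a standalone illustration, not the exact NLTK implementation, which also merges in defaults such as outputFormat='json' and uses the parser's own encoding and session), it is roughly:

import json
import requests

def api_call(text, properties, url='http://localhost:9000'):
    # POST the raw sentence to the server; the annotation properties travel
    # as a JSON-encoded query parameter, the text as the request body.
    response = requests.post(
        url,
        params={'properties': json.dumps(properties)},
        data=text.encode('utf-8'),
        timeout=60,
    )
    response.raise_for_status()
    return response.json()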
So the question is: how do we fix this and get back exactly the tokens we passed in?
If we look at the options for the tokenizer in CoreNLP, we see the tokenize.whitespace option: https://stanfordnlp.github.io/CoreNLP/tokenize.html#options (see also Preventing tokens from containing a space in Stanford CoreNLP).
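As a sanity check before touching NLTK, we can send that option straight to the server. A minimal sketch, reusing the standalone api_call() helper sketched above (tokenize.whitespace and outputFormat come from the tokenizer docs linked above):

props = {'annotators': 'tokenize,ssplit,ner',
         'ssplit.isOneSentence': 'true',
         'tokenize.whitespace': 'true',  # split on spaces only, keep our tokens
         'outputFormat': 'json'}
data = api_call('my phone number is 1111 1111 1111', props)
print([(token['word'], token['ner'])
       for token in data['sentences'][0]['tokens']])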
If we make a few changes that allow extra properties to be passed in before api_call() is invoked, we can force the tokens to stay whitespace-delimited when they reach the CoreNLP server joined by spaces, e.g. changing the code to:

def tag_sents(self, sentences, properties=None):
    """
    Tag multiple sentences.

    Takes multiple sentences as a list where each sentence is a list of
    tokens.

    :param sentences: Input sentences to tag
    :type sentences: list(list(str))
    :rtype: list(list(tuple(str, str))
    """
    # Converting list(list(str)) -> list(str)
    sentences = (' '.join(words) for words in sentences)
    if properties is None:
        properties = {'tokenize.whitespace': 'true'}
    return [sentences[0] for sentences in self.raw_tag_sents(sentences, properties)]
def tag(self, sentence, properties=None):
    """
    Tag a list of tokens.

    :rtype: list(tuple(str, str))

    >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
    >>> tokens = 'Rami Eid is studying at Stony Brook University in NY'.split()
    >>> parser.tag(tokens)
    [('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'),
    ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'O')]

    >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='pos')
    >>> tokens = "What is the airspeed of an unladen swallow ?".split()
    >>> parser.tag(tokens)
    [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'),
    ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'),
    ('unladen', 'JJ'), ('swallow', 'VB'), ('?', '.')]
    """
    return self.tag_sents([sentence], properties)[0]

def raw_tag_sents(self, sentences, properties=None):
    """
    Tag multiple sentences.

    Takes multiple sentences as a list where each sentence is a string.

    :param sentences: Input sentences to tag
    :type sentences: list(str)
    :rtype: list(list(list(tuple(str, str)))
    """
    default_properties = {'ssplit.isOneSentence': 'true',
                          'annotators': 'tokenize,ssplit,'}
    default_properties.update(properties or {})
    # Supports only 'pos' or 'ner' tags.
    assert self.tagtype in ['pos', 'ner']
    default_properties['annotators'] += self.tagtype
    for sentence in sentences:
        tagged_data = self.api_call(sentence, properties=default_properties)
        yield [[(token['word'], token[self.tagtype]) for token in tagged_sentence['tokens']]
               for tagged_sentence in tagged_data['sentences']]
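For clarity, with properties=None the merged dict that raw_tag_sents() ends up sending to the server for an NER call is:

{'ssplit.isOneSentence': 'true',
 'tokenize.whitespace': 'true',
 'annotators': 'tokenize,ssplit,ner'}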
After the changes above:
>>> from nltk.parse.corenlp import CoreNLPParser
>>> ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
>>> sent = ['my', 'phone', 'number', 'is', '1111', '1111', '1111']
>>> ner_tagger.tag(sent)
[('my', 'O'), ('phone', 'O'), ('number', 'O'), ('is', 'O'), ('1111', 'DATE'), ('1111', 'DATE'), ('1111', 'DATE')]
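Note that each standalone 1111 is now tagged DATE rather than NUMBER, presumably because the NER model reads a lone four-digit token as a year; the tokens themselves, which is what the question asks about, stay intact. If you would rather not patch the NLTK source, the same effect can be had by subclassing. A minimal sketch, assuming only the nltk.parse.corenlp internals quoted above (the class name and structure are mine, not NLTK's):

from nltk.parse.corenlp import CoreNLPParser

class WhitespaceTagger(CoreNLPParser):
    """Force tokenize.whitespace so the caller's tokens survive intact."""

    def tag_sents(self, sentences):
        properties = {'ssplit.isOneSentence': 'true',
                      'tokenize.whitespace': 'true',
                      'annotators': 'tokenize,ssplit,' + self.tagtype}
        tagged = []
        for words in sentences:
            # Join with plain spaces; the server will split on them only.
            data = self.api_call(' '.join(words), properties=properties)
            tagged.append([(token['word'], token[self.tagtype])
                           for sent in data['sentences']
                           for token in sent['tokens']])
        return tagged

ner_tagger = WhitespaceTagger(url='http://localhost:9000', tagtype='ner')
ner_tagger.tag(['my', 'phone', 'number', 'is', '1111', '1111', '1111'])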
Regarding python - Why do the CoreNLP ner tagger and pos tagger join separated numbers together?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52250268/