Problem description
I am unable to understand the difference between the two. I do know that word_tokenize uses the Penn Treebank conventions for tokenization, but I can find nothing on TweetTokenizer. For which sort of data should I use TweetTokenizer rather than word_tokenize?
Recommended answer
Both tokenizers split a given sentence into words in much the same way, but TweetTokenizer is specialized for Twitter-style text: it keeps hashtags intact, while word_tokenize splits them apart.
I hope the example below will clear up all your doubts.
from nltk.tokenize import TweetTokenizer, word_tokenize

tt = TweetTokenizer()
tweet = "This is a cooool #dummysmiley: :-) :-P <3 and some arrows < > -> <-- @remy: This is waaaaayyyy too much for you!!!!!!"

# Compare both tokenizers on the same tweet
print(tt.tokenize(tweet))
print(word_tokenize(tweet))
# output
# ['This', 'is', 'a', 'cooool', '#dummysmiley', ':', ':-)', ':-P', '<3', 'and', 'some', 'arrows', '<', '>', '->', '<--', '@remy', ':', 'This', 'is', 'waaaaayyyy', 'too', 'much', 'for', 'you', '!', '!', '!', '!', '!', '!']
# ['This', 'is', 'a', 'cooool', '#', 'dummysmiley', ':', ':', '-', ')', ':', '-P', '<', '3', 'and', 'some', 'arrows', '<', '>', '-', '>', '<', '--', '@', 'remy', ':', 'This', 'is', 'waaaaayyyy', 'too', 'much', 'for', 'you', '!', '!', '!', '!', '!', '!']
You can see that word_tokenize has split #dummysmiley into '#' and 'dummysmiley', while TweetTokenizer kept it whole as '#dummysmiley'. TweetTokenizer is built mainly for analyzing tweets. You can learn more about NLTK's tokenizers in the nltk.tokenize documentation.