This article describes how to run and test the Stanford CoreNLP example. Hopefully it will be a useful reference for anyone running into the same problem.

Problem description

I downloaded the Stanford CoreNLP package and tried to test it on my machine.

Using the command: java -cp "*" -mx1g edu.stanford.nlp.sentiment.SentimentPipeline -file input.txt

I got a sentiment result in the form of positive or negative. input.txt contains the sentence to be tested.
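For reference, input.txt is just a plain text file holding the sentence(s) to analyze; a hypothetical example (not from the original post) might be:

The new phone looks great, but the battery life is terrible.

SentimentPipeline then prints a sentiment label (positive or negative, as described above) for each sentence it finds.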

Another command, java -cp stanford-corenlp-3.3.0.jar;stanford-corenlp-3.3.0-models.jar;xom.jar;joda-time.jar -Xmx600m edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,parse -file input.txt, gives the following output when executed:

H:\Drive E\Stanford\stanfor-corenlp-full-2013~>java -cp stanford-corenlp-3.3.0.jar;stanford-corenlp-3.3.0-models.jar;xom.jar;joda-time.jar -Xmx600m edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,parse -file input.txt
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [36.6 sec].
Adding annotator lemma
Adding annotator parse
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [13.7 sec].

Ready to process: 1 files, skipped 0, total 1
Processing file H:\Drive E\Stanford\stanfor-corenlp-full-2013~\input.txt ... writing to H:\Drive E\Stanford\stanfor-corenlp-full-2013~\input.txt.xml {
  Annotating file H:\Drive E\Stanford\stanfor-corenlp-full-2013~\input.txt [13.681 seconds]
} [20.280 seconds]
Processed 1 documents
Skipped 0 documents, error annotating 0 documents
Annotation pipeline timing information:
PTBTokenizerAnnotator: 0.4 sec.
WordsToSentencesAnnotator: 0.0 sec.
POSTaggerAnnotator: 1.8 sec.
MorphaAnnotator: 2.2 sec.
ParserAnnotator: 9.1 sec.
TOTAL: 13.6 sec. for 10 tokens at 0.7 tokens/sec.
Pipeline setup: 58.2 sec.
Total time for StanfordCoreNLP pipeline: 79.6 sec.

H:\Drive E\Stanford\stanfor-corenlp-full-2013~>

I could not make much of this; there was no informative result.

I have an example:

import java.io.*;
import java.util.*;

import edu.stanford.nlp.io.*;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.util.*;

public class StanfordCoreNlpDemo {

  public static void main(String[] args) throws IOException {
    PrintWriter out;
    if (args.length > 1) {
      out = new PrintWriter(args[1]);
    } else {
      out = new PrintWriter(System.out);
    }
    PrintWriter xmlOut = null;
    if (args.length > 2) {
      xmlOut = new PrintWriter(args[2]);
    }

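    // Note: with no Properties supplied, this constructor loads the default annotator
    // set (typically tokenize, ssplit, pos, lemma, ner, parse, dcoref), which pulls in
    // several large models and therefore needs a generous heap.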
    StanfordCoreNLP pipeline = new StanfordCoreNLP();
    Annotation annotation;
    if (args.length > 0) {
      annotation = new Annotation(IOUtils.slurpFileNoExceptions(args[0]));
    } else {
      annotation = new Annotation("Kosgi Santosh sent an email to Stanford University. He didn't get a reply.");
    }

    pipeline.annotate(annotation);
    pipeline.prettyPrint(annotation, out);
    if (xmlOut != null) {
      pipeline.xmlPrint(annotation, xmlOut);
    }
    // An Annotation is a Map and you can get and use the various analyses individually.
    // For instance, this gets the parse tree of the first sentence in the text.
    List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
    if (sentences != null && sentences.size() > 0) {
      CoreMap sentence = sentences.get(0);
      Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
      out.println();
      out.println("The first sentence parsed is:");
      tree.pennPrint(out);
    }
  }

}

I tried to execute it in NetBeans with the necessary libraries included, but it always gets stuck partway through or throws: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

I set the memory size in Properties / Run / VM Options.
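As an aside (this value is an assumption, not from the original post), the kind of setting that goes into that VM Options box is a heap flag such as:

-Xmx2g

since the parser and the other default models generally need well more than the 600m used in the commands above.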

Any idea how I can run the above Java example from the command line?

I want to get the sentiment score for the example.

UPDATE

Output of: java -cp "*" -mx1g edu.stanford.nlp.sentiment.SentimentPipeline -file input.txt

Output of: java -cp stanford-corenlp-3.3.0.jar;stanford-corenlp-3.3.0-models.jar;xom.jar;joda-time.jar -Xmx600m edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,parse -file input.txt

Recommended answer

You need to add the "sentiment" annotator to the list of annotators:

-annotators tokenize,ssplit,pos,lemma,parse,sentiment
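Combined with the invocation from the question, the full command would look roughly like this (the wildcard classpath and 2 GB heap are reasonable assumptions for the 3.3.0 distribution, not values from the original answer):

java -cp "*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,parse,sentiment -file input.txt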

This will add a "sentiment" property to each sentence node in your XML output.
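If you want the sentiment programmatically (the "score" the question asks about) rather than by reading the XML, a minimal sketch along these lines should work. It assumes a reasonably recent CoreNLP release, where the relevant keys are SentimentCoreAnnotations.SentimentClass / SentimentAnnotatedTree and RNNCoreAnnotations.getPredictedClass; in the 3.3.0 jars mentioned above the inner class names were slightly different, so treat this as an illustration rather than a drop-in:

import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.util.CoreMap;

public class SentimentDemo {

  public static void main(String[] args) {
    // Build a pipeline that ends with the sentiment annotator.
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,parse,sentiment");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    Annotation annotation = new Annotation("Kosgi Santosh sent an email to Stanford University. He didn't get a reply.");
    pipeline.annotate(annotation);

    // Each sentence now carries a sentiment label and an annotated tree whose root
    // holds a predicted class from 0 (very negative) to 4 (very positive).
    for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
      String label = sentence.get(SentimentCoreAnnotations.SentimentClass.class);
      Tree tree = sentence.get(SentimentCoreAnnotations.SentimentAnnotatedTree.class);
      int score = RNNCoreAnnotations.getPredictedClass(tree);
      System.out.println(label + " (" + score + "): " + sentence.get(CoreAnnotations.TextAnnotation.class));
    }
  }

}

To compile and run a class like this (or the StanfordCoreNlpDemo above) from the command line instead of NetBeans, something along these lines is typical from inside the unpacked CoreNLP directory on Windows, with the heap size again being an assumption chosen to avoid the OutOfMemoryError mentioned in the question:

javac -cp "*" SentimentDemo.java
java -cp ".;*" -Xmx2g SentimentDemo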

That is all for this article on running and testing the Stanford CoreNLP example. We hope the recommended answer is helpful, and thank you for your support!
