This article describes how to deal with an out-of-memory error when running Hadoop's copyFromLocal, which should be a useful reference for anyone hitting the same problem.

Problem Description

I'm trying to copy a directory containing 1,048,578 files into the HDFS file system, but I get the following error:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2367)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
    at java.lang.StringBuffer.append(StringBuffer.java:237)
    at java.net.URI.appendSchemeSpecificPart(URI.java:1892)
    at java.net.URI.toString(URI.java:1922)
    at java.net.URI.<init>(URI.java:749)
    at org.apache.hadoop.fs.shell.PathData.stringToUri(PathData.java:565)
    at org.apache.hadoop.fs.shell.PathData.<init>(PathData.java:151)
    at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:273)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
    at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
    at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
    at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
    at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
    at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
    at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
    at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
    at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
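
The original post doesn't show the exact command that was run, but judging from the stack trace it would have been of this form (the paths here are hypothetical):

hadoop fs -copyFromLocal /local/dir/with/many/files /user/hadoop/target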

Recommended Answer

The issue lies with the Hadoop client: while recursing through the source directory, the FsShell client builds a PathData object for every file in memory (visible in the stack trace above), so with over a million files the default client heap is exhausted. This was fixed by disabling the JVM's GC overhead limit check and raising the client heap to 4 GB. The following command solved my problem:

export HADOOP_CLIENT_OPTS="-XX:-UseGCOverheadLimit -Xmx4096m"

That concludes this look at the Hadoop copyFromLocal out-of-memory problem; hopefully the recommended answer above helps.
