I'm having a problem running HBaseTestingUtility in the IntelliJ IDE. I can see the following error, which may be the result of the file names being too long:

16/03/14 22:45:13 WARN datanode.DataNode: IOException in BlockReceiver.run():
java.io.IOException: Failed to move meta file for ReplicaBeingWritten, blk_1073741825_1001, RBW
getNumBytes()     = 7
getBytesOnDisk()  = 7
getVisibleLength()= 7
getVolume()       = C:\Users\user1\Documents\work\Repos\hadoop-analys\reporting\mrkts-surveillance\target\test-data\9654a646-e923-488a-9e20-46396fd15292\dfscluster_6b264e6b-0218-4f30-ad5b-72e838940b1e\dfs\data\data1\current
getBlockFile()    = C:\Users\user1\Documents\work\Repos\hadoop-analys\reporting\mrkts-surveillance\target\test-data\9654a646-e923-488a-9e20-46396fd15292\dfscluster_6b264e6b-0218-4f30-ad5b-72e838940b1e\dfs\data\data1\current\BP-429386217-192.168.1.110-1457991908038\current\rbw\blk_1073741825
bytesAcked=7
bytesOnDisk=7 from C:\Users\user1\Documents\work\Repos\hadoop-analys\reporting\mrkts-surveillance\target\test-data\9654a646-e923-488a-9e20-46396fd15292\dfscluster_6b264e6b-0218-4f30-ad5b-72e838940b1e\dfs\data\data1\current\BP-429386217-192.168.1.110-1457991908038\current\rbw\blk_1073741825_1001.meta to    C:\Users\user1\Documents\work\Repos\hadoop-analys\reporting\mrkts-surveillance\target\test-data\9654a646-e923-488a-9e20-46396fd15292\dfscluster_6b264e6b-0218-4f30-ad5b-72e838940b1e\dfs\data\data1\current\BP-429386217-192.168.1.110-1457991908038\current\finalized\subdir0\subdir0\blk_1073741825_1001.meta
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:615)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addBlock(BlockPoolSlice.java:250)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlock(FsVolumeImpl.java:229)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1119)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1100)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.finalizeBlock(BlockReceiver.java:1293)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1233)
at java.lang.Thread.run(Thread.java:745)
Caused by: 3: The system cannot find the path specified.

Any ideas how I can point HBaseTestingUtility at a different base directory so it doesn't use this huge starting path?

Thanks,

Best answer

You can use test.build.data.basedirectory.

See getDataTestDir in HBaseCommonTestingUtility:

/**
 * System property key to get base test directory value
 */
public static final String BASE_TEST_DIRECTORY_KEY =
  "test.build.data.basedirectory";

/**
 * @return Where to write test data on local filesystem, specific to
 * the test. Useful for tests that do not use a cluster.
 * Creates it if it does not exist already.
 */
public Path getDataTestDir() {
  if (this.dataTestDir == null) {
    setupDataTestDir();
  }
  return new Path(this.dataTestDir.getAbsolutePath());
}
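
For example, a minimal sketch of setting that property before the mini-cluster is started, so the DFS test data ends up under a short path (the path C:/hbase-test, the class name, and the main-method setup are only illustrative assumptions, not part of the HBase API):

import org.apache.hadoop.hbase.HBaseTestingUtility;

public class ShortTestPathExample {
  public static void main(String[] args) throws Exception {
    // Set BASE_TEST_DIRECTORY_KEY ("test.build.data.basedirectory") to a short
    // path *before* the utility is created, so getDataTestDir() resolves under it.
    System.setProperty("test.build.data.basedirectory", "C:/hbase-test");

    HBaseTestingUtility utility = new HBaseTestingUtility();
    utility.startMiniCluster();        // DFS/HBase data now lands under C:/hbase-test
    try {
      // ... run test code against the mini-cluster here ...
    } finally {
      utility.shutdownMiniCluster();   // stop the cluster and clean up its temp files
    }
  }
}

The key point is that the property must be set before the test directory is first resolved; setting it after startMiniCluster() has no effect on where the data lands.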

A similar question, "windows - How to change the HBase base directory in HBaseTestingUtility", can be found on Stack Overflow: https://stackoverflow.com/questions/35999169/
