I implemented the code from http://fiware-cosmos.readthedocs.io/en/latest/user_and_programmer_manual/batch/using_hadoop_and_ecosystem/#top, which involves passing a "regex" argument on the MapReduce program's command line. The job runs and prints:

16/06/28 17:19:47 INFO input.FileInputFormat: Total input paths to process : 1
16/06/28 17:19:47 INFO mapreduce.JobSubmitter: number of splits:1
16/06/28 17:19:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1448020964278_0633
16/06/28 17:19:49 INFO impl.YarnClientImpl: Submitted application application_1448020964278_0633
16/06/28 17:19:49 INFO mapreduce.Job: The url to track the job: http://co2-hdpmaster.irit.fr:8088/proxy/application_1448020964278_0633/
16/06/28 17:19:49 INFO mapreduce.Job: Running job: job_1448020964278_0633
16/06/28 17:19:59 INFO mapreduce.Job: Job job_1448020964278_0633 running in uber mode : false
16/06/28 17:19:59 INFO mapreduce.Job:  map 0% reduce 0%
16/06/28 17:20:10 INFO mapreduce.Job:  map 100% reduce 0%
16/06/28 17:20:19 INFO mapreduce.Job:  map 100% reduce 100%
16/06/28 17:20:20 INFO mapreduce.Job: Job job_1448020964278_0633 completed successfully
16/06/28 17:20:20 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=6
                FILE: Number of bytes written=230845
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=916
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=7478
                Total time spent by all reduces in occupied slots (ms)=7151
                Total time spent by all map tasks (ms)=7478
                Total time spent by all reduce tasks (ms)=7151
                Total vcore-seconds taken by all map tasks=7478
                Total vcore-seconds taken by all reduce tasks=7151
                Total megabyte-seconds taken by all map tasks=22972416
                Total megabyte-seconds taken by all reduce tasks=21967872
        Map-Reduce Framework
                Map input records=17
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=125
                Combine input records=0
                Combine output records=0
                Reduce input groups=0
                Reduce shuffle bytes=6
                Reduce input records=0
                Reduce output records=0
                Spilled Records=0
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=114
                CPU time spent (ms)=2120
                Physical memory (bytes) snapshot=1398767616
                Virtual memory (bytes) snapshot=6716833792
                Total committed heap usage (bytes)=2156396544
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=791
        File Output Format Counters
                Bytes Written=0

But when I try to display the contents of the output file with `hadoop dfs -cat output/part-r-00000`, it returns nothing.
Can someone explain this?

Best Answer

Your job did not produce any output:

Map output records=0
Reduce output records=0
HDFS: Number of bytes written=0

So the output file is most likely empty. Note that the counters show 17 map input records but 0 map output records, which suggests your regex matched none of the input lines. You should check the file's size on HDFS to confirm this.
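A quick way to confirm from the command line (a sketch; `output` is assumed to be the job's output directory on HDFS, as in your `-cat` command):

```shell
# List the output directory: the size column for part-r-00000 should read 0,
# matching the "HDFS: Number of bytes written=0" counter in the job log.
hdfs dfs -ls output

# Or report the file's size directly in human-readable form.
hdfs dfs -du -h output/part-r-00000
```

`hdfs dfs` is the current form of the deprecated `hadoop dfs` command used in the question; both accept the same `-ls`, `-du`, and `-cat` options.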

A similar question about java - "hadoop dfs -cat output" returning nothing was found on Stack Overflow: https://stackoverflow.com/questions/38098470/