I am writing an application that processes large numbers of integers from a binary file (up to 50 MB). I need to do this as quickly as possible, and the main performance issue is disk access time: since I do a lot of reads from disk, optimizing read time generally improves the application's performance.

Until now I believed that the fewer chunks I split the file into (i.e. fewer reads / larger read sizes), the faster the application would run. This is because an HDD, due to its mechanical nature, is very slow at seeking, i.e. locating the start of a block. However, once it has found the start of the block to read, it should perform the actual read fairly quickly.

That is, until I ran this test:


  Old test removed due to problems with the HDD cache


New test (the HDD cache does not help here, because the file is large (1 GB) and I access random locations within it):

    int mega = 1024 * 1024;
    int giga = 1024 * 1024 * 1024;
    byte[] bigBlock = new byte[mega];
    int hundredKilo = mega / 10; // ~100 KB (104,857 bytes)
    byte[][] smallBlocks = new byte[10][hundredKilo];
    String location = "C:\\Users\\Vladimir\\Downloads\\boom.avi";
    RandomAccessFile raf;
    FileInputStream f;
    long start;
    long end;
    int position;
    java.util.Random rand = new java.util.Random();
    int bigBufferTotalReadTime = 0;
    int smallBufferTotalReadTime = 0;

    for (int j = 0; j < 100; j++)
    {
        position = rand.nextInt(giga);
        raf = new RandomAccessFile(location, "r");
        raf.seek((long) position);
        f = new FileInputStream(raf.getFD());
        start = System.currentTimeMillis();
        f.read(bigBlock); // note: read() may return fewer bytes than requested
        end = System.currentTimeMillis();
        bigBufferTotalReadTime += end - start;
        f.close();
        raf.close();
    }

    for (int j = 0; j < 100; j++)
    {
        position = rand.nextInt(giga);
        raf = new RandomAccessFile(location, "r");
        raf.seek((long) position);
        f = new FileInputStream(raf.getFD());
        start = System.currentTimeMillis();
        for (int i = 0; i < 10; i++)
        {
            f.read(smallBlocks[i]); // note: read() may return fewer bytes than requested
        }
        end = System.currentTimeMillis();
        smallBufferTotalReadTime += end - start;
        f.close();
        raf.close();
    }

    System.out.println("Average performance of small buffer: " + (smallBufferTotalReadTime / 100));
    System.out.println("Average performance of big buffer: " + (bigBufferTotalReadTime / 100));


Results:
Average for the small buffers: 35 ms
Average for the big buffer: 40 ms?!
(Tried on both Linux and Windows; in both cases the larger block size led to a longer read time. Why?)

After running this test many times, I realized that, for some magical reason, reading one big block takes longer on average than reading 10 smaller blocks sequentially. I thought it might be the result of Windows being too smart and trying to optimize something in its file system, so I ran the same code on Linux and, to my surprise, got the same result.

I have no idea why this happens. Can anyone give me a hint? What is the optimal block size in this case?

Kind regards

Best Answer

After the first read, the data will be in the disk cache, so the second read should be faster. That means you need to run first whichever test you believe is faster. ;)

If you have 50 MB of memory, you should be able to read the entire file in one go.
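For a 50 MB file, a single call can pull everything into memory and the integers can then be decoded from the resulting array. A minimal sketch, assuming the file holds big-endian 32-bit integers (the sample data here is made up so the snippet is self-contained):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadWholeFile {
    public static void main(String[] args) throws IOException {
        // Create a small sample file so the sketch is runnable on its own.
        Path path = Files.createTempFile("ints", ".bin");
        ByteBuffer out = ByteBuffer.allocate(3 * Integer.BYTES);
        out.putInt(1).putInt(2).putInt(3); // big-endian by default
        Files.write(path, out.array());

        // A single read pulls the entire file into memory...
        byte[] all = Files.readAllBytes(path);
        // ...and a buffer view decodes the array as big-endian ints.
        IntBuffer ints = ByteBuffer.wrap(all).asIntBuffer();
        long sum = 0;
        while (ints.hasRemaining())
            sum += ints.get();
        System.out.println(sum); // prints 6
        Files.delete(path);
    }
}
```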



package com.google.code.java.core.files;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class FileReadingMain {
    public static void main(String... args) throws IOException {
        File temp = File.createTempFile("deleteme", "zeros");
        temp.deleteOnExit(); // clean up the 50 MB scratch file on exit
        FileOutputStream fos = new FileOutputStream(temp);
        fos.write(new byte[50 * 1024 * 1024]);
        fos.close();

        for (int i = 0; i < 3; i++)
            for (int blockSize = 1024 * 1024; blockSize >= 512; blockSize /= 2) {
                readFileNIO(temp, blockSize);
                readFile(temp, blockSize);
            }
    }

    private static void readFile(File temp, int blockSize) throws IOException {
        long start = System.nanoTime();
        byte[] bytes = new byte[blockSize];
        int r;
        for (r = 0; System.nanoTime() - start < 2e9; r++) {
            FileInputStream fis = new FileInputStream(temp);
            while (fis.read(bytes) > 0) ;
            fis.close();
        }
        long time = System.nanoTime() - start;
        System.out.printf("IO: Reading took %.3f ms using %,d byte blocks%n", time / r / 1e6, blockSize);
    }

    private static void readFileNIO(File temp, int blockSize) throws IOException {
        long start = System.nanoTime();
        ByteBuffer bytes = ByteBuffer.allocateDirect(blockSize);
        int r;
        for (r = 0; System.nanoTime() - start < 2e9; r++) {
            FileChannel fc = new FileInputStream(temp).getChannel();
            while (fc.read(bytes) > 0) {
                bytes.clear();
            }
            fc.close();
        }
        long time = System.nanoTime() - start;
        System.out.printf("NIO: Reading took %.3f ms using %,d byte blocks%n", time / r / 1e6, blockSize);
    }
}


On my laptop this prints:

NIO: Reading took 57.255 ms using 1,048,576 byte blocks
IO: Reading took 112.943 ms using 1,048,576 byte blocks
NIO: Reading took 48.860 ms using 524,288 byte blocks
IO: Reading took 78.002 ms using 524,288 byte blocks
NIO: Reading took 41.474 ms using 262,144 byte blocks
IO: Reading took 61.744 ms using 262,144 byte blocks
NIO: Reading took 41.336 ms using 131,072 byte blocks
IO: Reading took 56.264 ms using 131,072 byte blocks
NIO: Reading took 42.184 ms using 65,536 byte blocks
IO: Reading took 64.700 ms using 65,536 byte blocks
NIO: Reading took 41.595 ms using 32,768 byte blocks <= fastest for NIO
IO: Reading took 49.385 ms using 32,768 byte blocks <= fastest for IO
NIO: Reading took 49.676 ms using 16,384 byte blocks
IO: Reading took 59.731 ms using 16,384 byte blocks
NIO: Reading took 55.596 ms using 8,192 byte blocks
IO: Reading took 74.191 ms using 8,192 byte blocks
NIO: Reading took 77.148 ms using 4,096 byte blocks
IO: Reading took 84.943 ms using 4,096 byte blocks
NIO: Reading took 104.242 ms using 2,048 byte blocks
IO: Reading took 112.768 ms using 2,048 byte blocks
NIO: Reading took 177.214 ms using 1,024 byte blocks
IO: Reading took 185.006 ms using 1,024 byte blocks
NIO: Reading took 303.164 ms using 512 byte blocks
IO: Reading took 316.487 ms using 512 byte blocks


It appears the optimal read size may be 32 KB. Note: since the file is entirely in the disk cache, this may not be the optimal size for a file that is actually read from disk.
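A buffer in that range is easy to apply without restructuring the reading code: wrap the stream in a BufferedInputStream so every OS-level read is one large block, however small the individual readInt() calls are. A minimal sketch (the temp file and its data are placeholders, and 32 KB is just the value that happened to win on this machine):

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class BufferedIntRead {
    public static void main(String[] args) throws IOException {
        // Write a small sample file of big-endian ints (placeholder data).
        File file = File.createTempFile("sample-ints", ".bin");
        file.deleteOnExit();
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(file))) {
            for (int i = 1; i <= 1000; i++) out.writeInt(i);
        }

        // The 32 KB buffer means the OS sees large sequential reads even
        // though the application consumes the file four bytes at a time.
        long sum = 0;
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(file), 32 * 1024))) {
            while (true) {
                try { sum += in.readInt(); }
                catch (EOFException e) { break; }
            }
        }
        System.out.println(sum); // 1 + 2 + ... + 1000 = 500500
    }
}
```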
