In my Android photo-collage app I load images with Volley.
To avoid re-downloading the same images over and over, I use the DiskBasedCache that ships with Volley, configured with 50 MB of storage.
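For context, a 50 MB DiskBasedCache gets wired into Volley roughly like this (a minimal sketch; the cache directory name and the HurlStack-based network are assumptions, not necessarily my exact setup):

File cacheDir = new File(context.getCacheDir(), "volley");
Cache cache = new DiskBasedCache(cacheDir, 50 * 1024 * 1024); // 50 MB
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(cache, network);
queue.start(); // the queue's CacheDispatcher thread calls cache.initialize() here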
The last time I checked, the DiskBasedCache contained roughly 1,000 cache entries.
When my app starts, Volley calls mCache.initialize(), which takes about 10 seconds (!) on my Galaxy S4 to do the following:
I find that reading 1,000+ files at startup is not a very efficient way to load a cache index! :-)
From volley/toolbox/DiskBasedCache.java:
@Override
public synchronized void initialize() {
    if (!mRootDirectory.exists()) {
        if (!mRootDirectory.mkdirs()) {
            VolleyLog.e("Unable to create cache dir %s", mRootDirectory.getAbsolutePath());
        }
        return;
    }
    File[] files = mRootDirectory.listFiles();
    if (files == null) {
        return;
    }
    for (File file : files) {
        FileInputStream fis = null;
        try {
            fis = new FileInputStream(file);
            CacheHeader entry = CacheHeader.readHeader(fis);
            entry.size = file.length();
            putEntry(entry.key, entry);
        } catch (IOException e) {
            if (file != null) {
                file.delete();
            }
        } finally {
            try {
                if (fis != null) {
                    fis.close();
                }
            } catch (IOException ignored) { }
        }
    }
}
I'm looking for a fast and scalable solution - perhaps an alternative DiskBasedCache implementation, or suggestions on how to improve the Volley library itself.
Update (2014-01-06):
I noticed that the Volley cache performs a lot of small (1-byte) I/O reads and writes. I cloned DiskBasedCache.java and wrapped all of its FileInputStreams and FileOutputStreams in BufferedInputStream/BufferedOutputStream, and this optimization alone made things 3-10x faster for me.
Compared with writing a new disk cache built around a central index file, this modification carries a much lower risk of introducing bugs.
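The change boils down to wrapping every raw file stream; for example, the corresponding lines from the two versions of initialize() are:

// Stock DiskBasedCache: readHeader() pulls header fields straight off the file,
// so every readInt()/readLong() ends up issuing several one-byte reads.
fis = new FileInputStream(file);

// BufferedDiskBasedCache (below): the wrapper batches those tiny reads into
// block-sized I/O; put() does the same for writes with BufferedOutputStream.
fis = new BufferedInputStream(new FileInputStream(file));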
Update (2014-01-10):
Here is the new class I am using now, BufferedDiskBasedCache.java:
package no.ludde.android.ds.android.volley;
/*
* Copyright (C) 2011 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import android.os.SystemClock;
import com.android.volley.Cache;
import com.android.volley.VolleyLog;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
/**
* Cache implementation that caches files directly onto the hard disk in the specified
* directory. The default disk usage size is 5MB, but is configurable.
*/
public class BufferedDiskBasedCache implements Cache {
    /** Map of the Key, CacheHeader pairs */
    private final Map<String, CacheHeader> mEntries =
            new LinkedHashMap<String, CacheHeader>(16, .75f, true);
    /** Total amount of space currently used by the cache in bytes. */
    private long mTotalSize = 0;
    /** The root directory to use for the cache. */
    private final File mRootDirectory;
    /** The maximum size of the cache in bytes. */
    private final int mMaxCacheSizeInBytes;
    /** Default maximum disk usage in bytes. */
    private static final int DEFAULT_DISK_USAGE_BYTES = 5 * 1024 * 1024;
    /** High water mark percentage for the cache */
    private static final float HYSTERESIS_FACTOR = 0.9f;
    /** Magic number for current version of cache file format. */
    private static final int CACHE_MAGIC = 0x20120504;
    /**
     * Constructs an instance of the DiskBasedCache at the specified directory.
     * @param rootDirectory The root directory of the cache.
     * @param maxCacheSizeInBytes The maximum size of the cache in bytes.
     */
    public BufferedDiskBasedCache(File rootDirectory, int maxCacheSizeInBytes) {
        mRootDirectory = rootDirectory;
        mMaxCacheSizeInBytes = maxCacheSizeInBytes;
    }
    /**
     * Constructs an instance of the DiskBasedCache at the specified directory using
     * the default maximum cache size of 5MB.
     * @param rootDirectory The root directory of the cache.
     */
    public BufferedDiskBasedCache(File rootDirectory) {
        this(rootDirectory, DEFAULT_DISK_USAGE_BYTES);
    }
    /**
     * Clears the cache. Deletes all cached files from disk.
     */
    @Override
    public synchronized void clear() {
        File[] files = mRootDirectory.listFiles();
        if (files != null) {
            for (File file : files) {
                file.delete();
            }
        }
        mEntries.clear();
        mTotalSize = 0;
        VolleyLog.d("Cache cleared.");
    }
    /**
     * Returns the cache entry with the specified key if it exists, null otherwise.
     */
    @Override
    public synchronized Entry get(String key) {
        CacheHeader entry = mEntries.get(key);
        // if the entry does not exist, return.
        if (entry == null) {
            return null;
        }
        File file = getFileForKey(key);
        CountingInputStream cis = null;
        try {
            cis = new CountingInputStream(new BufferedInputStream(new FileInputStream(file)));
            CacheHeader.readHeader(cis); // eat header
            byte[] data = streamToBytes(cis, (int) (file.length() - cis.bytesRead));
            return entry.toCacheEntry(data);
        } catch (IOException e) {
            VolleyLog.d("%s: %s", file.getAbsolutePath(), e.toString());
            remove(key);
            return null;
        } finally {
            if (cis != null) {
                try {
                    cis.close();
                } catch (IOException ioe) {
                    return null;
                }
            }
        }
    }
    /**
     * Initializes the DiskBasedCache by scanning for all files currently in the
     * specified root directory. Creates the root directory if necessary.
     */
    @Override
    public synchronized void initialize() {
        if (!mRootDirectory.exists()) {
            if (!mRootDirectory.mkdirs()) {
                VolleyLog.e("Unable to create cache dir %s", mRootDirectory.getAbsolutePath());
            }
            return;
        }
        File[] files = mRootDirectory.listFiles();
        if (files == null) {
            return;
        }
        for (File file : files) {
            BufferedInputStream fis = null;
            try {
                fis = new BufferedInputStream(new FileInputStream(file));
                CacheHeader entry = CacheHeader.readHeader(fis);
                entry.size = file.length();
                putEntry(entry.key, entry);
            } catch (IOException e) {
                if (file != null) {
                    file.delete();
                }
            } finally {
                try {
                    if (fis != null) {
                        fis.close();
                    }
                } catch (IOException ignored) { }
            }
        }
    }
    /**
     * Invalidates an entry in the cache.
     * @param key Cache key
     * @param fullExpire True to fully expire the entry, false to soft expire
     */
    @Override
    public synchronized void invalidate(String key, boolean fullExpire) {
        Entry entry = get(key);
        if (entry != null) {
            entry.softTtl = 0;
            if (fullExpire) {
                entry.ttl = 0;
            }
            put(key, entry);
        }
    }
    /**
     * Puts the entry with the specified key into the cache.
     */
    @Override
    public synchronized void put(String key, Entry entry) {
        pruneIfNeeded(entry.data.length);
        File file = getFileForKey(key);
        try {
            BufferedOutputStream fos = new BufferedOutputStream(new FileOutputStream(file));
            CacheHeader e = new CacheHeader(key, entry);
            e.writeHeader(fos);
            fos.write(entry.data);
            fos.close();
            putEntry(key, e);
            return;
        } catch (IOException e) {
        }
        boolean deleted = file.delete();
        if (!deleted) {
            VolleyLog.d("Could not clean up file %s", file.getAbsolutePath());
        }
    }
    /**
     * Removes the specified key from the cache if it exists.
     */
    @Override
    public synchronized void remove(String key) {
        boolean deleted = getFileForKey(key).delete();
        removeEntry(key);
        if (!deleted) {
            VolleyLog.d("Could not delete cache entry for key=%s, filename=%s",
                    key, getFilenameForKey(key));
        }
    }
    /**
     * Creates a pseudo-unique filename for the specified cache key.
     * @param key The key to generate a file name for.
     * @return A pseudo-unique filename.
     */
    private String getFilenameForKey(String key) {
        int firstHalfLength = key.length() / 2;
        String localFilename = String.valueOf(key.substring(0, firstHalfLength).hashCode());
        localFilename += String.valueOf(key.substring(firstHalfLength).hashCode());
        return localFilename;
    }
    /**
     * Returns a file object for the given cache key.
     */
    public File getFileForKey(String key) {
        return new File(mRootDirectory, getFilenameForKey(key));
    }
    /**
     * Prunes the cache to fit the amount of bytes specified.
     * @param neededSpace The amount of bytes we are trying to fit into the cache.
     */
    private void pruneIfNeeded(int neededSpace) {
        if ((mTotalSize + neededSpace) < mMaxCacheSizeInBytes) {
            return;
        }
        if (VolleyLog.DEBUG) {
            VolleyLog.v("Pruning old cache entries.");
        }
        long before = mTotalSize;
        int prunedFiles = 0;
        long startTime = SystemClock.elapsedRealtime();
        Iterator<Map.Entry<String, CacheHeader>> iterator = mEntries.entrySet().iterator();
        while (iterator.hasNext()) {
            Map.Entry<String, CacheHeader> entry = iterator.next();
            CacheHeader e = entry.getValue();
            boolean deleted = getFileForKey(e.key).delete();
            if (deleted) {
                mTotalSize -= e.size;
            } else {
                VolleyLog.d("Could not delete cache entry for key=%s, filename=%s",
                        e.key, getFilenameForKey(e.key));
            }
            iterator.remove();
            prunedFiles++;
            if ((mTotalSize + neededSpace) < mMaxCacheSizeInBytes * HYSTERESIS_FACTOR) {
                break;
            }
        }
        if (VolleyLog.DEBUG) {
            VolleyLog.v("pruned %d files, %d bytes, %d ms",
                    prunedFiles, (mTotalSize - before), SystemClock.elapsedRealtime() - startTime);
        }
    }
    /**
     * Puts the entry with the specified key into the cache.
     * @param key The key to identify the entry by.
     * @param entry The entry to cache.
     */
    private void putEntry(String key, CacheHeader entry) {
        if (!mEntries.containsKey(key)) {
            mTotalSize += entry.size;
        } else {
            CacheHeader oldEntry = mEntries.get(key);
            mTotalSize += (entry.size - oldEntry.size);
        }
        mEntries.put(key, entry);
    }
    /**
     * Removes the entry identified by 'key' from the cache.
     */
    private void removeEntry(String key) {
        CacheHeader entry = mEntries.get(key);
        if (entry != null) {
            mTotalSize -= entry.size;
            mEntries.remove(key);
        }
    }
    /**
     * Reads the contents of an InputStream into a byte[].
     * */
    private static byte[] streamToBytes(InputStream in, int length) throws IOException {
        byte[] bytes = new byte[length];
        int count;
        int pos = 0;
        while (pos < length && ((count = in.read(bytes, pos, length - pos)) != -1)) {
            pos += count;
        }
        if (pos != length) {
            throw new IOException("Expected " + length + " bytes, read " + pos + " bytes");
        }
        return bytes;
    }
    /**
     * Handles holding onto the cache headers for an entry.
     */
    // Visible for testing.
    static class CacheHeader {
        /** The size of the data identified by this CacheHeader. (This is not
         * serialized to disk. */
        public long size;
        /** The key that identifies the cache entry. */
        public String key;
        /** ETag for cache coherence. */
        public String etag;
        /** Date of this response as reported by the server. */
        public long serverDate;
        /** TTL for this record. */
        public long ttl;
        /** Soft TTL for this record. */
        public long softTtl;
        /** Headers from the response resulting in this cache entry. */
        public Map<String, String> responseHeaders;
        private CacheHeader() { }
        /**
         * Instantiates a new CacheHeader object
         * @param key The key that identifies the cache entry
         * @param entry The cache entry.
         */
        public CacheHeader(String key, Entry entry) {
            this.key = key;
            this.size = entry.data.length;
            this.etag = entry.etag;
            this.serverDate = entry.serverDate;
            this.ttl = entry.ttl;
            this.softTtl = entry.softTtl;
            this.responseHeaders = entry.responseHeaders;
        }
        /**
         * Reads the header off of an InputStream and returns a CacheHeader object.
         * @param is The InputStream to read from.
         * @throws IOException
         */
        public static CacheHeader readHeader(InputStream is) throws IOException {
            CacheHeader entry = new CacheHeader();
            int magic = readInt(is);
            if (magic != CACHE_MAGIC) {
                // don't bother deleting, it'll get pruned eventually
                throw new IOException();
            }
            entry.key = readString(is);
            entry.etag = readString(is);
            if (entry.etag.equals("")) {
                entry.etag = null;
            }
            entry.serverDate = readLong(is);
            entry.ttl = readLong(is);
            entry.softTtl = readLong(is);
            entry.responseHeaders = readStringStringMap(is);
            return entry;
        }
        /**
         * Creates a cache entry for the specified data.
         */
        public Entry toCacheEntry(byte[] data) {
            Entry e = new Entry();
            e.data = data;
            e.etag = etag;
            e.serverDate = serverDate;
            e.ttl = ttl;
            e.softTtl = softTtl;
            e.responseHeaders = responseHeaders;
            return e;
        }
        /**
         * Writes the contents of this CacheHeader to the specified OutputStream.
         */
        public boolean writeHeader(OutputStream os) {
            try {
                writeInt(os, CACHE_MAGIC);
                writeString(os, key);
                writeString(os, etag == null ? "" : etag);
                writeLong(os, serverDate);
                writeLong(os, ttl);
                writeLong(os, softTtl);
                writeStringStringMap(responseHeaders, os);
                os.flush();
                return true;
            } catch (IOException e) {
                VolleyLog.d("%s", e.toString());
                return false;
            }
        }
    }
    private static class CountingInputStream extends FilterInputStream {
        private int bytesRead = 0;
        private CountingInputStream(InputStream in) {
            super(in);
        }
        @Override
        public int read() throws IOException {
            int result = super.read();
            if (result != -1) {
                bytesRead++;
            }
            return result;
        }
        @Override
        public int read(byte[] buffer, int offset, int count) throws IOException {
            int result = super.read(buffer, offset, count);
            if (result != -1) {
                bytesRead += result;
            }
            return result;
        }
    }
    /*
     * Homebrewed simple serialization system used for reading and writing cache
     * headers on disk. Once upon a time, this used the standard Java
     * Object{Input,Output}Stream, but the default implementation relies heavily
     * on reflection (even for standard types) and generates a ton of garbage.
     */
    /**
     * Simple wrapper around {@link InputStream#read()} that throws EOFException
     * instead of returning -1.
     */
    private static int read(InputStream is) throws IOException {
        int b = is.read();
        if (b == -1) {
            throw new EOFException();
        }
        return b;
    }
    static void writeInt(OutputStream os, int n) throws IOException {
        os.write((n >> 0) & 0xff);
        os.write((n >> 8) & 0xff);
        os.write((n >> 16) & 0xff);
        os.write((n >> 24) & 0xff);
    }
    static int readInt(InputStream is) throws IOException {
        int n = 0;
        n |= (read(is) << 0);
        n |= (read(is) << 8);
        n |= (read(is) << 16);
        n |= (read(is) << 24);
        return n;
    }
    static void writeLong(OutputStream os, long n) throws IOException {
        os.write((byte)(n >>> 0));
        os.write((byte)(n >>> 8));
        os.write((byte)(n >>> 16));
        os.write((byte)(n >>> 24));
        os.write((byte)(n >>> 32));
        os.write((byte)(n >>> 40));
        os.write((byte)(n >>> 48));
        os.write((byte)(n >>> 56));
    }
    static long readLong(InputStream is) throws IOException {
        long n = 0;
        n |= ((read(is) & 0xFFL) << 0);
        n |= ((read(is) & 0xFFL) << 8);
        n |= ((read(is) & 0xFFL) << 16);
        n |= ((read(is) & 0xFFL) << 24);
        n |= ((read(is) & 0xFFL) << 32);
        n |= ((read(is) & 0xFFL) << 40);
        n |= ((read(is) & 0xFFL) << 48);
        n |= ((read(is) & 0xFFL) << 56);
        return n;
    }
    static void writeString(OutputStream os, String s) throws IOException {
        byte[] b = s.getBytes("UTF-8");
        writeLong(os, b.length);
        os.write(b, 0, b.length);
    }
    static String readString(InputStream is) throws IOException {
        int n = (int) readLong(is);
        byte[] b = streamToBytes(is, n);
        return new String(b, "UTF-8");
    }
    static void writeStringStringMap(Map<String, String> map, OutputStream os) throws IOException {
        if (map != null) {
            writeInt(os, map.size());
            for (Map.Entry<String, String> entry : map.entrySet()) {
                writeString(os, entry.getKey());
                writeString(os, entry.getValue());
            }
        } else {
            writeInt(os, 0);
        }
    }
    static Map<String, String> readStringStringMap(InputStream is) throws IOException {
        int size = readInt(is);
        Map<String, String> result = (size == 0)
                ? Collections.<String, String>emptyMap()
                : new HashMap<String, String>(size);
        for (int i = 0; i < size; i++) {
            String key = readString(is).intern();
            String value = readString(is).intern();
            result.put(key, value);
        }
        return result;
    }
}
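Since the class implements com.android.volley.Cache, it is a drop-in replacement for DiskBasedCache. Wiring it up might look roughly like this (a sketch only; the directory name and the manual timing are illustrative, as the RequestQueue normally calls initialize() itself):

File cacheDir = new File(context.getCacheDir(), "volley");
BufferedDiskBasedCache cache = new BufferedDiskBasedCache(cacheDir, 50 * 1024 * 1024);

// Optional: time the startup scan to compare against the stock implementation
// (the queue will run initialize() again on its cache dispatcher thread).
long t0 = SystemClock.elapsedRealtime();
cache.initialize();
VolleyLog.d("cache initialize() took %d ms", SystemClock.elapsedRealtime() - t0);

RequestQueue queue = new RequestQueue(cache, new BasicNetwork(new HurlStack()));
queue.start();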
Accepted answer
Yes, the way DiskBasedCache works, it has to open every single file in initialize(). Which is simply... not a good idea :-(
You need a different implementation that does not open all the files at startup.
Make a copy of DiskBasedCache and change initialize() to:
@Override
public synchronized void initialize() {
    if (!mRootDirectory.exists()) {
        if (!mRootDirectory.mkdirs()) {
            VolleyLog.e("Unable to create cache dir %s", mRootDirectory.getAbsolutePath());
        }
    }
}
and change get() so that it additionally checks whether the file exists on the file system, for example:
@Override
public synchronized Entry get(String key) {
    CacheHeader entry = mEntries.get(key);
    File file = getFileForKey(key);
    if (entry == null && !file.exists()) { // EXTRA CHECK
        // if the entry does not exist, return.
        VolleyLog.d("DrVolleyDiskBasedCache miss for " + key);
        return null;
    }
    ...
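The elided part then reads the entry lazily from disk when the in-memory index misses but the file exists, so nothing has to be scanned at startup. A rough sketch of that path, using the helpers already present in DiskBasedCache (not my exact code; the full version is in the file linked below):

    CountingInputStream cis = null;
    try {
        // Wrap in a BufferedInputStream as in the question's class if desired.
        cis = new CountingInputStream(new FileInputStream(file));
        CacheHeader header = CacheHeader.readHeader(cis);
        byte[] data = streamToBytes(cis, (int) (file.length() - cis.bytesRead));
        if (entry == null) {
            // The file was on disk but not yet in the index: register it lazily.
            header.size = file.length();
            putEntry(key, header);
            entry = header;
        }
        return entry.toCacheEntry(data);
    } catch (IOException e) {
        VolleyLog.d("%s: %s", file.getAbsolutePath(), e.toString());
        remove(key);
        return null;
    } finally {
        try {
            if (cis != null) {
                cis.close();
            }
        } catch (IOException ignored) { }
    }
}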
I use this approach in https://play.google.com/store/apps/details?id=dk.dr.radio and it works very well - its robustness has been tested by around 300,000 users :-)
You can download the full version of the file from https://code.google.com/p/dr-radio-android/source/browse/trunk/DRRadiov3/src/dk/dr/radio/net/volley/DrDiskBasedCache.java (you will have to strip out a few DR Radio-specific bits).