Fast reading of blobs from SQLite using simpleQueryForBlobFileDescriptor?


Problem description


I am reading blobs with sizes between 100 KB and 1000 KB from SQLite in my Android app using the following code:

public Object getCachefromDb(String sIdentifier){

    // Parameterized query avoids quoting problems and SQL injection
    String sSQL = "SELECT cache FROM cachetable WHERE identifier = ?";

    Cursor c = null;
    try {
        c = connection_chronica.rawQuery(sSQL, new String[]{ sIdentifier });
    } catch (SQLiteException e) {
        Log.v("SQLite Exception", e.getMessage());
    }

    // Guard against a failed query or a missing row before reading the blob
    if (c == null || !c.moveToFirst()) {
        return null;
    }
    Log.v("DEBUG load Cache", "sIdentifier : " + sIdentifier);

    byte[] bData = null;
    try {
        bData = c.getBlob(0);
    } catch (Exception e) {
        e.printStackTrace();
    }

    Object o = null;
    if (bData != null) {
        ByteArrayInputStream bis = new ByteArrayInputStream(bData);
        try {
            ObjectInputStream ois = new ObjectInputStream(bis);
            o = ois.readObject();
            ois.close();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }

    c.close();
    return o;

}
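For reference, the `getBlob()` path above simply deserializes a Java object from the raw bytes stored in the column. A minimal, Android-free sketch of that round trip (the `BlobCodec` class name and the sample `HashMap` payload are illustrative, not from the original code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

public class BlobCodec {

    // Serialize a Serializable object into the byte[] form stored in the BLOB column
    static byte[] toBlob(Serializable value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(value);
        oos.close();
        return bos.toByteArray();
    }

    // Mirror of the Cursor.getBlob() + ObjectInputStream path in getCachefromDb()
    static Object fromBlob(byte[] blob) throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(blob));
        try {
            return ois.readObject();
        } finally {
            ois.close();
        }
    }

    public static void main(String[] args) throws Exception {
        HashMap<String, String> cache = new HashMap<>();
        cache.put("identifier", "payload");
        byte[] blob = toBlob(cache);       // what would be written to the cache column
        Object restored = fromBlob(blob);  // what getCachefromDb() returns
        System.out.println(restored);
    }
}
```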

I would like to optimize the read speed, and I found articles mentioning simpleQueryForBlobFileDescriptor.

My question: does this help me read BLOBs faster, and if so, how do I use it?

Example from other posts:

SQLiteStatement get = mDb.compileStatement(
    "SELECT blobColumn" + 
    " FROM tableName" +
    " WHERE _id = 1" +
    " LIMIT 1"
);

ParcelFileDescriptor result = get.simpleQueryForBlobFileDescriptor();
FileInputStream fis = new FileInputStream(result.getFileDescriptor()); // read like any other
Solution

My test results say it is slower.

After extensive testing I found that using simpleQueryForBlobFileDescriptor is slower. See the following code. My old code, for example, reads a blob in 390 milliseconds, while the new code with simpleQueryForBlobFileDescriptor reads the same blob in 805 ms. I had read somewhere that simpleQueryForBlobFileDescriptor should be very fast for blob reads, but that does not seem to hold in my tests. Perhaps I am not doing it properly? (I hope so.) Any other hints are welcome. One plausible explanation: the descriptor returned by simpleQueryForBlobFileDescriptor points at an in-memory copy of the blob, so for a purely in-process read it adds a copy step compared with Cursor.getBlob().

public Object getCachefromDb_old(String sIdentifier){

    Log.v("DEBUG LOAD BLOB", "Start : " + sIdentifier);
    Object o = null;
    try {
        // Build the SQL statement for this identifier
        String sSQL = "SELECT cache FROM cachetable WHERE identifier = '" + sIdentifier + "'";
        SQLiteStatement get = connection_chronica.compileStatement(sSQL);
        ParcelFileDescriptor result = get.simpleQueryForBlobFileDescriptor();
        FileInputStream fis = new FileInputStream(result.getFileDescriptor());
        ObjectInputStream inStream = new ObjectInputStream(fis);
        o = inStream.readObject();
        inStream.close();
    } catch (IOException e1) {
        e1.printStackTrace();
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    }
    Log.v("DEBUG LOAD BLOB", "End : " + sIdentifier);

    return o;

}
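The timing gap is consistent with the extra copy a file descriptor implies. A plain-JVM sketch (no Android APIs) that contrasts an in-memory read with a read through a temporary file, timed with System.nanoTime() — the file name, blob size, and fill byte are illustrative assumptions:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Arrays;

public class BlobReadCompare {

    // Read the whole stream-backed blob, as the FileInputStream path must do
    static byte[] readFully(FileInputStream fis, int size) throws IOException {
        byte[] buf = new byte[size];
        int off = 0;
        while (off < size) {
            int n = fis.read(buf, off, size - off);
            if (n < 0) break;
            off += n;
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        byte[] blob = new byte[500 * 1024]; // ~500 KB, mid-range of the 100-1000 KB blobs
        Arrays.fill(blob, (byte) 7);

        // getBlob()-like path: the bytes are already in process memory
        long t0 = System.nanoTime();
        byte[] direct = blob.clone();
        long directNs = System.nanoTime() - t0;

        // descriptor-like path: the bytes pass through a file and are copied back in
        File tmp = File.createTempFile("blob", ".bin");
        tmp.deleteOnExit();
        FileOutputStream fos = new FileOutputStream(tmp);
        fos.write(blob);
        fos.close();

        long t1 = System.nanoTime();
        FileInputStream fis = new FileInputStream(tmp);
        byte[] viaFd = readFully(fis, blob.length);
        fis.close();
        long fdNs = System.nanoTime() - t1;

        System.out.println("direct copy : " + directNs / 1000 + " us");
        System.out.println("via fd      : " + fdNs / 1000 + " us");
        System.out.println("identical   : " + Arrays.equals(direct, viaFd));
    }
}
```

This is only a rough analogy for the Android paths, but it makes the extra copy step visible and measurable.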
