Problem Description
I'm trying to write and update a PDF document in a BLOB column, but I can only update the BLOB when I write more data than was previously stored. If I try to update the BLOB column with a smaller document, I just get a corrupted PDF.
The BLOB column is first initialized using the empty_blob() function. I wrote the sample Java class below to test this behaviour. I run it the first time with 'true' as the first argument of the main method, so the first row stores a document of about 31 kB and the second row stores a document of about 278 kB. Then I run it with 'false' as the argument, so the two rows should be updated with the documents swapped. The result is that I only get a correct result when I write more data than the existing data.
How can I write a method that writes and updates a BLOB reliably, without worrying about the size of the binary data?
import static org.apache.commons.io.IOUtils.copy;
import java.io.FileInputStream;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import oracle.jdbc.OracleDriver;
import oracle.jdbc.OracleResultSet;
import oracle.sql.BLOB;
import org.apache.commons.lang.ArrayUtils;
/**
* Prerequisites:
* 1) a table named 'x' must exist [create table x (i number, j blob);]
* 2) that table must contain two rows [insert into x (i, j) values (1, empty_blob()); insert into x (i, j) values (2, empty_blob()); commit;]
* 3) download lsp.pdf from http://www.objectmentor.com/resources/articles/lsp.pdf
* 4) download dotguide.pdf from http://www.graphviz.org/Documentation/dotguide.pdf
*/
public class UpdateBlob {

    // Connection parameters: placeholders only, substitute your own values.
    private static final String db = "localhost";
    private static final String port = "1521";
    private static final String sid = "orcl";
    private static final String user = "scott";
    private static final String pwd = "tiger";

    public static void main(String[] args) throws Exception {
        processFiles(new String[]{"lsp.pdf", "dotguide.pdf"}, Boolean.valueOf(args[0]));
    }

    public static void processFiles(String[] fileNames, boolean forward) throws Exception {
        if (!forward) {
            ArrayUtils.reverse(fileNames);
        }
        int idx = 1;
        for (String fname : fileNames) {
            insert(idx++, fname);
        }
    }
    private static void insert(int idx, String fname) throws Exception {
        Connection conn = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            DriverManager.registerDriver(new OracleDriver());
            conn = DriverManager.getConnection("jdbc:oracle:thin:@" + db + ":" + port + ":" + sid, user, pwd);
            ps = conn.prepareStatement("select j from x where i = ? for update");
            ps.setLong(1, idx);
            rs = ps.executeQuery();
            if (rs.next()) {
                FileInputStream instream = new FileInputStream(fname);
                BLOB blob = ((OracleResultSet) rs).getBLOB(1);
                OutputStream outstream = blob.setBinaryStream(1L);
                copy(instream, outstream);
                instream.close();
                outstream.close();
            }
            rs.close();
            ps.close();
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
            throw new Exception(e);
        }
    }
}
Oracle version: 11.1.0.7.0 - 64bit
I even tried the standard JDBC API instead of Oracle's specific one (used in the example above), without any success.
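For reference, a minimal sketch of what such a standard-JDBC variant might look like (a reconstruction under the same table layout, not the original attempt). It runs into the same problem, because writing through the LOB locator's stream starting at position 1 overwrites the existing data but does not truncate it, so writing fewer bytes than are already stored leaves the old trailing bytes behind:

// Assumed reconstruction of the locator-based update via plain java.sql.
// Blob.setBinaryStream(1L) writes from position 1 without truncating the
// existing LOB, so a shorter write leaves the old tail in place.
PreparedStatement ps = conn.prepareStatement("select j from x where i = ? for update");
ps.setInt(1, idx);
ResultSet rs = ps.executeQuery();
if (rs.next()) {
    java.sql.Blob blob = rs.getBlob(1);
    OutputStream out = blob.setBinaryStream(1L);
    FileInputStream in = new FileInputStream(fname);
    copy(in, out);   // org.apache.commons.io.IOUtils.copy
    in.close();
    out.close();
}
rs.close();
ps.close();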
Recommended Answer
Much simpler:
PreparedStatement pstmt =
    conn.prepareStatement("update blob_table set blob = ? where id = ?");
File blob = new File("/path/to/picture.png");
FileInputStream in = new FileInputStream(blob);
// the cast to int is necessary because JDBC 4 also added a version of this
// method that takes a long length, but that one is not implemented by the
// Oracle driver
pstmt.setBinaryStream(1, in, (int) blob.length());
pstmt.setInt(2, 42); // set the PK value
pstmt.executeUpdate();
conn.commit();
pstmt.close();
in.close();
It works the same way with an INSERT statement: there is no need for empty_blob()
and a second UPDATE statement.
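For example, an INSERT under the same approach might look like this (table and column names assumed to match the UPDATE example above):

// Hypothetical INSERT variant: the driver streams the file straight into the
// BLOB column, so no empty_blob() placeholder or follow-up UPDATE is needed.
PreparedStatement ins =
    conn.prepareStatement("insert into blob_table (id, blob) values (?, ?)");
File doc = new File("/path/to/picture.png");
FileInputStream in = new FileInputStream(doc);
ins.setInt(1, 42);                              // PK value
ins.setBinaryStream(2, in, (int) doc.length()); // stream the whole file
ins.executeUpdate();
conn.commit();
in.close();
ins.close();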