Problem description
I used pg_dump to export the data, including large objects (LO), from Postgres 9.4 like this:
$ pg_dump fhir -O -b > fhir.sql
The LO statements in the generated fhir.sql look like this:
SET standard_conforming_strings = on;
SELECT pg_catalog.lowrite(0, '\x1f8b0800000000000000a5903b6ec3300c86efa2d9b1dad5a728daa2');
When I executed \i fhir.sql in Postgres 8.2, I got this error:
ERROR: invalid input syntax for type bytea
When I SET standard_conforming_strings = off, the data was inserted (with a warning), but the row in my pg_largeobject table was:
14 | 0 | \0378b0800000000000000a5903b6ec3300c86efa2d9b1dad5a728daa2
The original \x1f was changed to \037; I tested the result, and it is no longer my original zip file…
How can I fix this?
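The corruption can be reproduced outside Postgres. Under escape-string rules (what a pre-9.0 server applies when standard_conforming_strings is off), the \x1f at the start of the literal is consumed as a single hex escape and the remaining hex digits stay as literal ASCII text, whereas the dump intended the whole string to be a 9.x hex-format bytea. A minimal Python illustration, using Python's unicode_escape codec as a rough stand-in for Postgres's escape-string parsing (an analogy, not the server's actual code path):

```python
# A truncated piece of the dump's hex-format bytea literal
literal = r'\x1f8b0800'

# Escape-style parsing consumes \x1f as one hex escape;
# the trailing "8b0800" survives as literal ASCII text
mangled = literal.encode().decode('unicode_escape').encode('latin-1')
print(mangled)   # b'\x1f8b0800' -> byte 0x1f followed by the text "8b0800"

# What the dump actually meant: hex-decode everything after the \x prefix
intended = bytes.fromhex('1f8b0800')
print(intended)  # b'\x1f\x8b\x08\x00' -> the real gzip header bytes

assert mangled != intended
```

This is why the stored value starts with \037 (octal for 0x1f) followed by plain text, and why the file is no longer a valid zip/gzip archive.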
Update:
I inserted the same original data into Greenplum (based on PostgreSQL 8.2) with a Hibernate program, then used pg_dump to export it; its format was like this:
SELECT pg_catalog.lowrite(0, '\\037\\213\\010\\000\\000\\000\\000\\000\\000\\000\\245\\220;n\\3030\\014')
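The two dump formats encode the same bytes: Postgres 9.x writes one hex string with a \x prefix, while the 8.2-era dump writes one octal escape per non-printable byte (\037 = 0x1f, \213 = 0x8b, and so on). A quick sanity check in Python, decoding the octal escapes after the SQL-level backslash doubling has been removed:

```python
# First bytes of the 8.2-style dump (octal escapes, SQL backslash-doubling removed)
octal_form = r'\037\213\010\000'
decoded = octal_form.encode().decode('unicode_escape').encode('latin-1')

# First bytes of the 9.4-style dump (hex format, after the \x prefix)
hex_form = bytes.fromhex('1f8b0800')

assert decoded == hex_form  # same underlying bytes, two notations
print(decoded)
```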
Recommended answer
UPDATE
I found an easier way: just use pg_dump -b -Fc to export the data, including LOs, into a custom-format archive, then use a pg_restore of the same version as the pg_dump you used for the export to import the archive into Greenplum.
Scripts:
$ pg_dump fhir -O -a -Fc -f fhir.dump
$ pg_restore -h mdw -d fhir -U gpadmin -a fhir.dump > errors.log 2>&1
A point I had missed is that lo_export writes binary data, which can be imported into Greenplum perfectly.
My solution (for my situation):
- export the plain data (using pg_dump, excluding LOs) and the LO data (using lo_export) from Postgres 9.4 separately
- import the plain-data dump into Greenplum
- import the LO data with lo_import, which generates new oids (lo_import with an explicit oid is only available from Postgres 8.4 on), and meanwhile update the corresponding oid of the referencing table with these new oids
Example scripts:
export plain data from Postgres 9.4:
$ pg_dump fhir -O -a -n public -f fhir.dmp
export the LOs from a table containing LO data, naming the exported files with the original oids:
SELECT lo_export(res_text, '/usr/local/pgsql/export/res_lo/'||res_text) FROM hfj_resource;
import the plain data into Greenplum:
\i fhir.dmp
create a function in Greenplum to import the LOs and update the referenced oids:
CREATE FUNCTION import_lo(tab_name text, lo_path text) RETURNS void AS $$
DECLARE
    res record;
    new_oid oid;
BEGIN
    -- walk every row of the given table and re-import its large object;
    -- the exported files are named after the original oids
    FOR res IN EXECUTE 'select res_text from '||$1 LOOP
        new_oid := lo_import($2||'/'||res.res_text);
        RAISE NOTICE 'res_text from % to %', res.res_text, new_oid;
        -- point the row at the newly imported large object
        EXECUTE 'update '||$1||' set res_text='||new_oid||' where res_text='||res.res_text;
    END LOOP;
    RAISE NOTICE 'import large object into % finished .....', $1;
END;
$$ LANGUAGE plpgsql;
import the LOs:
SELECT import_lo('hfj_resource', '/home/gpadmin/export/res_lo');