Problem description
I get a segmentation fault working with a xarray dataset that was created from multiple grib2 files. The fault occurs when writing out to a netcdf as well as when writing to a dataframe. Any suggestions on what is going wrong are appreciated.
import os
import xarray as xr

files = os.listdir(download_dir)  # bare filenames, relative to download_dir
Example of files (from http://dd.weather.gc.ca/model_hrdps/west/grib2/00/000/): 'CMC_hrdps_west_RH_TGL_2_ps2.5km_2016072800_P015-00.grib2', ..., 'CMC_hrdps_west_TMP_TGL_2_ps2.5km_2016072800_P011-00.grib2'
# import and combine all grib2 files
ds = xr.open_mfdataset(files, concat_dim='time', engine='pynio')
<xarray.Dataset>
Dimensions: (time: 48, xgrid_0: 685, ygrid_0: 485)
Coordinates:
gridlat_0 (ygrid_0, xgrid_0) float32 44.6896 44.6956 44.7015 44.7075 ...
* ygrid_0 (ygrid_0) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ...
* xgrid_0 (xgrid_0) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ...
* time (time) datetime64[ns] 2016-07-28T01:00:00 2016-07-28T02:00:00 ...
gridlon_0 (ygrid_0, xgrid_0) float32 -129.906 -129.879 -129.851 ...
Data variables:
u (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
gridrot_0 (time, ygrid_0, xgrid_0) float32 nan nan nan nan nan nan nan ...
Qli (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
Qsi (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
p (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
rh (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
press (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
t (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
vw_dir (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ...
Writing to netcdf
ds.to_netcdf('test.nc')
Segmentation fault (core dumped)
Answer
Try adding preprocess=lambda x: x.load() to the open_mfdataset call. This ensures that each dataset is fully loaded into memory before the next one is processed, instead of leaving many lazy PyNIO file handles open at once.
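A runnable sketch of the suggested fix. Since the grib2 files and the pynio engine are not assumed to be available here, two tiny netCDF files (the part_*.nc names and toy 't' variable are hypothetical stand-ins) are written first, then recombined with the preprocess callback:

```python
import numpy as np
import xarray as xr

# Stand-ins for the grib2 files: two one-timestep datasets on a small grid.
paths = []
for i, hour in enumerate(["2016-07-28T01:00:00", "2016-07-28T02:00:00"]):
    part = xr.Dataset(
        {"t": (("time", "ygrid_0", "xgrid_0"), np.full((1, 2, 3), 280.0 + i))},
        coords={"time": [np.datetime64(hour)]},
    )
    path = f"part_{i}.nc"
    part.to_netcdf(path)
    paths.append(path)

# The fix: preprocess=lambda d: d.load() eagerly pulls each file into
# memory as it is opened, before the pieces are concatenated along 'time'.
combined = xr.open_mfdataset(
    paths,
    combine="nested",        # newer xarray requires combine with concat_dim
    concat_dim="time",
    preprocess=lambda d: d.load(),
)
combined.to_netcdf("test.nc")  # no lazy file handles left open at write time
```

With the original call, writing out the combined dataset triggers reads through every still-open PyNIO handle at once; loading each dataset up front sidesteps that. Note that newer xarray versions require the combine="nested" argument alongside concat_dim, which the original post predates.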