This article looks at whether splitting an update query into smaller pieces can improve performance. It may be a useful reference for anyone facing the same problem.

Problem Description

I frequently import a 2 GB CSV file with 24 million rows into SQL Server. I import it as text and then carry out the conversion via SELECT xxx INTO.

Will less memory be used if I split the conversion into separate queries over different sections of the data?

Recommended Answer

To be honest, it may be better not to use that method at all, but instead to use BULK INSERT as specified here:

It's quite simple:

BULK INSERT dbo.TableForBulkData
FROM 'C:\BulkDataFile.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
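
If the file begins with a header row (an assumption; the question doesn't say), BULK INSERT can skip it with the FIRSTROW option:

BULK INSERT dbo.TableForBulkData
FROM 'C:\BulkDataFile.csv'
WITH
(
    FIRSTROW = 2,          -- skip the header row (assumes the file has one)
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)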

If you're doing it through C#, you can use the SqlBulkCopy class, or if you need to do it from the command line, you can always use BCP.
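
As a rough sketch of the C# route (the connection string, column layout, and batch size below are placeholders, not details from the question), SqlBulkCopy can stream the file in batches so the whole 2 GB file is never held in memory at once:

using System.Data;
using System.Data.SqlClient; // Microsoft.Data.SqlClient in newer projects
using System.IO;

class BulkLoader
{
    static void Main()
    {
        // Placeholder schema: adjust the columns to match the real CSV.
        var table = new DataTable();
        table.Columns.Add("Col1", typeof(string));
        table.Columns.Add("Col2", typeof(string));

        using var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true");
        conn.Open();

        using var bulk = new SqlBulkCopy(conn)
        {
            DestinationTableName = "dbo.TableForBulkData",
            BatchSize = 10_000 // rows per server round trip
        };

        foreach (var line in File.ReadLines(@"C:\BulkDataFile.csv"))
        {
            var fields = line.Split(',');
            table.Rows.Add(fields[0], fields[1]);

            // Flush in batches so the full file is never materialized in memory.
            if (table.Rows.Count == 10_000)
            {
                bulk.WriteToServer(table);
                table.Clear();
            }
        }

        if (table.Rows.Count > 0)
            bulk.WriteToServer(table); // final partial batch
    }
}

From the command line, the equivalent BCP call looks something like this (server and database names are placeholders):

bcp MyDatabase.dbo.TableForBulkData in "C:\BulkDataFile.csv" -S MyServer -T -c -t,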

Note that the method you're currently using can be up to 10 times slower:

Data can be inserted into the database from a CSV file using the conventional SqlCommand class. But this is a very slow process. Compared to the other three ways I have already discussed, this process is at least 10 times slower. It is strongly recommended not to loop through the CSV file row by row and execute SqlCommand for every row to insert a bulk amount of data from the CSV file into the SQL Server database.
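
For contrast, a sketch of the discouraged pattern (table and column names are placeholders): one INSERT round trip per CSV row, which means roughly 24 million commands for this file.

using System.Data.SqlClient;
using System.IO;

class SlowLoader
{
    static void Main()
    {
        using var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true");
        conn.Open();

        // One SqlCommand execution per row: this is the pattern the quote warns against.
        foreach (var line in File.ReadLines(@"C:\BulkDataFile.csv"))
        {
            var fields = line.Split(',');
            using var cmd = new SqlCommand(
                "INSERT INTO dbo.TableForBulkData (Col1, Col2) VALUES (@c1, @c2)", conn);
            cmd.Parameters.AddWithValue("@c1", fields[0]);
            cmd.Parameters.AddWithValue("@c2", fields[1]);
            cmd.ExecuteNonQuery();
        }
    }
}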

This concludes the article on splitting an update query to improve performance; we hope the recommended answer is helpful.
