Question
I have a tab-delimited text file of around 100 MB. I want to store the data from this file in a SQL Server table; it comes to about 1 million records once loaded. What is the best way to achieve this?
I could create an in-memory DataTable in C# and then upload it to SQL Server, but that would load the entire 100 MB file into memory. What if the file gets bigger?
Answer
No problem; CsvReader will handle most delimited text formats, and it implements IDataReader, so it can be used to feed a SqlBulkCopy. For example:
// CsvReader here is the LumenWorks "Fast CSV Reader" (LumenWorks.Framework.IO.Csv);
// SqlBulkCopy lives in System.Data.SqlClient.
using (var file = new StreamReader(path))
using (var csv = new CsvReader(file, true, '\t')) // true = first row is headers; '\t' = tab-delimited
using (var bcp = new SqlBulkCopy(connectionString))
{
    bcp.DestinationTableName = "Foo";
    bcp.WriteToServer(csv);
}
Note that CsvReader has lots of options for more subtle file handling (specifying the delimiter rules, etc.). SqlBulkCopy is the high-performance bulk-load API and is very efficient. This is a streaming reader/writer API; it does not load all the data into memory at once.
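For very large loads it can also help to tune SqlBulkCopy itself: batching commits rows in chunks rather than in one huge transaction, and explicit column mappings decouple the file's header names from the table's column names. A minimal sketch, assuming the same LumenWorks CsvReader as above; the table name "Foo", the "Name"/"CustomerName" mapping, and the connection string are placeholders for illustration, not part of the original answer:

using System.IO;
using System.Data.SqlClient;
using LumenWorks.Framework.IO.Csv;

using (var file = new StreamReader(path))
using (var csv = new CsvReader(file, true, '\t')) // headers in first row, tab-delimited
using (var bcp = new SqlBulkCopy(connectionString))
{
    bcp.DestinationTableName = "Foo";   // placeholder table name
    bcp.BatchSize = 10000;              // commit every 10,000 rows instead of one giant transaction
    bcp.BulkCopyTimeout = 0;            // disable the 30-second default timeout for long loads
    bcp.EnableStreaming = true;         // stream rows from the IDataReader (.NET 4.5+)
    // Map a file header name to a differently named table column (hypothetical names):
    bcp.ColumnMappings.Add("Name", "CustomerName");
    bcp.WriteToServer(csv);
}

Because this writes to a live SQL Server instance, it cannot run standalone; treat it as a sketch of the available knobs rather than a drop-in script.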