Problem description
I am creating a text file on the fly based on data gathered in a DataTable.
My current (test) dataset has 773 rows, which I want to write out with a comma (,) between the columns and each row on a separate line. Here's my attempt:
string FileName = "TEST_" + System.DateTime.Now.ToString("ddMMyyhhmm") + ".txt";
StreamWriter sw = File.CreateText(@"PATH...." + FileName);

foreach (DataRow row in Product.Rows)
{
    bool firstCol = true;
    foreach (DataColumn col in Product.Columns)
    {
        if (!firstCol) sw.Write(",");
        sw.Write(row[col].ToString());
        firstCol = false;
    }
    sw.WriteLine();
}
The output is a text file, as expected. Most of the data appears almost instantly, but the file never contains all 773 rows. I have tried this several times; the number of rows written varies from 720 to 750, sometimes stopping partway through row 773, but it never finishes.
I haven't interfered with or stopped the application at any time.
Any ideas?
Answer
Short answer
You need to flush the Stream, by either wrapping the StreamWriter in a using block or calling sw.Flush() at the end of your foreach loop.
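Applied to the code from the question, a minimal fix could look like this (it reuses the question's Product DataTable and keeps the PATH.... placeholder as-is):

```csharp
// Wrapping the StreamWriter in a using block guarantees Dispose() runs,
// which flushes the buffer and closes the file, even if an exception occurs.
string FileName = "TEST_" + System.DateTime.Now.ToString("ddMMyyhhmm") + ".txt";

using (StreamWriter sw = File.CreateText(@"PATH...." + FileName))
{
    foreach (DataRow row in Product.Rows)
    {
        bool firstCol = true;
        foreach (DataColumn col in Product.Columns)
        {
            if (!firstCol) sw.Write(",");
            sw.Write(row[col].ToString());
            firstCol = false;
        }
        sw.WriteLine();
    }
} // Dispose() is called here: any rows still sitting in the buffer are written out.
```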
Long answer
When using a writer of any kind (StreamWriter, BinaryWriter, TextWriter) to write to an underlying stream/device (in your case a file), it will not write directly to the file, as that is expensive compared to using a buffer.
(1) Imagine the following:

- You're looping through 10,000 records
- Each record is written to the file directly when invoking .Write(), before moving on to the next record.
(2) How it really works:

- You're looping through 10,000 records
- Each record is written to a buffer when invoking .Write(), before moving on to the next record
- When the buffer reaches a certain size/number of elements, it is flushed to disk.
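A small self-contained sketch of behaviour (2); the file name records.txt and the 4 KB buffer size are made up for the example:

```csharp
using System.IO;
using System.Text;

class BufferDemo
{
    static void Main()
    {
        // StreamWriter buffers in memory; the last argument is the buffer size in bytes.
        using (var sw = new StreamWriter("records.txt", false, Encoding.UTF8, 4096))
        {
            for (int i = 0; i < 10000; i++)
            {
                // Each WriteLine() goes to the in-memory buffer first; the file
                // on disk only grows when the buffer fills up and is flushed.
                sw.WriteLine("record " + i);
            }
            sw.Flush(); // force the last, partially filled buffer to disk explicitly
        } // Dispose() at the end of the using block would also have flushed it.
    }
}
```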
You can see that with (2) you gain a lot of IO performance compared to (1), as each new record doesn't need to be written to the file immediately.
So (1) would need to write to the file 10,000 times, while (2) only has to write a fraction of that (which could be 5,000 times or 2,000 times; it depends on how the buffer is implemented).
Wrapping the Stream in a using block, or calling Flush() on it whenever you're done with the Stream, will make it flush the buffer (and the missing data) to the file.
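For reference, a using block is just shorthand for a try/finally that calls Dispose(), which flushes the buffer before releasing the file. Roughly (the path variable is hypothetical):

```csharp
// Equivalent of: using (StreamWriter sw = File.CreateText(path)) { ... }
StreamWriter sw = File.CreateText(path);
try
{
    sw.WriteLine("some data");
}
finally
{
    // Dispose() flushes any buffered data and releases the file handle,
    // even if an exception was thrown inside the try block.
    sw.Dispose();
}
```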