I am working on an enterprise application that crunches a large amount of data every day; to do that it has a WINDOWS SERVICE application written in C# .NET 4. It also has a connection to SQL SERVER 2008 R2, but for some reason it (randomly) throws the following error from the synchronization table that stores JSON-serialized data:

Exception of type 'System.OutOfMemoryException' was thrown.
at System.Data.SqlClient.TdsParser.ReadPlpUnicodeChars(Char[]& buff, Int32 offst, Int32 len, TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.ReadSqlStringValue(SqlBuffer value, Byte type, Int32 length, Encoding encoding, Boolean isPlp, TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.ReadSqlValue(SqlBuffer value, SqlMetaDataPriv md, Int32 length, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.ReadColumnData()
at System.Data.SqlClient.SqlDataReader.ReadColumn(Int32 i, Boolean setTimeout)
at System.Data.SqlClient.SqlDataReader.GetValueInternal(Int32 i)
at System.Data.SqlClient.SqlDataReader.GetValues(Object[] values)


The table is a fairly generic table for keeping LOB data:

CREATE TABLE [dbo].[SyncJobItem](
 [id_job_item] [int] IDENTITY(1,1) NOT NULL,
 [id_job] [int] NOT NULL,
 [id_job_item_type] [int] NOT NULL,
 [id_job_status] [int] NOT NULL,
 [id_c] [int] NULL,
 [id_s] [int] NULL,
 [job_data] [nvarchar](max) NOT NULL,
 [last_update] [datetime] NOT NULL,
CONSTRAINT [PK_SyncJobItem] PRIMARY KEY CLUSTERED)


The failing LOB record has 36,231,800 characters of data in the job_data column, which (at 2 bytes per character, since nvarchar is stored as UTF-16) is about 70 MB of data.

Please consider that changing where the job data is stored (e.g., to disk) or anything similar is not an option for me. I would like to fix this error, so if anybody knows anything, please help!

Also, this error happens randomly on the same data. The system it runs on is VMware vCloud, i.e., I believe some big blade system. We have about 6 GB of RAM dedicated to the VM (the service uses at most 1-2 GB), the service is compiled as x64, and the system is x64 Windows 2008 R2 Standard. I have made sure that no single object holds more than 2 GB in memory, so that is not it; the error is also inside SqlClient, which in my 15 years of development experience I have never seen, and Google turns up nothing. The error is not on the database side either, since the database server has over 32 GB of RAM and uses only 20 GB at peak. The only things I use in this system that are not usual are multi-threading and a GC.Collect() after each job step (there are multiple steps on the data).

EDIT:

Here is the complete code where this problem occurs:

    internal static void ExecuteReader(IConnectionProvider conn, IList destination, IObjectFiller objectBuilder, string cmdText, DbParameterCollection parameters, CommandType cmdType, int cmdTimeout)
    {
        IDbCommand cmd = CreateCommand(conn.DBMS, cmdText, parameters, cmdType, cmdTimeout);
        cmd.Connection = conn.Connection;

        bool connIsOpennedLocally = EnsureOpenConnection(conn);
        try
        {
            AssignExistingPendingTransactionToCommand(conn, cmd);
            using (IDataReader reader = cmd.ExecuteReader(CommandBehavior.SingleResult))
            {
                objectBuilder.FillCollection(reader, destination);
                PopulateOutputParameterValues(parameters, cmd);
            }
        }
        finally
        {
            CloseConnectionIfLocal(conn, connIsOpennedLocally);
            cmd.Dispose();
        }
    }

...

    private void FillFromAlignedReader(ICollection<TEntity> collection, IDataReader openedDataReader, IDbTable table)
    {
        // Fastest scenario: the data reader fields match the entity fields completely.
        // It's safe to reuse the same array because GetValues() always overwrites all members, so memory is allocated only once.
        object[] values = new object[openedDataReader.FieldCount];
        while (openedDataReader.Read())
        {
            openedDataReader.GetValues(values);
            TEntity entity = CreateEntity(table, EntityState.Synchronized, values);
            collection.Add(entity);
        }
    }

Best answer

For those who run into this problem: after a lot of testing and reading MSDN (link), I have come to the conclusion that the maximum single field size that SqlDataReader can read in normal reading mode is around 70 MB on an x64 machine; beyond that, you need to switch the SqlCommand to CommandBehavior.SequentialAccess and stream the field contents.

Example code that works this way:

    ...
    behaviour = CommandBehavior.SequentialAccess;
    using (IDataReader reader = cmd.ExecuteReader(behaviour))
    {
       filler.FillData(reader, destination);
    }


When you read the data in a loop you need to fetch the columns in order, and when you reach the BLOB column you should call something like this (depending on the data type):

    ...
    private string GetBlobDataString(IDataReader openedDataReader, int columnIndex)
    {
        StringBuilder data = new StringBuilder(20000);
        char[] buffer = new char[1000];
        long startIndex = 0;

        // Read the nvarchar(max) column in 1000-character chunks; GetChars returns
        // how many characters were actually copied into the buffer.
        long dataReceivedCount = openedDataReader.GetChars(columnIndex, startIndex, buffer, 0, 1000);
        data.Append(buffer, 0, (int)dataReceivedCount);
        while (dataReceivedCount == 1000)
        {
            startIndex += 1000;
            dataReceivedCount = openedDataReader.GetChars(columnIndex, startIndex, buffer, 0, 1000);
            data.Append(buffer, 0, (int)dataReceivedCount);
        }

        return data.ToString();
    }

    private byte[] GetBlobDataBinary(IDataReader openedDataReader, int columnIndex)
    {
        // Same pattern for varbinary(max)/binary data, accumulated through a MemoryStream.
        MemoryStream data = new MemoryStream(20000);
        BinaryWriter dataWriter = new BinaryWriter(data);

        byte[] buffer = new byte[1000];
        long startIndex = 0;

        long dataReceivedCount = openedDataReader.GetBytes(columnIndex, startIndex, buffer, 0, 1000);
        dataWriter.Write(buffer, 0, (int)dataReceivedCount);
        while (dataReceivedCount == 1000)
        {
            startIndex += 1000;
            dataReceivedCount = openedDataReader.GetBytes(columnIndex, startIndex, buffer, 0, 1000);
            dataWriter.Write(buffer, 0, (int)dataReceivedCount);
        }

        data.Position = 0;
        return data.ToArray();
    }
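
Just to illustrate how these helpers plug into the row loop, here is a minimal sketch; the FillRowsSequentially name and the assumption that the wide job_data column is selected last are mine, not my exact filler code:

    // Hypothetical sketch: with CommandBehavior.SequentialAccess the columns must be
    // read in ascending ordinal order (and each one only once), so the SELECT puts the
    // nvarchar(max) job_data column last and it is streamed instead of read via GetValues().
    private void FillRowsSequentially(IDataReader reader, ICollection<object[]> destination)
    {
        while (reader.Read())
        {
            object[] values = new object[reader.FieldCount];
            for (int i = 0; i < reader.FieldCount - 1; i++)
            {
                // Narrow columns can still be read normally, as long as the order is kept.
                values[i] = reader.GetValue(i);
            }
            // The last column is the large BLOB, read in chunks by the helper above.
            values[reader.FieldCount - 1] = GetBlobDataString(reader, reader.FieldCount - 1);
            destination.Add(values);
        }
    }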


This approach should work for data up to roughly 1 GB-1.5 GB; beyond that it breaks because a single object cannot reserve a contiguous block of memory of sufficient size, so at that point either flush the buffer directly to disk or split the data into multiple smaller objects.
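
If you do hit that limit, a rough sketch of the flush-to-disk variant could look like this (WriteBlobDataToFile is a hypothetical name, the file path is up to the caller, and FileStream comes from System.IO):

    // Hypothetical variant of GetBlobDataBinary: each chunk returned by GetBytes is
    // written straight to a FileStream, so no single managed object ever has to hold
    // the whole BLOB in memory.
    private void WriteBlobDataToFile(IDataReader openedDataReader, int columnIndex, string filePath)
    {
        byte[] buffer = new byte[8192];
        long startIndex = 0;

        using (FileStream file = new FileStream(filePath, FileMode.Create, FileAccess.Write))
        {
            long dataReceivedCount;
            do
            {
                dataReceivedCount = openedDataReader.GetBytes(columnIndex, startIndex, buffer, 0, buffer.Length);
                file.Write(buffer, 0, (int)dataReceivedCount);
                startIndex += dataReceivedCount;
            } while (dataReceivedCount == buffer.Length);
        }
    }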
