
Problem description

I use Fluent NHibernate to import data into a SQL Server database.

I used Automapping on my entities (which worked perfectly):

Fluently.Configure(nhibernateConfig_)
    .Mappings(map_ => map_.AutoMappings.Add(AutoMap
        .AssemblyOf<EntityMapping>(new AutomapConfiguration())
        .UseOverridesFromAssemblyOf<EntityMapping>()
        .Conventions.AddFromAssemblyOf<EntityMapping>()));

and use QueryOver to get data from the database:

public AttributeTranslation GetOrCreateAttributeTranslation(Attribute attribute_, string language_)
{
    AttributeTranslation translation = _session
        .QueryOver<AttributeTranslation>()
        .And(trans_ => trans_.Attribute == attribute_)
        .And(trans_ => trans_.Language == language_)
        .SingleOrDefault();

    if (translation == null) {
        translation = new AttributeTranslation() {
            Language = language_,
            Attribute = attribute_
        };
        Save(translation);
    }

    return translation;
}

Unfortunately, as I process my input, each read query and each committed transaction takes longer and longer as the session progresses.

// Surrounded in several for loops that process the input
using (ITransaction transaction = _repo.BeginTransaction()) {
    Stopwatch transGetWatch = Stopwatch.StartNew();
    AttributeTranslation translation = _repo.GetOrCreateAttributeTranslation(attribute, lang.Key);
    transGetWatch.Stop();
    translation.DisplayName = value;
    _repo.Save(translation);

    Stopwatch commitWatch = Stopwatch.StartNew();
    transaction.Commit();
    commitWatch.Stop();

    Console.Write("\rGetTrans[{0}] Commit[{1}]                    ",
        transGetWatch.ElapsedMilliseconds,
        commitWatch.ElapsedMilliseconds);
}

I tried switching to HiLo for ID generation, to no avail:

public class AttributeTranslationMapOverride : IAutoMappingOverride<AttributeTranslation>
{
    public void Override(AutoMapping<AttributeTranslation> map_)
    {
        map_.Id(attr_ => attr_.Id).GeneratedBy.HiLo("attributeTranslationMaxLo");
    }
}

The queries and commits start out at around 3 to 4 ms per call and end up somewhere around 80 to 100 ms.

I don't necessarily need to improve the overall performance; I just want it to stay stable at the initial 3 to 4 ms per call.

What could be going on here?

Recommended answer

As an ORM, NHibernate is not well suited for batch work, hence the troubles you are encountering. It does, however, have some features that help when using it for batch work anyway.

As suspected by David Osborne, the session's first-level cache may grow too big when using an ISession for batch work such as your data import.

There are a number of solutions you can choose from:

  • Clear the session (ISession.Clear), as suggested by David Osborne and recommended by the reference documentation. If each row of data never references data in common with previous rows, clearing after every row will not hurt. Clearing every 50 or 100 rows should be enough to keep the growing first-level cache from hurting performance too much; adjust according to your findings.
  • Sessions are cheap. Instead of clearing, you may discard the session and use a new one. (But avoid doing that for each row; it is not completely free.)
  • Consider using a stateless session (ISessionFactory.OpenStatelessSession()). Stateless sessions can be better suited for some batch processing.
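The first two options could be sketched along the following lines; `rows`, `ProcessRow`, and `_sessionFactory` are hypothetical placeholders standing in for your own input data and per-row import logic, so treat this as an illustration of the pattern rather than a drop-in fix:

```csharp
// Flush and clear the session every N rows so the first-level cache
// stays small instead of growing with every entity the session touches.
const int batchSize = 50; // tune this according to your measurements

using (ISession session = _sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    int count = 0;
    foreach (var row in rows) // 'rows' stands in for your input data
    {
        ProcessRow(session, row); // your per-row import logic

        if (++count % batchSize == 0)
        {
            session.Flush(); // push pending changes to the database
            session.Clear(); // evict everything from the first-level cache
        }
    }
    tx.Commit();
}
```

Be aware that entities loaded before a Clear (such as the attribute_ passed into GetOrCreateAttributeTranslation) become detached afterwards; re-fetch them, or re-associate them with the session, before reusing them.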

Notes:

Your current implementation seems to save only one entity per transaction. If you can instead save a bunch of entities of the same classes, you may benefit from statement batching (with SQL Server at least) and improve performance. It would also reduce the number of connection releases and re-acquisitions, since by default the SQL connection is released after each transaction.
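To combine several saves per transaction with ADO.NET batching, you can set a batch size when building the session factory. A minimal sketch, assuming Fluent NHibernate's SQL Server configuration API (the batch size of 50 is an illustrative value, not a recommendation):

```csharp
// AdoNetBatchSize lets NHibernate group INSERT/UPDATE statements into
// batches instead of issuing one database round-trip per saved entity.
ISessionFactory factory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008
        .ConnectionString(connectionString) // your connection string
        .AdoNetBatchSize(50))               // illustrative value; measure and tune
    .Mappings(map_ => map_.AutoMappings.Add(AutoMap
        .AssemblyOf<EntityMapping>(new AutomapConfiguration())
        .UseOverridesFromAssemblyOf<EntityMapping>()
        .Conventions.AddFromAssemblyOf<EntityMapping>()))
    .BuildSessionFactory();
```

Note that batching only works when NHibernate can assign IDs client-side, which your HiLo override already provides; it is disabled with native identity columns.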
