
Problem Description



I have a new idea and a question that I would like to ask you.

We have an on-premise CRM application that we use essentially 24x7. We also run billing and payroll against the same CRM database, which serves both OLTP traffic and SSRS reports.

It looks like whenever we perform a front-end operation that inserts into and updates a couple of entities at the same time, our application freezes until that process finishes. For example: extracting payroll for 500 employees for their activity over the last two weeks. Basically, it summarizes the total working hours, pulls that number from the database, and writes/updates the record to say the extract has been completed. So for 500 employees we are looking at roughly 40K-50K rows of INSERT/SELECT/UPDATE statements combined.
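The extract described above can be sketched in T-SQL. All table and column names here (Timesheet, PayrollExtract, and so on) are assumptions for illustration, not the actual CRM schema:

```sql
-- Hypothetical schema: Timesheet(EmployeeId, WorkDate, Hours),
-- PayrollExtract(EmployeeId, PeriodStart, PeriodEnd, TotalHours, ExtractedAt)
DECLARE @PeriodEnd   date = CAST(SYSDATETIME() AS date);
DECLARE @PeriodStart date = DATEADD(DAY, -14, @PeriodEnd);

INSERT INTO dbo.PayrollExtract
    (EmployeeId, PeriodStart, PeriodEnd, TotalHours, ExtractedAt)
SELECT t.EmployeeId,
       @PeriodStart,
       @PeriodEnd,
       SUM(t.Hours),      -- summarize total working hours per employee
       SYSDATETIME()      -- record when the extract was taken
FROM dbo.Timesheet AS t
WHERE t.WorkDate >= @PeriodStart
  AND t.WorkDate <  @PeriodEnd
GROUP BY t.EmployeeId;
```

A single set-based INSERT...SELECT like this still touches tens of thousands of rows in one transaction, which is consistent with the blocking described.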

Nobody can do anything while this process runs! We are considering the following options to address this issue.

  1. Running this process in off-hours
  2. Or make a copy of the Dyna. CRM database and perform these operations (extracting thousands of records and running multiple reports) on the copy.

My questions are:

  1. First of all, how do we create the copy, and where should we create it (best practices)?
  2. How do we keep it synchronized in real time?
  3. If we only run SELECT statements against the copy, that is fine; but if we do any insert/update on the copy, how do we reflect that on the actual live DB? In short, how do we make sure the original and the copy stay synchronized with each other in real time?

I know I am asking a lot of questions, but as a SQL person stepping into the CRM team and offering suggestions, you know what I am trying to say.

Thanks in advance, folks, for any suggestions.

Solution

Well, to answer your question regarding a live "copy" of a database, a good solution is an AlwaysOn availability group.

https://blogs.technet.microsoft.com/canitpro/2013/08/19/step-by-step-creating-a-sql-server-2012-alwayson-availability-group/

Though I don't think that is what you want in this situation. AlwaysOn availability groups are typically for database instances that require very short failover windows. For example: if the primary DB server in the cluster goes down, it fails over to a secondary in a second or two at most, and end users notice only a slight hiccup.
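For reference, creating an availability group (after the Windows failover cluster and the AlwaysOn feature are already configured on both instances) looks roughly like this. The group, database, server, and endpoint names below are all placeholders:

```sql
-- Run on the primary replica; assumes WSFC is set up and AlwaysOn
-- is enabled on both instances (placeholder names throughout).
CREATE AVAILABILITY GROUP CrmAG
    FOR DATABASE CrmDb
    REPLICA ON
        N'SQLNODE1' WITH (
            ENDPOINT_URL      = N'TCP://SQLNODE1.corp.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC),
        N'SQLNODE2' WITH (
            ENDPOINT_URL      = N'TCP://SQLNODE2.corp.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC,
            -- a readable secondary is what would let reports run off the copy
            SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```

Note that a readable secondary is read-only: it answers question 3 above in the negative, since inserts/updates cannot be made on the copy and flowed back to the primary.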

What I think you would find better is to look at the INSERT statements that are hitting your database server and see why they are preventing you from pulling data. If they are truly locking the table, changing a large share of your reads to "nolock" reads might help remedy your situation.
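A NOLOCK hint (equivalent to READ UNCOMMITTED) lets a report read past writers' locks at the cost of possible dirty reads; enabling READ_COMMITTED_SNAPSHOT on the whole database is a common, safer alternative. Table and database names here are placeholders:

```sql
-- Per-query hint: read without taking shared locks (dirty reads possible).
SELECT t.EmployeeId, SUM(t.Hours) AS TotalHours
FROM dbo.Timesheet AS t WITH (NOLOCK)
GROUP BY t.EmployeeId;

-- Database-wide alternative: readers see the last committed row version
-- (from tempdb) instead of blocking on writers. Requires exclusive
-- access to the database to switch on.
ALTER DATABASE CrmDb SET READ_COMMITTED_SNAPSHOT ON;
```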

It would also help to know what kind of resources you have allocated and whether you have proper indexing on the core tables of your DB. If you don't have proper indexing, then many queries can take longer than normal, causing the locking you're seeing.
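Missing indexes on the filter and join columns of the busiest tables are a common cause of exactly this kind of blocking. Assuming a timesheet table like the one sketched earlier, a covering index for a date-ranged extract might look like this (names are illustrative):

```sql
-- Covers a WHERE on WorkDate plus GROUP BY EmployeeId without touching
-- the base table, so the extract scans only a narrow index range.
CREATE NONCLUSTERED INDEX IX_Timesheet_WorkDate_EmployeeId
    ON dbo.Timesheet (WorkDate, EmployeeId)
    INCLUDE (Hours);

-- SQL Server also tracks indexes the optimizer wished it had:
SELECT * FROM sys.dm_db_missing_index_details;
```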

Finally, I would recommend table partitioning if the tables you are pulling against are too large. This can potentially help with a lot of disk-speed issues and also help optimize your queries if you partition by time segment (i.e. make a new partition every X months, so when a query pulls from one time segment it only pulls from that one data file).

https://msdn.microsoft.com/en-us/library/ms190787.aspx
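A monthly scheme of the kind described could be sketched as follows; the boundary dates, filegroup mapping, and table definition are illustrative only:

```sql
-- Monthly range partitioning: each boundary value starts a new partition,
-- so queries filtered on WorkDate touch only the relevant partitions.
CREATE PARTITION FUNCTION pfMonthly (date)
    AS RANGE RIGHT FOR VALUES ('2018-01-01', '2018-02-01', '2018-03-01');

CREATE PARTITION SCHEME psMonthly
    AS PARTITION pfMonthly ALL TO ([PRIMARY]);  -- one filegroup for simplicity

-- Hypothetical timesheet table placed on the partition scheme,
-- partitioned by the date column the extracts filter on.
CREATE TABLE dbo.Timesheet
(
    EmployeeId int          NOT NULL,
    WorkDate   date         NOT NULL,
    Hours      decimal(5,2) NOT NULL
) ON psMonthly (WorkDate);
```

In production each partition would normally map to its own filegroup, and new boundaries would be added on a schedule (SPLIT/MERGE) as time moves forward.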

I would say you need to focus on efficiency more than on a "copy database", as from the sound of it your volumes aren't high enough to need anything like that. I currently have a SQL Server transactional database taking 10 million+ inserts a day, and I still have live reports running against it. You just need the resources and proper indexing to accommodate the load.

