Best practice – Log shipping / replication for archive / reporting database

Question :

We have a setup with a product that has a production database (many customer installations). The production database has been around for many years, and there are performance issues that are getting worse. However, the customers like to keep the data for generating different kinds of reports and statistics. Sometimes it is also important to be able to audit product logs.

So what we want to do is set up an archive / reporting database that the customer manages themselves, while the production database holds only “active” data and is maintained by a clean-up job that removes old data.
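A clean-up job like this is often implemented as a batched delete, so that lock duration and transaction-log growth stay manageable. A minimal sketch, where the table name, date column, and 12-month retention window are all assumptions:

```sql
-- Hypothetical purge job: delete rows older than the retention window
-- in small batches to limit blocking and log growth.
DECLARE @cutoff datetime = DATEADD(MONTH, -12, GETDATE());

WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM dbo.ProductLog          -- assumed table name
    WHERE LogDate < @cutoff;     -- assumed date column

    IF @@ROWCOUNT = 0 BREAK;     -- nothing left to purge
END
```

Running this from a SQL Server Agent job during a quiet window keeps each delete small enough to avoid long blocking on the busy production tables.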

There are many users (10,000+) who access the production database daily, but very few who access the archive database.

What is the best way to achieve this: built-in replication, log shipping, or writing our own synchronisation mechanism? Will this also help with the performance issues on the production database?

If we decide, for instance, on log shipping, is it possible to prevent delete operations on the production database from being propagated to the archive database?


Answer :

I would recommend log shipping, but only if you do not need to prevent delete operations: log shipping keeps the target database identical to the source database.
Log shipping is a relatively simple mechanism. It simply backs up the transaction log, copies the backup over, and restores it on the target database. It is somewhat simpler to set up and maintain than replication.
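Under the hood, the log-shipping jobs boil down to ordinary BACKUP and RESTORE statements. Roughly, with the database name, share, and file name as placeholders:

```sql
-- On the primary server: the backup job backs up the transaction log.
BACKUP LOG ProdDB
    TO DISK = N'\\backupshare\logs\ProdDB_20240101.trn';

-- The copy job then moves the .trn file to the secondary server.

-- On the secondary server: the restore job applies the log.
-- NORECOVERY keeps the database in a restoring state so the next
-- log backup can be applied; every committed change on the primary,
-- including DELETEs, is replayed here.
RESTORE LOG ProdDB
    FROM DISK = N'\\backupshare\logs\ProdDB_20240101.trn'
    WITH NORECOVERY;
```

Because the restore replays the log byte for byte, there is no hook to filter out individual operations such as deletes.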

Replication also keeps the target database completely in sync with the source database. With transactional or snapshot replication there is no possibility to exclude delete operations.
The only exception is merge replication, where excluding DELETE statements is still possible. However, this option is deprecated and will be removed in a future version of SQL Server, so that is the first of your worries, I think.

Performance-wise, replication can also have a big impact. This depends heavily on the database and the operations against it, so try it in a test environment to see whether the performance loss is acceptable.
Personally, I’m not a huge fan of merge replication as I have experienced many problems with it.

Summarizing: if you want to prevent delete operations, I would recommend writing your own synchronization mechanism. If you don’t want to do that, and can live with the issues mentioned above, merge replication is also usable.
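A home-grown synchronization mechanism that never propagates deletes can be as simple as a scheduled job that copies newly created rows past a watermark. A sketch, in which every database, table, and column name is an assumption:

```sql
-- Hypothetical one-way sync: copy rows created since the last run
-- into the archive. Deletes on the source are simply never copied,
-- so the archive keeps its history.
DECLARE @lastSync datetime, @now datetime = GETDATE();

SELECT @lastSync = LastSyncTime
FROM ArchiveDB.dbo.SyncWatermark;            -- assumed watermark table

INSERT INTO ArchiveDB.dbo.ProductLog (LogId, LogDate, Message)
SELECT s.LogId, s.LogDate, s.Message
FROM ProdDB.dbo.ProductLog AS s
WHERE s.LogDate > @lastSync
  AND s.LogDate <= @now;                     -- upper bound avoids gaps

UPDATE ArchiveDB.dbo.SyncWatermark
SET LastSyncTime = @now;
```

Capturing the upper bound before the copy, and advancing the watermark only to that bound, avoids missing rows that are inserted while the job runs.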

I have been VERY successful and happy with log shipping from a PROD set of databases to a DR environment. On the DR side, I keep the databases in STANDBY mode, which allows for “active” queries (some reports, even). File movement is scheduled every fifteen minutes, and I cut the retention on both sides down to two days! I figured that as long as I had a valid, functioning SQL environment that was also doing FULL backups, I did not need to keep a boatload of backup files!
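The STANDBY mode mentioned here is chosen at restore time; a sketch, with paths as placeholders:

```sql
-- Restoring WITH STANDBY leaves the database read-only between
-- restores, so reports can query it. The undo file holds the
-- uncommitted transactions that are rolled back for readers and
-- reapplied before the next log restore.
RESTORE LOG ProdDB
    FROM DISK = N'\\backupshare\logs\ProdDB_20240101.trn'
    WITH STANDBY = N'D:\standby\ProdDB_undo.dat';
```

One caveat with this setup: the restore job needs exclusive access, so any report connections must be disconnected each time the next log backup is applied.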

A VERY important piece of log shipping is to MAKE SURE YOU MIRROR the users (and permissions) from source to target!
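The reason this matters: logins live in `master` on each server, while database users travel inside the shipped database, so users on the secondary typically end up orphaned unless matching logins exist there. A sketch of re-linking one, with the login and user names assumed:

```sql
-- On the secondary server: create a matching login, then re-map the
-- orphaned database user to it.
CREATE LOGIN ReportUser WITH PASSWORD = N'<strong password here>';

-- Note: ALTER USER needs a writable database, so a database still in
-- STANDBY/restoring mode can only be fixed after it is recovered
-- (i.e., at failover time); creating the logins in advance is what
-- keeps that step quick.
ALTER USER ReportUser WITH LOGIN = ReportUser;
```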

So far this is working perfectly.
