It’s obvious that you’ll often run multiple instances (even on the same server) to get a logical separation of databases for security purposes.
But when does it make sense on the performance side to split databases across multiple instances? Say you have 200 databases: 100 of them are heavy OLTP and 100 of them are mostly OLAP.
What is wise in this scenario? What is the general rule of thumb? Is there such a thing as instance performance tuning?
I don’t think you’re going to come up with a perfect formula. It is almost impossible to predict who will consume most of your resources, and at what times. What you will likely need to do is come up with some predictable division based on information we don’t have (e.g. client A is 24/7 and demanding, but client B runs 9-5 only). You might start with 50 OLTP and 50 OLAP databases on one instance, and the other 50/50 on the second. The nice thing about database-level separation is that if one of those clients turns out to be really busy, you can move databases around relatively easily to restore balance – either trading one busy database for 10 quieter databases on the other instance, or perhaps introducing a third instance.
In my previous job we had a system where 500 clients had databases on a single instance. To lighten the load and mitigate unpredictable performance spikes, we pushed the reporting work that didn’t need to be real-time to a secondary server. Each night, during a window when clients were blocked out of the system, we would take a full backup, restore it on the secondary with recovery, and mark the database as read-only. Then they could run their reports all day long with absolutely no locking, and they were quite happy.
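The nightly refresh could be scripted roughly like this. This is only a sketch of the idea, not the exact jobs we ran; the database name, backup path, and instance topology here are all assumptions:

```sql
-- On the primary (OLTP) instance: take the full backup to a share
-- the reporting instance can reach. Names and paths are hypothetical.
BACKUP DATABASE ClientDB
    TO DISK = N'\\ReportServer\Backups\ClientDB.bak'
    WITH INIT;

-- On the reporting instance: overwrite last night's copy and bring
-- it online immediately (WITH RECOVERY), since no further log
-- restores are coming.
RESTORE DATABASE ClientDB
    FROM DISK = N'\\ReportServer\Backups\ClientDB.bak'
    WITH REPLACE, RECOVERY;

-- Mark it read-only so day-long report queries take no locks that
-- matter and nobody can write to the stale copy.
ALTER DATABASE ClientDB SET READ_ONLY WITH ROLLBACK IMMEDIATE;
```

In practice you’d wrap this in a SQL Server Agent job per database (or a cursor over the client list) and run it inside the nightly maintenance window.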