8 steps to consolidate SQL Server instances (part 2)

In “8 steps to consolidate…part 1”, we covered the first four steps of the expert’s process for consolidation:

  1. Tier the applications
  2. Decide on a consolidation model
  3. Review security requirements
  4. Review specifics of data requirements

Here in part 2, we’ll cover the last four steps:

  5. Determine if databases are multi-tenant or single-tenant – Single-tenant applications mean you’ll always be adding new databases.
  6. Get picky about resources – Which databases or servers take up a ton of resources, whether the destination server can handle multiple instances (if that’s required), and so on.
  7. Plan physical resources – Disk layouts, varying space needs, speed requirements, etc.
  8. And, finally…

Interested in consolidation? Watch our recorded discussion, “Learn SQL Server: Consolidation”!

5. Determine if databases are multi-tenant or single-tenant

  • A multi-tenant application uses a single database for multiple customers.
  • A single-tenant application uses one database per customer.

[Video clip: multi-tenant vs single-tenant]

A single-tenant application will continually add new databases as new clients come on board. That’s something you’ll have to factor in when you get into allocating disk.

The other major concern is stupid code doing stupid things.

Stupid Code

[Video clip: stories of stupid code, doing stupid things]

Vendors of single-tenant applications often recommend a dedicated SQL Server instance. Worse, some of those applications simply assume they have one.

I worked in an environment where an app’s front end kicked off a stored procedure every so often. The procedure made sure the application login was a member of the db_owner role on every single database on the instance.

This makes sense (….kind of? maybe?) on a dedicated instance because the system is always adding new databases. But when you add a foreign database to the instance, the wrong login gains db_owner on the wrong database. Your security is essentially broken.
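
For illustration, here’s a minimal T-SQL sketch of the kind of procedure described above. The procedure name and the [AppLogin] user are made up, and it assumes the app’s database user already exists in every database:

    -- Hypothetical sketch only; the procedure name and [AppLogin] are placeholders,
    -- and it assumes a user named AppLogin already exists in each database.
    CREATE PROCEDURE dbo.EnsureAppIsOwnerEverywhere
    AS
    BEGIN
        DECLARE @sql nvarchar(max) = N'';

        -- Build one ALTER ROLE statement per online user database.
        SELECT @sql += N'USE ' + QUOTENAME(name)
                     + N'; ALTER ROLE db_owner ADD MEMBER [AppLogin]; '
        FROM sys.databases
        WHERE database_id > 4              -- skip the system databases
          AND state_desc = N'ONLINE';

        EXEC (@sql);
    END;

Run on a shared instance, that loop is exactly how the wrong login ends up owning the wrong database.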

6. Get picky about resources

You’d like to think there’s a systematized or automated way of determining what resources a server needs. But it mostly comes down to experience with a particular application.

Baselines

When I have the opportunity to do my due diligence, a few tasks are key:

  • Run perfmon on the existing systems. This gives you an idea of CPU and memory use.
  • Check tempdb growth and user database growth.
  • Find out how big the files want to grow. Make sure none of the existing files gets shrunk over the course of a month, then note the end results (a simple snapshot sketch follows this list).
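
One low-tech way to capture that last baseline is to snapshot file sizes into a table on a schedule and compare the first and last captures a month later. The table name and schedule here are just placeholders:

    -- A simple DIY baseline: run the INSERT on a schedule (daily or weekly)
    -- for a month, then compare each file's first and last captured size.
    -- The table name is made up; put it wherever you keep your DBA tooling.
    CREATE TABLE dbo.FileSizeBaseline
    (
        captured_at   datetime2      NOT NULL DEFAULT SYSDATETIME(),
        database_name sysname        NOT NULL,
        file_name     sysname        NOT NULL,
        size_mb       decimal(18, 2) NOT NULL
    );

    INSERT dbo.FileSizeBaseline (database_name, file_name, size_mb)
    SELECT DB_NAME(mf.database_id),
           mf.name,
           mf.size * 8.0 / 1024           -- sys.master_files reports 8 KB pages
    FROM sys.master_files AS mf;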

Anomalies

Next, I look for situations where the resources might be temporarily maxed out. For example, if you have a very large database, it’s a good idea to tune that database’s backup to be fast and compact. But tuned backups use up a ton of resources. You’re answering two questions: “Can I max out the resources for that database? Will it impact other databases on the system?”
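
As a rough example, here’s what a backup tuned for speed and size can look like. The database name, paths, and numbers are placeholders to test against your own hardware; every one of those knobs buys speed by spending CPU, memory, or I/O that other databases might want:

    -- Hypothetical tuned backup: stripe across two drives, compress,
    -- and throw more buffers and larger transfers at it.
    BACKUP DATABASE BigImportantDB
    TO DISK = N'E:\Backup\BigImportantDB_1.bak',
       DISK = N'F:\Backup\BigImportantDB_2.bak'   -- striping across drives
    WITH COMPRESSION,                             -- smaller backup, more CPU
         BUFFERCOUNT = 50,                        -- more buffers = more memory
         MAXTRANSFERSIZE = 4194304;               -- 4 MB per transfer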

Known resource hogs

Consider the same thing for other large operations: reindexing jobs, DBCC CHECKDB runs, and big reports.

A database that needs a lot of resources either needs a very powerful server or a dedicated server. Even when you’re consolidating, sometimes a database has to be on its own.

Alternatively, you might need to consolidate a lesser-used database with a top-tier database. If resource contention is a big concern, you could still put the databases on the same server. Just use separate instances, and limit the lesser instance’s resources.
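
One common way to fence in that lesser instance is to cap its memory (and, if needed, its CPU affinity). A minimal sketch, assuming an arbitrary 8 GB cap:

    -- Run against the *lesser* instance. The 8 GB figure is just an example;
    -- size it so the top-tier instance and the OS keep what they need.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    EXEC sp_configure 'max server memory (MB)', 8192;
    RECONFIGURE;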

In the end, allocating resources is like index tuning: it’s both an art and a science.

7. Plan physical resources

[Video clip: A brief, sad tale of disk impacting performance.]

Now it’s time to plan physical resources. Lay out disks logically, according to your performance and size requirements.

  • A big, important database should get dedicated disks (data and log).
  • Group sets of lesser databases together on disk. When possible, separate data and log files onto different disks.
  • TempDB always gets its own disks.
  • Remember to account for databases that are more volatile in size. We identified those databases back in step 4 (“data requirements”). These include systems with large imports or semi-regular archival processes.

Side note: Set file growth to consistent, fixed increments: 512MB, 1GB, etc. Never use percent growth!
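
As a sketch of both ideas (separate data and log drives, fixed-size growth), here’s roughly what that looks like at CREATE DATABASE time. The database name, drive letters, paths, and sizes are all placeholders:

    -- Hypothetical layout: data on one drive, log on another, fixed growth increments.
    CREATE DATABASE SalesApp
    ON PRIMARY
    (
        NAME = N'SalesApp_data',
        FILENAME = N'D:\Data\SalesApp.mdf',
        SIZE = 10GB,
        FILEGROWTH = 1GB              -- a fixed increment, never PERCENT
    )
    LOG ON
    (
        NAME = N'SalesApp_log',
        FILENAME = N'L:\Log\SalesApp.ldf',
        SIZE = 2GB,
        FILEGROWTH = 512MB
    );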

[Video clip: databases that grow and shrink a lot.]

8. And, finally

No plan of operations extends with any certainty beyond the first encounter with the main enemy forces. 

[Commonly: “No plan survives first contact with the enemy.”]

-Helmuth von Moltke, 1871 (via QuoteInvestigator)

We can plan everything beautifully, but reality will always get in the way.

Once you’ve gone through the seven steps above (or your version of them), there’s still one more question: “Do we need to separate anything onto different servers?”

We all want to over-consolidate…to try to bring 100 servers down to two or five. But if you can get 100 servers down to 75, you’ve won. Don’t over-consolidate; don’t try to impress somebody or go too far.

It’s easy to get too many resources on a box…to make the box too big. Big servers tend to be more expensive. And a very big, on-premises server is more impactful when you have to patch, maintain, or replace a part. When something goes wrong, it goes wrong for everything on the server.

[Video clip: P.S. Don’t overdo it.]

[Video clip: Five minutes of Sean being truly joyful about consolidations]


Recorded event – “Learn SQL Server: Consolidation!”

This event has passed, but you can watch the recording here: “Learn SQL Server: Consolidation.”

[Image: “Learn SQL Server: Consolidation!” event graphic – February 17, 11:00 AM CST]