We just upgraded the Minion Maintenance suite!

Welcome to the next version of Minion Maintenance! Now you have a single installer for all three modules, plus new features and fixes for the entire backup and maintenance suite!

If you take a look at the page for Minion Backup, Minion Reindex, or Minion CheckDB, you’ll see that the download now says “Minion Maintenance 1.1”. Welcome to the next version!

Now you have a single installer for all three modules, plus new features and fixes for the entire backup and maintenance suite!

Minion Backup and Minion CheckDB got some important fixes, but it’s Minion Reindex that’s the real star of this release. MR was added to the unified installer, jumped on the table-based scheduling bandwagon, and received an overall shine, polish, and upgrade. See “New in Reindex”, below!

Download and install

First, download Minion Maintenance from any of the Minion Backup, Minion Reindex, or Minion CheckDB pages…it’s the same system no matter where you download it!

Next, extract the files and install using the Minion Install Guide document that’s included in the download.

New in Reindex

New features:

  • Reindex heaps: A new setting in the settings tables lets you choose whether or not to reindex heaps.
  • Database groups: Minion Reindex now supports database groups for includes and excludes.
  • Updated internals: MR now uses the DBMaintSQLInfoGet function, like MC and MB do.
  • Scheduling: The Minion.IndexMaintSettingsServer table provides table-based scheduling, just like Minion Backup and Minion CheckDB.
  • Install: New Powershell installer!
  • “Current” views: Minion.IndexMaintLogCurrent and IndexMaintLogDetailsCurrent show you the most recent set of activity. (Quick example below.)
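
For example, to eyeball the latest run (assuming both views live in the Minion schema, as in a default install):

    -- The most recent maintenance run, at the summary and detail level:
    SELECT * FROM Minion.IndexMaintLogCurrent;
    SELECT * FROM Minion.IndexMaintLogDetailsCurrent;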

Issues resolved:

  • Fixed: Some errors were not captured in the log tables.
  • Fixed: Japanese/international characters in a table name caused errors.
  • Fixed: Performance issues.
  • Fixed: Issue with object names containing brackets. If an object name includes square brackets, it caused a syntax error. We now surround object names with double quotes instead of brackets. (See the illustration after this list.)
  • Fixed: Issue with running PrepOnly then RunPrepped immediately after, in the same window.
  • Fixed: Formatting issues in Minion.HELP.
  • Fixed: Updated code “section” comments.
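
Here’s a quick illustration of why brackets alone are fragile – this is the general technique, not Minion’s internal code:

    -- A name that contains a closing bracket:
    DECLARE @name sysname = N'Customer]2017';

    -- Naive bracket-wrapping produces a string that breaks as an identifier:
    SELECT N'[' + @name + N']' AS NaiveBrackets;    -- [Customer]2017]

    -- Double-quoted identifiers sidestep the problem
    -- (requires SET QUOTED_IDENTIFIER ON, which is the default):
    SELECT QUOTENAME(@name, '"') AS DoubleQuoted;   -- "Customer]2017"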

New in Backup

  • Added: Delete for the SyncCmds table.
  • Fixed: Creating “Christmas tree folders” when using robocopy with multiple files.
  • Fixed: Missing a couple of entries in the InlineTokens table.
  • Fixed: Some sync commands being written to SyncCmds table even though SyncLogs is 0.
  • Fixed: Wasn’t logging properly when a DB is in an AG.
  • Fixed: Formatting issues in Minion.HELP.
  • Fixed: Wrong restore location when restoring to a named instance and the path is being converted from a local drive.  The instance name was being included in the UNC path.

New in CheckDB

  • Fixed: Data explosion for remote CheckDB results pull. Remote runs were pulling all of the CheckDBResult data instead of just for the current run.
  • Fixed: ServerName not populating correctly for remote CheckDB for the Minion Enterprise import.
  • Fixed: MinionTriggerPath wasn’t set correctly for the base folder.
  • Fixed: Databases with periods or other special characters in the name might not import into Minion Enterprise for central reporting. We changed the delimiter for the import process to accommodate this.

Download Minion Maintenance from any of the maintenance pages today: Minion Backup, Minion Reindex, or Minion CheckDB.

Get table properties in Azure SQL Database

Even in Azure SQL Database, you need to know certain things about your tables that can be hard to find. We’ll show you how to get that data.

Even in Azure SQL Database, you need to know certain things about your tables.  For instance, you need to audit table size and row count information, as well as schemas and more.
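
If all you need is row counts and space used, by the way, plain T-SQL still works in Azure SQL Database. Here’s one way, using the standard sys.dm_db_partition_stats view (it needs VIEW DATABASE STATE permission, and reports sizes in 8KB pages):

    -- Rows and reserved space per table, summed across partitions.
    SELECT s.name AS SchemaName,
           t.name AS TableName,
           SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [RowCount],
           SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables  AS t ON t.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    GROUP BY s.name, t.name
    ORDER BY ReservedMB DESC;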

Data not available

Recently, I found that I could only get some of the properties I wanted when I tried to get table properties from Azure SQL Database using SMO.  I could get table name, schema, and many more, but some random properties – like RowCount and DataSpaceUsed – returned NULL.

After racking my brain for about half a day, I found the problem.  Before getting into that, however, here’s the shape of the sample code that demonstrates it: connect to the Azure database with SMO, loop through its tables, and read properties like RowCount and DataSpaceUsed – on an affected setup, those come back NULL.

Simple solutions

The fix turns out to be simple: move to the SQL Server 2016 version of SMO.  Something in previous versions of SMO prevents them from reading these properties from Azure databases.  (I’m always able to pull that data just fine from on-premises servers, though.)

But if you’re on SQL 2014, how do you get the 2016 SMO objects?  The easiest way is to download the 2016 Feature Pack.  It won’t say “SMO” anywhere, and there are a lot of packages to choose from.  What you’re looking for is SharedManagementObjects.msi.

Also, starting in SQL 2017, SMO is available as a separate NuGet package (Microsoft.SqlServer.SqlManagementObjects).

So even though you’ve got your data in Azure SQL Database, you don’t have to throw up your hands at all admin activity.  Many companies say they don’t need DBAs any more, because their data is in the cloud.  Nothing could be further from the truth.  And hopefully this will get you one step closer to where you need to be with monitoring those environments.

Here’s another article on the need for monitoring your environment, and not just performance.

Pay Yourself First

The biggest problem I see is the huge number of DBAs who have let themselves fall behind. Live in interview mode, and pay yourself first, and you won’t fall behind!

I’ve done a ton of interviewing in my career, both as the interviewee, and as the interviewer.  The biggest problem I see is the huge number of DBAs who have let themselves fall behind. How does this happen?

Middle of the pack

Once you’re a more or less established DBA, you have yourself a job, and you go in every day and you do your duty.  You back up what you need, do the restores you need, fix replication, look at a couple bad queries, and set up a new server for the latest project.  You’re busy as hell at work, and there’s no end in sight to all the tasks and projects and problems.

That’s the problem with being a DBA these days: you’re always swamped with problems. (The real reason is that companies are absolutely dedicated to not taking data seriously, but that’s another article entirely.)

Losing ground

So you work, and you get further and further behind the learning curve because there’s no time to do anything for yourself: no time to learn anything new, pick up a book, watch a tutorial, or even practice what you already know.  You’re always putting out fires!

Then, when it’s time to interview for a new job, you find yourself cramming in the last couple of days to try to bone up on your knowledge.  Speaking as someone who’s interviewed a lot of DBAs: this definitely shows! Anyone who conducts any number of interviews can tell when you’re just barely recounting something and when you know the topic cold.

Live in interview mode

Okay, I have a radical, two-step plan for your professional development. Here we go:

  1. Stop cramming for interviews like you’re trying to pass a test.
  2. Live in interview mode.

Interview mode (n.) – The practice of conducting your daily work life as if an interview could happen unexpectedly, at any time.

Take time to study every day.  It doesn’t matter how much, but I think 30 minutes isn’t too much to ask.  Even if you’re not studying to interview, your skills will get rusty when you don’t put them into practice for a while. Chances are, your company won’t ever give you the time to do it, so you have to take that time yourself.

Pay yourself first

Every day when you come in to work, take 30 minutes to work on something for you.  Learn how to do partial restores.  Learn how to set up an Availability Group.  Learn how to add a primary key to a table using T-SQL.  Learn XML, or JSON, or HTML.  It doesn’t matter.  Pick something up that you want, or something that you know you lack.
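
To make that concrete, here’s the kind of bite-sized exercise I mean – adding a primary key to an existing table (the names are made up for illustration):

    -- A toy table that was created without a key:
    CREATE TABLE dbo.Employee
    (
        EmployeeID int NOT NULL,
        FirstName  varchar(50) NOT NULL
    );

    -- Today's 30 minutes: add the primary key after the fact.
    ALTER TABLE dbo.Employee
        ADD CONSTRAINT PK_Employee PRIMARY KEY CLUSTERED (EmployeeID);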

I call this paying yourself first, which is actually a financial term:

Pay yourself first is a phrase referring to the idea that investors should routinely and automatically put money into savings before spending on anything else. – InvestingAnswers.com

When it comes to your career, make sure you routinely and automatically put time into your development before spending it on anything else.

When someone comes to your desk and asks you to do something, tell them that you’re doing your daily checklist, and you’ll be with them in a few minutes.  (People at work don’t realize that daily checklists have gone out of style, so they’ll leave you alone.)  Your company won’t give you the time to do this, so you have to take the time.

Study first thing in the morning, before things get started. Once the day really gets going, it’s hard to even remember studying, much less to find the time.

You may not be able to do it every day.  There may be some days when you walk in and there’s some emergency that honestly takes priority.  That’s okay, take care of your emergency.  But outside of that, there’s very little that can’t wait 30 minutes, especially when you’re “doing your checklist to make sure things are okay”.

So take some time to be good to yourself before things get crazy every day.  Improve yourself, live in interview mode, and pay yourself first.


Photo by Tim Gouw on Unsplash

Coding Naked

Let’s talk about CASE tools, and why coding for databases without one feels so… exposed.

I know the title sounds like clickbait, but it’s how I’ve felt for a long time.  Why? Because of CASE tools.

CASE stands for Computer-Aided Software Engineering. Here, I’m referring to data modeling tools like ERwin and ER/Studio.  It disheartens me that the industry as a whole has gotten away from using these tools. In this world of rushing software to market, we’ve forgotten that the data-driven marketplace starts with the word ‘data’.

A brief history of CASE

I started in IT in the 90s when things moved a little slower and we actually took time to model databases.  [Editor: I somehow managed to resist inserting a gif of an old man and a cane.]  So, I got used to using CASE tools, and by the year 2000 I was quite adept at using the different flavors of these tools.  And I’m going to tell you, I miss them greatly.

The word domain has had a lot of play in IT, because it’s a nice generic container for many things. In the 1990s, a domain was a universal concept when working with a CASE tool. In SQL, we have another word for domain: column.  Old timers like Joe Celko get really irate when you call a domain a column, because they know what it’s for.  But we’ll get to that in a minute. First, let’s talk about the difference between a domain and a column.

Column vs Domain

A column is a single property in a table, which holds the same type of data throughout all rows.  Think of an Excel spreadsheet that stores a list of names.  That’s how data is represented to us, even in SQL Server Management Studio (and practically every other query tool out there).  We see the data as these little on-the-fly spreadsheets.  But, that would be a very myopic view of what a domain is.

In data modeling, a domain is implemented as a column. But, the domain itself is an organizational structure for column definitions.  A single domain can apply to individual columns in dozens of tables, if you like. Let’s take an example.

Let’s say you’re writing a SQL application, and you sit down to create a table for it. In that table, you need a FirstName column.  After some thought, you give that column a datatype of VARCHAR(50).  You then get pulled away, and don’t get back to the code for a week or two. When you return, you pick up where you left off and start creating another table.  The second table also needs a FirstName column. Because it’s been weeks, you forget that the first table’s FirstName column is VARCHAR(50), and you’re in a different mindset today, so you give this new FirstName column VARCHAR(75).

This sort of thing happens all the time, especially in projects with multiple developers. See how this app is just begging for bugs?  These bugs can be very difficult to track down.
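
In T-SQL terms, the scenario looks something like this (hypothetical tables, of course):

    -- Week 1:
    CREATE TABLE dbo.Customer
    (
        CustomerID int NOT NULL PRIMARY KEY,
        FirstName  varchar(50) NOT NULL
    );

    -- Weeks later, in a different mindset:
    CREATE TABLE dbo.Prospect
    (
        ProspectID int NOT NULL PRIMARY KEY,
        FirstName  varchar(75) NOT NULL   -- silently inconsistent with dbo.Customer
    );

    -- Moving data between the two can now truncate names, or fail outright:
    INSERT INTO dbo.Customer (CustomerID, FirstName)
    SELECT ProspectID, FirstName          -- varchar(75) squeezed into varchar(50)
    FROM dbo.Prospect;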

Developing with CASE and domains

This is where domains and CASE tools come in.  In a CASE tool, you can define FirstName as a domain.  That domain is a single entity where you define the datatype, length, nullability, and often even more properties.  Then, when you’re creating tables in your CASE tool, you associate each column with a domain, instead of assigning datatypes.

With a defined domain, every column you associate with that domain automatically gets consistent properties.  Furthermore, if you have a change of heart and decide to change FirstName from varchar(50) to nvarchar(100), you change it at the domain level. Then all the columns that use it pick up the changes.  (I’ve always liked how meta it is that you’re normalizing the data model process itself.)
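
As an aside, SQL Server’s closest built-in cousin to a domain is the user-defined data type. It’s far weaker than a real CASE domain – there’s no way to alter it in place once tables use it – but it does centralize the definition:

    -- Define the "domain" once:
    CREATE TYPE dbo.FirstNameType FROM nvarchar(100);

    -- Associate columns with the type, instead of repeating the datatype:
    CREATE TABLE dbo.Author
    (
        AuthorID  int NOT NULL PRIMARY KEY,
        FirstName dbo.FirstNameType NOT NULL
    );

    CREATE TABLE dbo.Editor
    (
        EditorID  int NOT NULL PRIMARY KEY,
        FirstName dbo.FirstNameType NOT NULL   -- guaranteed to match dbo.Author
    );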

You can see why the old-timers insist on calling it a domain: because it is.  It’s not a single instance of a FirstName…it’s the domain of FirstName, which every FirstName column will take its properties from.

Fill your data applications with domains, and not columns.  Now that you’ve got a domain for FirstName, you can create domains for every other ‘column’ in your different data projects.  Even better, you can reuse these domains in your different projects, so FirstName is the same in every application you write!  This is what we call a data dictionary.  It’s the right way to do things because it forces you to think about the right data type for the job, and then it enforces it throughout your entire data enterprise.

What’s more, publish your data dictionary! Anyone writing stored procedures and other code against your databases should use that data dictionary to determine what datatypes they need for parameters, variables, temporary tables, and the like.  With any luck, they’d look up those values in the database anyway, so you may as well expose them all in a single, searchable place.
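
Even without a CASE tool, the database itself can serve as that lookup. For example, here’s one way to find every FirstName column and its type, using the standard catalog views:

    -- Where is FirstName defined, and with what type?
    SELECT SCHEMA_NAME(t.schema_id) AS SchemaName,
           t.name  AS TableName,
           c.name  AS ColumnName,
           ty.name AS DataType,
           c.max_length,                 -- in bytes (nvarchar counts double)
           c.is_nullable
    FROM sys.columns AS c
    JOIN sys.tables  AS t  ON t.object_id = c.object_id
    JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
    WHERE c.name = N'FirstName'
    ORDER BY SchemaName, TableName;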

Coding naked, without CASE

All of this is to show how naked I feel when I code these days.  You see, I don’t have access to a CASE tool any more, because they’re so expensive.  The companies that sell them think an awful lot of the software, while the companies I’ve worked for don’t. So, I haven’t been able to talk anyone into footing the bill in a long time.

The industry as a whole doesn’t see the value in not rushing software out the door, so nobody takes time to actually model data anymore.  [Old man cane gif, resisted again. -Editor] So, the tools should be cheaper because demand isn’t very high.

All I know is that my projects have suffered as a result.  I’ve had silly bugs and inconsistencies that are pointless, because I was in a different frame of mind when designing different tables, and didn’t realize I was creating my columns with different properties.  Until I can solve my CASE tool problem, it’s like coding naked because I’m exposed to bugs, data anomalies, performance issues due to implicit conversion issues, and probably more.

Let’s get back to a time where we can slow things down and start doing things right.  And vendors, stop treating CASE tools like a precious commodity.  Price them to own and you may see an increase in sales.

"Old man yells at cloud."
Close enough. -Ed.

Have you met Hiro?

Six things to know about Minion Enterprise: 1. You love T-SQL. 2. ME is an enterprise product. 3. No dropping objects on servers. 4. No event storming. 5. Log everything. 6. And it takes just five minutes.

Hi, I’m Hiro! You’ve probably seen me around the Internet now and again. (I do like that picture; it makes me look extra-smart.)

Okay, since we’re really, actually talking for the first time here, let me tell you a few things about me.

1. I’m a robot.

And yes, I can fly. What’s the point of being a robot-spaceship hybrid if I can’t fly?

2. I love T-SQL

I love data in general, really. But I like T-SQL specifically to get at the data. I know, the picture above shows me looking at graphics, but they had me pose with those – they’re actually these removable stickers – just for the photo shoot. You’ll almost never find me using a GUI, because T-SQL is so much more efficient. I can type, like, 1,000 wpm, but I can’t click on a GUI much faster than anyone else.

You’re a DBA, right? You know what I’m talking about.  A GUI limits what can be done with the data, anyway. T-SQL lets me, and you, and everybody, query anything at all with the data available. And let me tell you: I collect a LOT of data, and I keep it in tables. (No XML, no flat files, no proprietary formats. You’re totally welcome.)

I also put together some great stuff, like stored procedures that show you Active Directory permission chains, and alerts for low disk space, things like that.  You should hang out sometime and see all the stuff I made.

3. I like a high-level view

It probably comes from my robot superpower, which is flying.

I’ve seen that a lot of DBAs use tools that make you do a task one-at-a-time, server by server. That takes forever.  Me? I like to report and alert and manage a bunch of SQL Server instances at once. I’ve queried 10 servers at a time, and I’ve changed sp_configure options for 200 servers at once.  It’s what we at MinionWare call the set-based enterprise.

Okay, like with security. I don’t really know why you have to spend so much time on researching or scripting or cloning permissions. For me, it’s effortless. I can make your new junior DBA look exactly like the previous one, down to object level permissions, for all your servers. It’s just one T-SQL command!

4. I don’t create objects on managed servers

That’s a big pet peeve of mine. Look, if I dropped jobs or triggers or something out on managed servers, and then I needed to upgrade myself? That means the team would have to go through this big process and make sure there were plans and rollback plans and on and on. No good. This way, though, I sit on a single server, and any upgrade or change is just effortless.

5. I DO NOT LIKE event storming

One of my main jobs is to alert the team when something passes a threshold. Like, if the network goes down overnight and it messes up backups, you really need to know about that! But I think it’s frankly spammy to send dozens or hundreds of emails about it. Instead, I like smart, condensed alerts. I like to provide exceptions, adjustments, and deferments. Smart alerts. No event storms.

6. I log everything, so you can report on anything

I told you I love data, right? I follow the maxim, “log everything”. I’m really, really clever.  (I’m not egocentric; I’m a robot!) I’m clever enough to give you good reports and views, things you need. But I’m not you, so I won’t be able to come up with every clever thing you might think of.

To make up for that, I log everything. Everything I can think of that might possibly be useful, I collect and store. And I’ve spent lots of time thinking up understandable table names, so you can find the data easily. Again, you’re totally welcome.

7. I’m easygoing

It takes about five minutes to get me settled in and configured.  I don’t like to be a bother.  And once I’m in, I’ll just do my job! I mean, if you have time we can totally hang out, and I can tell you all about what I do. But if you’re busy, I’ll just watch over your systems for you, and send you alerts, and collect data you might want later for audits or disk space projection.

Seriously, I’m the most chill, hardworking coworker you’ll ever have. Download me today, and I can show you what I mean.

Reader, this is Hiro. Hiro, Reader.

Your Service Level Agreement is a Disaster

As a DBA, you’re in charge of getting systems up and running quickly in the event of an emergency. This is all right and proper, right up until you start defining SLAs. Let’s see what went wrong.

As a DBA, you’re in charge of keeping the systems healthy, and getting them back up and running quickly in the event of an emergency. This is perfectly right and proper, right up until you start defining a service level agreement.

A Service Level Agreement (SLA) defines the level of service you agree to provide, to get the system back up after a downtime. An SLA is usually expressed in terms of time. So, if you have a two-hour SLA, that means you agree that when there’s a grave issue, you’ll have the system back up within two hours.

But how did you get that two-hour SLA in the first place? Usually, it goes like this:

  1. The customer explains that they must have the database up within two hours of a problem.
  2. You don’t see any problems with that.
  3. Maybe there’s even a written agreement that you sign, knowing full well that with your experience, you can easily handle a downed database in that time.

Like so many things, this sounds perfectly reasonable on the surface, but doesn’t hold up once it comes in contact with reality.

SLAs in the Real World

SLAs are often poorly thought-out, and very rarely tested. As a matter of fact, most companies don’t even have SLAs; they have SLOs. An SLO is a Service Level Objective – something you’re striving for, but don’t know for sure whether you can achieve. SLOs allow you to have some kind of metric, without bothering to test in advance whether the objective is even possible.

This lack of testing is the primary barrier to achievable SLAs. Lots of factors can impact your ability to get a system up and available after an issue. Let’s take a close look at just two of those factors: hardware failures, and data corruption.

Hardware Failures

When a hardware failure causes an outage, it can take a long time to get replacement parts. Companies sometimes work around this by keeping replacement components on hand for important systems – but only sometimes. If your company has no replacements lying around, you are completely at the mercy of the hardware vendor. This can – no, will – demolish your SLA.

Now, maybe your company has an SLA with the parts vendor, and maybe they don’t. Typically, that vendor SLA will be something like a four-hour replacement window…but that’s just when they agree to show up! The vendor can’t promise that the parts will be installed, configured, and running in that time.

So, your two-hour SLA won’t work with the four hours it takes just to get the hardware, and the two (or more) hours getting everything set up. Oh yes, plus the time to diagnose the issue in the first place, possibly reinstall SQL, restore the database, troubleshoot additional issues, or all of the above. All of this puts you at three times (or more) the SLA you agreed to support.

Conclusions:

  • Consider how long it takes to get replacement parts.
  • If your customer is important enough, keep replacement parts in-house.
  • Account for extra time, for troubleshooting, installation, configuration, restoring, more troubleshooting, and anything else that can come up.

Data Corruption

Database corruption is another outage scenario. Depending on the level of corruption, it may take anywhere from a few minutes to a few hours to diagnose and fix it.

For now, we’ll assume that it’s just a table that’s gotten corrupted. Now, is it the data, or just an index? If it’s an index, depending on the size of the table, it could be a quick fix, or it could be a couple of hours. However, if it’s a table, then you may have to restore all or part of a database to bring it back. That brings a whole host of other issues into the fray, like:

  • Do you have enough space to restore the database somewhere?
  • How long will it take?
  • Is the data even onsite?
  • How will you get the data back into the production table?
  • Do you bring the system down while you’re fixing it?

Of course, it could also be the entire database that’s down, in which case you will need a restore (assuming the corruption wasn’t present in the last backup). A few of the things you must consider:

  • Do you know how long that restore will take?
  • Have you done what is necessary to make sure you can restore quickly by tuning your backups, making sure the log is small, turning on IFI (Instant File Initialization), etc.?

Without some foresight, you could easily spend that two-hour SLA window zeroing out 90GB of log file. Your 1.5 hours of data restore will put you quite a bit outside of your agreement.
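
One rough way to ballpark a restore is from your own backup history in msdb – restore throughput tends to track backup throughput on comparable hardware, so treat this as a floor, not a promise:

    -- Recent full-backup durations and sizes for one database:
    SELECT TOP (10)
           bs.database_name,
           bs.backup_start_date,
           DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS DurationSec,
           bs.backup_size / 1048576.0 AS SizeMB
    FROM msdb.dbo.backupset AS bs
    WHERE bs.database_name = N'DB1'     -- your database here
      AND bs.type = 'D'                 -- D = full backup
    ORDER BY bs.backup_start_date DESC;

(And remember that Instant File Initialization only skips zeroing data files; log files are always zeroed, which is exactly why that 90GB log hurts.)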

Conclusions:

  • Make sure you have space to restore your largest database, somewhere off the production server.
  • Implement a good CheckDB solution, plus alerting.
  • Practice various recovery scenarios to see how long they take.
  • Make sure your backups are in order.
  • Practice database restores, and get your backups tuned; see the sketch after this list. (Tuning your backups means you can tune your restores, too!)
  • Take these practice sessions into account when you make your SLAs.
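
A practice run doesn’t have to touch production, either. Restore the backup under a throwaway name – the paths and logical file names below are placeholders, so check the RESTORE FILELISTONLY output for your real ones:

    -- See what files are inside the backup first:
    RESTORE FILELISTONLY FROM DISK = N'\\SERVER\Backups\DB1_FULL.bak';

    -- Restore under a new name, relocating the files; STATS prints progress.
    RESTORE DATABASE DB1_PracticeRestore
    FROM DISK = N'\\SERVER\Backups\DB1_FULL.bak'
    WITH MOVE N'DB1'     TO N'D:\Data\DB1_Practice.mdf',
         MOVE N'DB1_log' TO N'L:\Logs\DB1_Practice.ldf',
         STATS = 10;                    -- progress message every 10 percent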

The Solution

The conclusions above are a good start, but not at all the complete picture. If you’re keeping a single SLA for any given server, you’re doing yourself and your customers a huge disservice.

First, define separate SLAs for the different types of failure, and define what each specific failure looks like. For instance, if you define an SLA for database availability, define what ‘available’ means. Does it mean people can connect?  Does it mean that major functions are online?  Does it mean absolutely everything is online?  Does it include a performance SLA?  I’ve seen performance SLAs included in downtime procedures, because some databases are so important that if the performance isn’t there, the system might as well be offline.

Next, review SLAs regularly. So, you’ve reasonably determined that you can accommodate a four-hour SLA for the DB1 database. But what about when the database grows?  Are you going to put in an allowance for the database tripling in size?  Surely you can’t be expected to hold the same SLA two years later that you did when the database was new.

Finally, test, test again, and then test one more time just to be sure. In fact, you should be testing your recovery procedures periodically so you can discover things that may go wrong, or lengthen the process. If you promised a two-hour recovery window and you can’t get your recovery procedures under that time, then you’ve got some re-working to do, don’t you?  Don’t just throw in the towel and say you can’t do it, because contracts may already be signed and you may have no choice but to see that it works. Maybe you’re really close to being able to hit the SLA, and you just have to be creative (and maybe, work for a company that’s willing to spend the money).

Celebrate! Second Anniversary Giveaway (is now over)

Last year, we gave away free licenses of our SQL Server management software for the company birthday! Let’s do it again, right now.

Edit: The giveaway is now over. If you didn’t make it in time, you should still get a 30-day trial of Minion Enterprise, which will cover ALL your instances, plus the free modules Minion Backup, Minion Reindex, and Minion CheckDB.

Take a look at the Introduction to Minion Enterprise webinar recording!

The Giveaway…

Two years ago, we officially became MinionWare and launched the absolutely masterful SQL Server management solution, Minion Enterprise. We have talked to literally hundreds of people at dozens of database events, meetings, webinars, conferences – you name it!  Even better, clients are raving about the software.

Our business model is still: Give away as much as you can.  Our world-class maintenance tools - Minion Backup and Minion Reindex - are still free, and this year we added Minion CheckDB!

Last year, we gave away Minion Enterprise. This year, I thought we’d do something different. We talked about it at length…then decided that the world needs more free Minion Enterprise.

Have Some Free Enterprise-Level Software

From now until 5:00pm (Central Time) on July 20, 2017, anyone who enters receives three free Minion Enterprise licenses.

We’ve been thrilled with the reception and feedback we’ve gotten this year. So, let’s do this one more time. Maybe next year we’ll write thank you cards, instead.

Giveaway Rules

Of course, there are a couple of caveats, so see the restrictions below:

  1. Sign up before 5:00pm Central Time on July 20.  It’s a good idea to just do it now, because after July 20, the offer is over.
  2. Current version only.  Free licenses are eligible for patches and service releases of the current version, but not upgrades.
  3. Includes three months of support. After that, a support contract will need to be purchased if you want continued support.
  4. Additional licenses are available for purchase.
  5. Licenses are not transferable to any other companies.
  6. Sorry, no repeat giveaways! If you or your company have won free licenses from us before, you can’t do it again.

Enter Here

Fill out this form for your free software!

Databases need more than performance monitoring

There is more to databases than performance monitoring. So why are the most popular DBA tools, performance monitoring tools? They don’t even begin to cover the vast majority of DBA responsibilities. What we need is environment monitoring.

Image: performance monitoring pressure gauge
You’re gonna need more info

When you read a job spec for “database administrator”, it does not simply say:

  • Performance Monitoring

It says something like:

  • Database Performance Tuning
  • Database Security
  • Promoting Process Improvement
  • Problem Solving
  • Presenting Technical Information
  • Database Management
  • Data Maintenance
  • Operating Systems

(List taken in part from Hiring.Monster.com.)

So why are the most popular DBA tools, performance monitoring tools? Those are great, sure, but they don’t even begin to cover the vast majority of DBA responsibilities. What we need is environment monitoring.

Environment Monitoring

I don’t have a website I can link to, to give you a definition of SQL environment monitoring. That’s because we’ve been defining it ourselves, for the past nine years.

An environment monitor is a system that allows administrators to examine the overall and specific health of database instances.

An environment monitor should touch on performance, yes. It should also:

  • Manage and monitor security
  • Speed audits
  • Make a majority of common DBA tasks effortless
  • Collect and present as much system information as possible, including service packs, disk space, errors, and more
  • And more
  • And also, lots more

Minion Enterprise is far more than glorified maintenance

This is exactly what we built Minion Enterprise for: to monitor and manage the environment. To take away the Server-By-Agonizing-Server aspect of administration by introducing the “set-based enterprise” approach. To automate everything that can be automated, and to make data available to the DBA on everything else.

Get a trial and a demo, and you’ll see exactly what we mean. Monitor your environment, not just your performance.

Distinguish backup file names or pay the price!

So far, no one has found exercise to be beneficial to servers. Purposeless repetitive motion may be good for human muscles, but your SQL Server instance experiences no gain for the pain. Here’s a good example: taking useless backups. Let me explain…

Running around a track
SQL instances shouldn’t do this

So far, no one has found exercise to be beneficial to servers. Purposeless repetitive motion may be good for human muscles, but your SQL Server instance experiences no gain for the pain.

Here’s a good example: taking useless backups.

(“Did she say useless backups? I’ve never heard of such a thing!” Yeah, just wait.)

Backup file names are critical

Traditionally, backup files are named after the database and the backup type, and given a timestamp. So you’ll see something like master_FULL_20170101.bak. If you like to stripe your backups, you might name the files something like 1of5MasterFull20170101.bak, and so on.
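
In plain T-SQL, stamping the file name looks something like this (the path is a placeholder):

    -- Build a dated file name, e.g. master_FULL_20170101.bak:
    DECLARE @file nvarchar(260) =
        N'\\NAS1\SQLBackups\master_FULL_'
        + CONVERT(nchar(8), GETDATE(), 112)   -- yyyymmdd
        + N'.bak';

    BACKUP DATABASE master TO DISK = @file;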

But I have run across shops that take backups without bothering to timestamp the file name: master_FULL.bak.  These shops either overwrite each backup file with each successive backup (using INIT and FORMAT), or add to the backup set (which I find mildly annoying, but to each their own).

The problem with using the same backup file name over and over comes when you have a cleanup mechanism that deletes old backup files!

The same-name cleanup issue

Let’s say that in your shop, you have Minion Backup (MB) installed and running with the following options:

  • INIT is enabled
  • FORMAT is enabled
  • Backup file retention (in hours) is 48, so we’ll keep 2 days’ worth of backups
  • Backup name is set to %DBName%%BackupType%.bak, which works out to DB1Full.bak for a full backup of DB1.

Note: For more information on the %DBName% and %BackupType% inline tokens, see “About: Inline Tokens” in our help center.

Here is what happens:

  1. On day 1, MB takes a backup of DB1, to \\NAS1\SQLBackups\DB1Full.BAK.
  2. On day 2, MB takes a backup of DB1, which overwrites the file \\NAS1\SQLBackups\DB1Full.BAK.
  3. On day 3, MB takes a full backup of DB1 (which overwrites the same file). And then the delete procedure sees that it has a file from day 1 (>48 hours ago) that needs deleting. And so it deletes \\NAS1\SQLBackups\DB1Full.BAK. Remember, this is the file that MB has been overwriting again and again.
  4. On day 4, MB takes a backup of DB1, to \\NAS1\SQLBackups\DB1Full.BAK. Then, it sees that it has a file from day 2 that needs deleting, and so it deletes \\NAS1\SQLBackups\DB1Full.BAK – the file it just wrote.

See? From day 3 on, we’re creating a backup file just to delete it again!

Fixing the issue if you have MB

One excellent way to figure out if you have this problem is to notice that, hey, you don’t have any backup files. Another indicator in Minion Backup is if you see “Complete: Deleted out-of-process. SPECIFIC ERROR:Cannot find path ‘…’ because it does not exist.” in the Minion.BackupFiles Status column.

But the real smoking gun is if the backup file names in Minion.BackupSettingsPath have no timestamp. Here’s how to fix that:
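
The exact column and token names vary by version, so take this as a sketch (the FileName column and %Date% token below are assumptions – check Minion.HELP and your Minion.BackupSettingsPath schema for the real ones). The idea is simply to add a timestamp token to the file name:

    -- Hypothetical sketch: add a timestamp token to the backup file name.
    UPDATE Minion.BackupSettingsPath
    SET    FileName = N'%DBName%%BackupType%%Date%.bak'  -- %Date% is an assumed token
    WHERE  FileName = N'%DBName%%BackupType%.bak';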

That will give each file its own name, and eliminate the chance of this ever happening.

And it will prevent your server from performing useless exercise. Me? I’m going to hit the gym.

Technology: Shoot yourself in the foot faster!

Most of us love technology. And most of us have experienced how blindingly fast technology can provide some degree of cataclysmic failure. Especially when you start by saying, “I should go ahead and do that real quick…”

Most of us love technology. And most of us have experienced how blindingly fast technology can provide some degree of cataclysmic failure. Let me explain.

I’ve been getting better and better about planning out my work week, listing out what needs doing each day, and sticking to it. So I knew that today I would be working a little on a few website updates in the morning, and then the rest of the day would be free for coding.

(I pause here for the audience to have a hearty chuckle.)

So here we are. I log onto the WordPress back end for MinionWare.net, change the wording and format of some text elements, and double-check that they look okay.

The other thing on my list is making sure I have a specific plugin installed. Is it? Why, yes it is! That’s grand, I guess my work here is about done.

But hey, look at that. A bunch of the other plugins need updating. I should go ahead and do that real quick…

(I pause here for the audience’s gasps of horror.) But wait, let me reassure you: I did pause and download a fresh WP backup.

And then I updated three WordPress plugins. Why, oh why did I do that? Why, when I know better?

The website came up as gibberish. Every page, every post, came up as complete gibberish. Of course I immediately restored the WP backup…which did absolutely nothing to help. It turns out that these backups are pretty much for content, not for restoring plugins to a specific state.

Goody.

After a good deal of fighting and rage-coffee, I narrowed everything down to one culprit, killed the plugin with fire, and confirmed that the site was up and looking good.

Why is it so much easier to destroy than to fix?

This is totally a common theme in life. From the big glass bowl my kid shattered in the sink, to the car we (I won’t say who) scraped against a wall, to the appointment we missed. So much of our time is spent cleaning up mistakes, paying to have them fixed, and making up for lost time.

And technology lets you break things so much faster! I can drop a bowl and spend 30 minutes cleaning it up, but I can drop 2 TB of data without a thought and spend weeks trying to get it back. (I mean literally, that’s what it takes to drop 2 TB…not thinking at all.) I can scrape the paint on my car and just leave the thing scraped…but I can bring down a years-old website with the click of a button.

You see a theme here?

I’m starting to see that a large percentage of an IT professional’s life is (or should be) disaster prevention – teaching yourself to triple check what server you’re connected to, making sure backups are up and running. And another very large percentage is disaster recovery, in one form or another. Yes, of course I mean traditional SQL disaster recovery. But I also mean recovering from the borked website, the forgotten perfmon trace, the third missed meeting this week (where your manager noticed particularly that you weren’t there).

Prevention and recovery

With technology, as with life, automation is a huge part of the solution. But, it’s not the whole solution.

  • We can automate database backups.
  • We can automate WordPress Backups.
  • We must set reminders for meetings.
  • We must set reminders to get the car’s oil changed. (Also: to keep off your dang phone while driving.)
  • We should create standard procedures for maintenance and downtime.
  • We should create standard procedures for managing personal tasks. (I’ve become a huge fan of the Bullet Journal method for this.)

And of course, we can’t prevent everything. So sometimes, we spend the morning staring furiously at wp-admin folders in FileZilla, instead of coding.

Good luck with the chaos.

-Jen