Now you have a single installer for all three modules, plus new features and fixes for the entire backup and maintenance suite!
Minion Backup and Minion CheckDB got some important fixes, but it’s Minion Reindex that’s the real star of this release. MR was added to the unified installer, got on board the table-based scheduling bandwagon, and received an overall shine, polish, and upgrade. See “New in Reindex”, below!
Even in Azure SQL Database, you need to know certain things about your tables that can be hard to find. We’ll show you how to get that data.
Even in Azure SQL Database, you need to know certain things about your tables. For instance, you need to audit table size and row count information, as well as schemas and more.
Data not available
Recently, I found that I could only get some of the properties I wanted when I tried to get table properties from Azure SQL Database using SMO. I could get table name, schema, and many more, but some random properties – like RowCount and DataSpaceUsed – returned NULL.
After racking my brain for about half a day, I found the problem. Before getting into that, however, let’s look at some sample code that demonstrates the problem:
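A minimal PowerShell sketch of the symptom looks something like this; the server, database, login, and password are all placeholders, and it assumes the SqlServer module (or an older SMO install) is available:

```powershell
# Connect to an Azure SQL Database with SMO.
# Server, database, login, and password below are placeholders.
Import-Module SqlServer

$conn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection
$conn.ServerInstance = "myserver.database.windows.net"
$conn.LoginSecure    = $false
$conn.Login          = "myuser"
$conn.Password       = "mypassword"

$server = New-Object Microsoft.SqlServer.Management.Smo.Server($conn)
$db     = $server.Databases["MyAzureDb"]

foreach ($table in $db.Tables) {
    # Name and Schema come back fine, but with pre-2016 SMO versions,
    # RowCount and DataSpaceUsed come back empty against Azure SQL Database.
    "{0}.{1}  rows: {2}  dataKB: {3}" -f $table.Schema, $table.Name,
                                         $table.RowCount, $table.DataSpaceUsed
}
```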
The fix turns out to be simple: move to SQL Server 2016. Something in previous versions of SMO prevents them from reading these properties from Azure databases. (I’m always able to pull that data just fine from on-premises servers, though.)
But if you’re on SQL 2014, how do you get the 2016 SMO objects? The easiest way is to download the 2016 Feature Pack. It won’t say “SMO” anywhere, and there are a lot of packages to choose from. What you’re looking for is SharedManagementObjects.msi.
Also, starting in SQL 2017, SMO is available as a separate NuGet package.
So even though you’ve got your data in Azure SQL Database, you don’t have to throw your hands up at all admin activity. Many companies say they don’t need DBAs any more, because their data is in the cloud. Nothing could be further from the truth. And hopefully this will get you one step closer to where you need to be with monitoring those environments.
Here’s another article on the need for monitoring your environment, and not just performance.
The biggest problem I see is the huge number of DBAs who have let themselves fall behind. Live in interview mode, and pay yourself first, and you won’t fall behind!
I’ve done a ton of interviewing in my career, both as the interviewee and as the interviewer. The biggest problem I see is the huge number of DBAs who have let themselves fall behind. How does this happen?
Middle of the pack
Once you’re a more or less established DBA, you have yourself a job, and you go in every day and you do your duty. You back up what you need, do the restores you need, fix replication, look at a couple of bad queries, and set up a new server for the latest project. You’re busy as hell at work, and there’s no end in sight to all the tasks and projects and problems.
That’s the problem with being a DBA these days: you’re always swamped with problems. (The real reason is that companies are absolutely dedicated to not taking data seriously, but that’s another article entirely.)
So you work, and you get further and further behind the learning curve because there’s no time to do anything for yourself: no time to learn anything new, pick up a book, watch a tutorial, or even practice what you already know. You’re always putting out fires!
Then, when it’s time to interview for a new job, you find yourself cramming in the last couple of days to try to bone up on your knowledge. Speaking as someone who’s interviewed a lot of DBAs: this definitely shows! Anyone who conducts interviews regularly can tell when you’re just barely recounting something and when you know the topic cold.
Live in interview mode
Okay, I have a radical, two-step plan for your professional development. Here we go:
Stop cramming for interviews like you’re trying to pass a test.
Live in interview mode.
Interview mode (n.) – The practice of conducting your daily work life as if an interview could happen unexpectedly, at any time.
Take time to study every day. It doesn’t matter how much, but I think 30 minutes isn’t too much to ask. Even if you’re not studying to interview, your skills will get rusty when you don’t put them into practice for a while. Chances are, your company won’t ever give you the time to do it, so you have to take that time yourself.
Pay yourself first
Every day when you come in to work, take 30 minutes to work on something for you. Learn how to do partial restores. Learn how to set up an Availability Group. Learn how to add a primary key to a table using T-SQL. Learn XML, or JSON, or HTML. It doesn’t matter. Pick something up that you want, or something that you know you lack.
I call this paying yourself first, which is actually a financial term:
Pay yourself first is a phrase referring to the idea that investors should routinely and automatically put money into savings before spending on anything else. – InvestingAnswers.com
When it comes to your career, make sure you routinely and automatically put time into your development before spending it on anything else.
When someone comes to your desk and asks you to do something, tell them that you’re doing your daily checklist, and you’ll be with them in a few minutes. (People at work don’t understand that daily checklists are out of style, so they’ll leave you alone.) Your company won’t give you the time to do this, so you have to take the time.
Study first thing in the morning, before things get started. Once the day really gets going, it’s hard to even remember studying, much less to find the time.
You may not be able to do it every day. There may be some days when you walk in and there’s some emergency that honestly takes priority. That’s okay; take care of your emergency. But outside of that, there’s very little that can’t wait 30 minutes, especially when you’re “doing your checklist to make sure things are okay”.
So take some time to be good to yourself before things get crazy every day. Improve yourself, live in interview mode, and pay yourself first.
Let’s talk about CASE tools, and why coding for databases without one feels so… exposed.
I know the title sounds like clickbait, but it’s how I’ve felt for a long time. Why? Because of CASE tools.
CASE stands for Computer-Aided Software Engineering. Here, I’m referring to data modeling tools like ERwin and ER/Studio. It disheartens me that the industry as a whole has gotten away from using these tools. In this world of rushing software to market, we’ve forgotten that the data-driven marketplace starts with the word ‘data’.
A brief history of CASE
I started in IT in the 90s when things moved a little slower and we actually took time to model databases. [Editor: I somehow managed to resist inserting a gif of an old man and a cane.] So, I got used to using CASE tools, and by the year 2000 I was quite adept at using the different flavors of these tools. And I’m going to tell you, I miss them greatly.
The word domain has had a lot of play in IT, because it’s a nice generic container for many things. In the 1990s, a domain was a universal concept when working with a CASE tool. In SQL, we have another word for domain: column. Old timers like Joe Celko get really irate when you call a domain a column, because they know what it’s for. But we’ll get to that in a minute. First, let’s talk about the difference between a domain and a column.
Column vs Domain
A column is a single property in a table, which holds the same type of data throughout all rows. Think of an Excel spreadsheet that stores a list of names. That’s how data is represented to us, even in SQL Server Management Studio (and practically every other query tool out there). We see the data as these little on-the-fly spreadsheets. But thinking of a domain as just another column would be a very myopic view.
In data modeling, a domain is implemented as a column. But the domain itself is an organizational structure for column definitions. A single domain can apply to individual columns in dozens of tables, if you like. Let’s take an example.
Let’s say you’re writing a SQL application, and you sit down to create a table for it. In that table, you need a FirstName column. After some thought, you give that column a datatype of VARCHAR(50). You then get pulled away, and don’t get back to the code for another week or so. When you return to the code, you pick up where you left off and start creating another table. The second table also needs a FirstName column. As it’s been a while, you forget that the first table has a FirstName column with VARCHAR(50), and you’re in a different mindset today, so you give this new FirstName column VARCHAR(75).
This sort of thing happens all the time, especially in projects with multiple developers. See how this app is just begging for bugs?Â These bugs can be very difficult to track down.
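In T-SQL, the scenario above looks something like this (the table names are illustrative):

```sql
-- Week 1: the first table gets FirstName as VARCHAR(50).
CREATE TABLE dbo.Customer
(
    CustomerID int IDENTITY PRIMARY KEY,
    FirstName  varchar(50) NOT NULL
);

-- Weeks later: the second table gets VARCHAR(75), designed from memory.
CREATE TABLE dbo.Employee
(
    EmployeeID int IDENTITY PRIMARY KEY,
    FirstName  varchar(75) NOT NULL
);
```

Any code that later moves FirstName values from Employee into Customer risks silent truncation or errors (depending on ANSI settings), and every variable or parameter declared against the wrong length quietly propagates the mismatch.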
Developing with CASE and domains
This is where domains and CASE tools come in. In a CASE tool, you can define FirstName as a domain. That domain is a single entity where you define the datatype, length, nullability, and often even more properties. Then, when you’re creating tables in your CASE tool, you associate each column with a domain, instead of assigning datatypes.
With a defined domain, every column you associate with that domain automatically gets consistent properties. Furthermore, if you have a change of heart and decide to change FirstName from VARCHAR(50) to NVARCHAR(100), you change it at the domain level. Then all the columns that use it pick up the changes. (I’ve always liked how meta it is that you’re normalizing the data model process itself.)
You can see why the old-timers insist on calling it a domain: because it is. It’s not a single instance of a FirstName… it’s the domain of FirstName, which every FirstName column will take its properties from.
Fill your data applications with domains, and not columns. Now that you’ve got a domain for FirstName, you can create domains for every other ‘column’ in your different data projects. Even better, you can reuse these domains in your different projects, so FirstName is the same in every application you write! This is what we call a data dictionary. It’s the right way to do things because it forces you to think about the right data type for the job, and then it enforces it throughout your entire data enterprise.
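SQL Server doesn’t support ANSI CREATE DOMAIN, but user-defined alias types give you a limited, native taste of the same idea. The type name below is made up, and note the caveat that changing an alias type later means dropping and recreating it, so it’s a far weaker tool than a real CASE domain:

```sql
-- A poor man's domain: define the FirstName datatype exactly once...
CREATE TYPE dbo.FirstNameType FROM nvarchar(100) NOT NULL;

-- ...then associate columns with the type instead of raw datatypes.
CREATE TABLE dbo.Customer
(
    CustomerID int IDENTITY PRIMARY KEY,
    FirstName  dbo.FirstNameType
);
```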
What’s more, publish your data dictionary! Anyone writing stored procedures and other code against your databases should use that data dictionary to determine what datatypes they need for parameters, variables, temporary tables, and the like. With any luck, they would look up those values in the database anyway, so you may as well expose them all in a single, searchable place.
Coding naked, without CASE
All of this is to show how naked I feel when I code these days. You see, I don’t have access to a CASE tool any more, because they’re so expensive. The companies that sell them think an awful lot of the software, while the companies I’ve worked for don’t. So, I haven’t been able to talk anyone into footing the bill in a long time.
The industry as a whole doesn’t see the value in not rushing software out the door, so nobody takes time to actually model data anymore. [Old man cane gif, resisted again. -Editor] So, the tools should be cheaper because demand isn’t very high.
All I know is that my projects have suffered as a result. I’ve had silly bugs and inconsistencies that are pointless, because I was in a different frame of mind when designing different tables, and didn’t realize I was creating my columns with different properties. Until I can solve my CASE tool problem, it’s like coding naked because I’m exposed to bugs, data anomalies, performance issues due to implicit conversions, and probably more.
Let’s get back to a time where we can slow things down and start doing things right. And vendors, stop treating CASE tools like a precious commodity. Price them to own and you may see an increase in sales.
Six things to know about Minion Enterprise: 1. You love T-SQL. 2. ME is an enterprise product. 3. No dropping objects on servers. 4. No event storming. 5. Log everything. 6. And it takes just five minutes.
Hi, I’m Hiro! You’ve probably seen me around the Internet now and again. (I do like that picture; it makes me look extra-smart.)
Okay, since we’re really, actually talking for the first time here, let me tell you a few things about me.
1. I’m a robot.
And yes, I can fly. What’s the point of being a robot-spaceship hybrid if I can’t fly?
2. I love T-SQL
I love data in general, really. But I like T-SQL specifically to get at the data. I know, the picture above shows me looking at graphics, but they had me pose with those – they’re actually these removable stickers – just for the photo shoot. You’ll almost never find me using a GUI, because T-SQL is so much more efficient. I can type, like, 1,000 wpm, but I can’t click on a GUI much faster than anyone else.
You’re a DBA, right? You know what I’m talking about. A GUI limits what can be done with the data, anyway. T-SQL lets me, and you, and everybody, query anything at all with the data available. And let me tell you: I collect a LOT of data, and I keep it in tables. (No XML, no flat files, no proprietary formats. You’re totally welcome.)
I also put together some great stuff, like stored procedures that show you Active Directory permission chains, and alerts for low disk space, things like that. You should hang out sometime and see all the stuff I made.
3. I like a high-level view
It probably comes from my robot superpower, which is flying.
I’ve seen that a lot of DBAs use tools that make you do a task one at a time, server by server. That takes forever. Me? I like to report and alert and manage a bunch of SQL Server instances at once. I’ve queried 10 servers at a time, and I’ve changed sp_configure options for 200 servers at once. It’s what we at MinionWare call the set-based enterprise.
Okay, like with security. I don’t really know why you have to spend so much time on researching or scripting or cloning permissions. For me, it’s effortless. I can make your new junior DBA look exactly like the previous one, down to object-level permissions, for all your servers. It’s just one T-SQL command!
4. I don’t create objects on managed servers
That’s a big pet peeve of mine. Look, if I dropped jobs or triggers or something out on managed servers, and then I needed to upgrade myself? That means the team would have to go through this big process and make sure there were plans and rollback plans and on and on. No good. This way, though, I sit on a single server, and any upgrade or change is just effortless.
5. I DO NOT LIKE event storming
One of my main jobs is to alert the team when something passes a threshold. Like, if the network goes down overnight and it messes up backups, you really need to know about that! But I think it’s frankly spammy to send dozens or hundreds of emails about it. Instead, I like smart, condensed alerts. I like to provide exceptions, adjustments, and deferments. Smart alerts. No event storms.
6. I log everything, so you can report on anything
I told you I love data, right? I follow the maxim, “log everything”. I’m really, really clever. (I’m not egocentric; I’m a robot!) I’m clever enough to give you good reports and views, things you need. But I’m not you, so I won’t be able to come up with every clever thing you might think of.
To make up for that, I log everything. Everything I can think of that might possibly be useful, I collect and store. And I’ve spent lots of time thinking up understandable table names, so you can find the data easily. Again, you’re totally welcome.
7. I’m easygoing
It takes about five minutes to get me settled in and configured. I don’t like to be a bother. And once I’m in, I’ll just do my job! I mean, if you have time we can totally hang out, and I can tell you all about what I do. But if you’re busy, I’ll just watch over your systems for you, and send you alerts, and collect data you might want later for audits or disk space projection.
Seriously, I’m the most chill, hardworking coworker you’ll ever have. Download me today, and I can show you what I mean.
As a DBA, you’re in charge of getting systems up and running quickly in the event of an emergency. This is all right and proper, right up until you start defining SLAs. Let’s see what went wrong.
As a DBA, you’re in charge of keeping the systems healthy, and getting them back up and running quickly in the event of an emergency. This is perfectly right and proper, right up until you start defining a service level agreement.
A Service Level Agreement (SLA) defines the level of service you agree to provide to get the system back up after a downtime. An SLA is usually expressed in terms of time. So, if you have a two-hour SLA, that means you agree that when there’s a grave issue, you’ll have the system back up within two hours.
But how did you get that two-hour SLA in the first place? Usually, it goes like this:
The customer explains that they must have the database up within two hours of a problem.
You don’t see any problems with that.
Maybe thereâ€™s even a written agreement that you sign, knowing full well your database experience can easily handle a downed database in that time.
Like so many things, this sounds perfectly reasonable on the surface, but doesn’t hold up once it comes in contact with reality.
SLAs in the Real World
SLAs are often poorly thought out, and very rarely tested. As a matter of fact, most companies don’t even have SLAs; they have SLOs. An SLO is a Service Level Objective – something you’re striving for, but don’t know for sure whether you can achieve. SLOs allow you to have some kind of metric, without bothering to test in advance whether the objective is even possible.
This lack of testing is the primary barrier to achievable SLAs. Lots of factors can impact your ability to get a system up and available after an issue. Let’s take a close look at just two of those factors: hardware failures, and data corruption.
When a hardware failure causes an outage, it can take a long time to get replacement parts. Companies work around this – sometimes – by keeping replacement components on hand for important systems, but that’s only sometimes. If your company has no replacements lying around, you are completely at the mercy of the hardware vendor. This can – no, will – demolish your SLA.
Now, maybe your company has an SLA with the parts vendor, and maybe they don’t. Typically, that vendor SLA will be something like a four-hour replacement window… but that’s just when they agree to show up! The vendor can’t promise that the parts will be installed, configured, and running in that time.
So, your two-hour SLA won’t survive the four hours it takes just to get the hardware, plus the two (or more) hours getting everything set up. Oh yes, plus the time to diagnose the issue in the first place, possibly reinstall SQL, restore the database, troubleshoot additional issues, or all of the above. All of this puts you at three times the SLA you agreed to support, at the very least.
Consider how long it takes to get replacement parts.
If your customer is important enough, keep replacement parts in-house.
Account for extra time, for troubleshooting, installation, configuration, restoring, more troubleshooting, and anything else that can come up.
Database corruption is another outage scenario. Depending on the level of corruption, it may take anywhere from a few minutes to a few hours to diagnose and fix it.
For now, we’ll assume that it’s just a table that’s gotten corrupted. Now, is it the data, or just an index? If it’s an index, depending on the size of the table, it could be a quick fix, or it could be a couple of hours. However, if it’s a table, then you may have to restore all or part of a database to bring it back. That brings another host of issues into the fray, like:
Do you have enough space to restore the database somewhere?
How long will it take?
Is the data even onsite?
How will you get the data back into the production table?
Do you bring the system down while you’re fixing it?
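For the index case mentioned above: if DBCC CHECKDB shows the corruption is confined to a nonclustered index, a rebuild is usually far faster than any restore. The database, table, and index names here are hypothetical:

```sql
-- Identify the damaged object first.
DBCC CHECKDB (DB1) WITH NO_INFOMSGS;

-- If only a nonclustered index is affected, rebuild it
-- instead of restoring the database.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;
```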
Of course, it could also be the entire database that’s down, in which case you will need a restore (assuming the corruption wasn’t present in the last backup). A few of the things you must consider:
Do you know how long that restore will take?
Have you done what is necessary to make sure you can restore quickly by tuning your backups, making sure the log is small, turning on IFI (Instant File Initialization), etc.?
Without some foresight, you could easily spend that two-hour SLA window zeroing out 90GB of log file. Your 1.5 hours of data restore will put you quite a bit outside of your agreement.
Make sure you have space to restore your largest database, somewhere off the production server.
Practice database restores, and get your backups tuned. (Tuning your backups means you can tune your restores, too!)
Take these practice sessions into account when you make your SLAs.
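As a sketch of what “tuning your backups” can mean in practice: compression alone often cuts both backup and restore times dramatically, and on SQL Server 2016 SP1 and later you can verify IFI straight from a DMV. The knob values and path below are starting points and placeholders, not recommendations:

```sql
-- Check whether Instant File Initialization is enabled (2016 SP1+).
SELECT servicename, instant_file_initialization_enabled
FROM   sys.dm_server_services;

-- A compressed, checksummed backup with the common tuning knobs.
BACKUP DATABASE DB1
TO DISK = N'\\NAS1\SQLBackups\DB1_FULL.bak'
WITH COMPRESSION, CHECKSUM,
     BUFFERCOUNT = 50, MAXTRANSFERSIZE = 4194304;
```

Time a few restores of that backup, too: tuning the backup side (and keeping the log small) is exactly what makes the restore side predictable enough to put in an SLA.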
The conclusions above are a good start, but not at all the complete picture. If you’re keeping a single SLA for any given server, you’re doing yourself and your customers a huge disservice.
First, define separate SLAs for the different types of failure, and define what each specific failure looks like. For instance, if you define an SLA for database availability, define what ‘available’ means. Does it mean people can connect? Does it mean that major functions are online? Does it mean absolutely everything is online? Does it include a performance SLA? I’ve seen performance SLAs included in downtime procedures because sometimes a database is so important that if the performance isn’t there, it might as well be offline.
Next, review SLAs regularly. So, you’ve reasonably determined that you can accommodate a four-hour SLA for the DB1 database. What about as the database grows? Are you going to put in an allowance for the database tripling in size? Surely you can’t be expected to hold the same SLA two years later that you did when the database was new.
Finally, test, test again, and then test one more time just to be sure. In fact, you should be testing your recovery procedures periodically so you can discover things that may go wrong, or lengthen the process. If you promised two-hour downtime and you can’t get your recovery procedures under that time, then you’ve got some re-working to do, don’t you? Don’t just throw in the towel and say you can’t do it, because contracts may already be signed and you may have no choice but to see that it works. Maybe you’re really close to being able to hit the SLA, and you just have to be creative (and maybe, work for a company that’s willing to spend the money).
Two years ago, we officially became MinionWare and launched the absolutely masterful SQL Server management solution, Minion Enterprise. We have talked to literally hundreds of people at dozens of database events, meetings, webinars, conferences – you name it! Even better, clients are raving about the software.
There is more to databases than performance monitoring. So why are the most popular DBA tools, performance monitoring tools? They don’t even begin to cover the vast majority of DBA responsibilities. What we need is environment monitoring.
When you read a job spec for “database administrator”, it does not simply say:
So why are the most popular DBA tools, performance monitoring tools? Those are great, sure, but they don’t even begin to cover the vast majority of DBA responsibilities. What we need is environment monitoring.
I don’t have a website I can link to, to give you a definition of SQL environment monitoring. That’s because we’ve been defining it ourselves, for the past nine years.
An environment monitor is a system that allows administrators to examine the overall and specific health of database instances.
An environment monitor should touch on performance, yes. It should also:
Manage and monitor security
Make a majority of common DBA tasks effortless
Collect and present as much system information as possible, including service packs, disk space, errors, and more
And also, lots more
Minion Enterprise is far more than glorified maintenance
This is exactly what we built Minion Enterprise for: to monitor and manage the environment. To take away the Server-By-Agonizing-Server aspect of administration by introducing the “set-based enterprise” approach. To automate everything that can be automated, and to make data available to the DBA on everything else.
Get a trial and a demo, and you’ll see exactly what we mean. Monitor your environment, not just your performance.
So far, no one has found exercise to be beneficial to servers. Purposeless repetitive motion may be good for human muscles, but your SQL Server instance experiences no gain for the pain. Here’s a good example: taking useless backups. Let me explain…
So far, no one has found exercise to be beneficial to servers. Purposeless repetitive motion may be good for human muscles, but your SQL Server instance experiences no gain for the pain.
Here’s a good example: taking useless backups.
(“Did she sayÂ useless backups? I’ve never heard of such a thing!” Yeah, just wait.)
Backup file names are critical
Traditionally, backup files are named after the database and the backup type, and given a timestamp. So you’ll see something like master_FULL_20170101.bak. If you like to stripe your backups, you might name the files something like 1of5MasterFull20170101.bak, and so on.
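Generating that kind of timestamped name in T-SQL is a one-liner, if you ever script backups by hand (the path is a placeholder):

```sql
-- Build a unique, sortable file name for this backup.
DECLARE @stamp nvarchar(15)  = FORMAT(SYSDATETIME(), 'yyyyMMdd_HHmmss');
DECLARE @file  nvarchar(260) = N'\\NAS1\SQLBackups\master_FULL_' + @stamp + N'.bak';

BACKUP DATABASE master TO DISK = @file WITH INIT, FORMAT, CHECKSUM;
```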
But I have run across shops that take backups without bothering to time stamp the file name: master_FULL.bak. These shops either overwrite each backup file with each successive backup (using INIT and FORMAT), or add to the backup set (which I find mildly annoying, but to each their own).
The problem with using the same backup file name over and over arises when you have a cleanup mechanism that deletes old backup files!
The same-name cleanup issue
Let’s say that in your shop, you have Minion Backup (MB) installed and running with the following options:
INIT isÂ enabled
FORMAT is enabled
Backup file retention (in hours) is 48, so we’ll keep 2 days’ worth of backups
Backup name is set to %DBName%%BackupType%.bak, which works out to DB1Full.bak for a full backup of DB1.
On day 1, MB takes a backup of DB1, to \\NAS1\SQLBackups\DB1Full.BAK.
On day 2, MB takes a backup of DB1, which overwrites the file \\NAS1\SQLBackups\DB1Full.BAK.
On day 3, MB takes a full backup of DB1 (which overwrites the same file). And then the delete procedure sees that it has a file from day 1 (>48 hours ago) that needs deleting. And so it deletes \\NAS1\SQLBackups\DB1Full.BAK. Remember, this is the file that MB has been overwriting again and again.
On day 4, MB takes a backup of DB1, to \\NAS1\SQLBackups\DB1Full.BAK. Then, it sees that it has a file from day 2 that needs deleting, and so deletes \\NAS1\SQLBackups\DB1Full.BAK.
See? From Day 4 on, we’re creating a backup file just to delete it again!
Fixing the issue if you have MB
One excellent way to figure out if you have this problem is to notice that, hey, you don’t have any backup files. Another indicator in Minion Backup is if you see “Complete: Deleted out-of-process. SPECIFIC ERROR: Cannot find path ‘…’ because it does not exist.” in the Minion.BackupFiles Status column.
But the real smoking gun is if you haven’t time stamped your backup files in Minion.BackupSettingsPath. Here’s how to fix that:
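The short version is to put a timestamp token back into the file name template, so each backup file is unique and the cleanup job deletes genuinely old files instead of the one you just wrote. The sketch below assumes a FileName-style column and a %Date% token; both are illustrative, so check the Minion Backup documentation for the exact column and token names your version supports:

```sql
-- Illustrative only: add a timestamp token to the file name template.
-- Column and token names vary; consult the Minion Backup docs.
UPDATE Minion.BackupSettingsPath
SET    FileName = N'%DBName%%BackupType%%Date%.bak'
WHERE  FileName = N'%DBName%%BackupType%.bak';
```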
Most of us love technology. And most of us have experienced how blindingly fast technology can provide some degree of cataclysmic failure. Especially when you start by saying, “I should go ahead and do that real quick…”
Most of us love technology. And most of us have experienced how blindingly fast technology can provide some degree of cataclysmic failure. Let me explain.
I’ve been getting better and better about planning out my work week, listing out what needs doing each day, and sticking to it. So I knew that today I would be working a little on a few website updates in the morning, and then the rest of the day would be free for coding.
(I pause here for the audience to have a hearty chuckle.)
So here we are. I log onto the WordPress back end for MinionWare.net, change the wording and format of some text elements, and double-check that they look okay.
The other thing on my list is making sure I have a specific plugin installed. Is it? Why, yes it is! That’s grand, I guess my work here is about done.
But hey, look at that. A bunch of the other plugins need updating. I should go ahead and do that real quick…
(I pause here for the audience’s gasps of horror.) But wait, let me reassure you: I did pause and download a fresh WP backup.
And then I updated three WordPress plugins. Why, oh why did I do that? Why, when I know better?
The website came up as gibberish. Every page, every post, came up as complete gibberish. Of course I immediately restored the WP backup… which did absolutely nothing to help. It turns out that these backups are pretty much for content, not for restoring plugins to a specific state.
After a good deal of fighting and rage-coffee, I narrowed everything down to one culprit, killed the plugin with fire, and confirmed that the site was up and looking good.
Why is it so much easier to destroy than to fix?
This is totally a common theme in life. From the big glass bowl my kid shattered in the sink, to the car we (I won’t say who) scraped against a wall, to the appointment we missed. So much of our time is spent cleaning up mistakes, paying to have them fixed, and making up for lost time.
And technology lets you break things so much faster! I can drop a bowl and spend 30 minutes cleaning it up, but I can drop 2 TB of data without a thought and spend weeks trying to get it back. (I mean literally, that’s what it takes to drop 2 TB… not thinking at all.) I can scrape the paint on my car and just leave the thing scraped… but I can bring down a years-old website with the click of a button.
You see a theme here?
I’m starting to see that a large percentage of an IT professional’s life is (or should be) disaster prevention – teaching yourself to triple check what server you’re connected to, making sure backups are up and running. And another very large percentage is disaster recovery, in one form or another. Yes, of course I mean traditional SQL disaster recovery. But I also mean recovering from the borked website, the forgotten perfmon trace, the third missed meeting this week (where your manager noticed particularly that you weren’t there).
Prevention and recovery
With technology, as with life, automation is a huge part of the solution. But, it’s not the whole solution.
We can automate database backups.
We can automate WordPress Backups.
We must set reminders for meetings.
We must set reminders to get the car’s oil changed. (Also: to keep off your dang phone while driving.)
We should create standard procedures for maintenance and downtime.
We should create standard procedures for managing personal tasks. (I’ve become a huge fan of the Bullet Journal method for this.)
And of course, we can’t prevent everything. So sometimes, we spend the morning staring furiously at wp-admin folders in FileZilla, instead of coding.