Should I be using progressive backup?

It doesn't crash. It stops. For recovery, that is great. If you have your server configured to send admin emails, you will receive a number of warning emails saying "Disk space requirements less than ....". When the disk space is too small to continue, you get an email that says, "Server shutting down ..."

2 Likes

It doesn't have to use the C drives; there's something else going on here that produced what you are seeing.
But you mention high-speed spinning platters as your disk i/o: given the size of some of the individual files and the total size of the hosted files: I would strongly encourage you to use enterprise-level SSDs instead of spinning platters. You'll see a massive performance boost.

And break those big files up into smaller ones, along the lines of static vs changing data and you'll get another boost from FMS' hard-linking mechanism.
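A minimal sketch of the filesystem behavior that hard-linked backups rely on (my illustration, not FMS internals): a file that did not change between backup sets can be hard-linked into the new set instead of copied, so it costs no additional disk space. That is why splitting static from changing data pays off.

```python
import os
import tempfile

def link_unchanged(prev_set: str, next_set: str, name: str) -> int:
    """Hard-link an unchanged file from the previous backup set into the
    next one; return the resulting link count of the shared inode."""
    os.makedirs(next_set, exist_ok=True)
    os.link(os.path.join(prev_set, name), os.path.join(next_set, name))
    return os.stat(os.path.join(next_set, name)).st_nlink

with tempfile.TemporaryDirectory() as root:
    set1 = os.path.join(root, "backup_1")
    os.makedirs(set1)
    with open(os.path.join(set1, "static_assets.fmp12"), "wb") as f:
        f.write(b"x" * 1_000_000)  # 1 MB of data that never changes
    # Both directory entries now point at the same 1 MB on disk.
    print(link_unchanged(set1, os.path.join(root, "backup_2"), "static_assets.fmp12"))  # 2
```

The file name is made up for the example; the point is that only the files that changed since the last backup consume new space.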

5 Likes

Hi Wim, great to see you back here.

I would really appreciate your input on the scenarios that progressive backups can help resolve.

Torsten has described the potential data loss that it could protect against in the event of a server crash, which makes perfect sense. However, in all my years we've never experienced anything like this (thankfully, but no doubt there are those that have).

I can't visualise many other cases. If you had accidental data deletion and a lengthy save interval, perhaps you could get to the server for the not-yet-updated main backup file, but could you? If you stopped progressive backup or FileMaker Server, surely it would 'flush its cache' and update the backup file before stopping.

I don't deny PB works OK, but other than Torsten's scenario I struggle to justify using it.

Many thanks
Andy

2 Likes

For me, progressive backups are always and by default part of the backup and recovery strategy.
Working off the two basic questions around setting up the desired backup strategy:

  1. how much data are you willing to lose?
  2. how long can you be down for?

The answer to question #1 determines the number of restore points that you'll need (think 'backup sets') and how often you'll want to do a backup. That's the most relevant here but the answer to #2 will determine the mechanics. If the answer to #2 is '30 minutes' then clearly you don't want to rely on having to fetch a backup from some slow cloud storage where downloading 300GB takes a few hours.
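A back-of-envelope check for question #2 (illustrative numbers of my own, not from this thread): how long does restoring a large backup from slow cloud storage actually take at a given link speed?

```python
def restore_time_hours(size_gb: float, link_mbps: float) -> float:
    """Time to download size_gb gigabytes at link_mbps megabits/second
    (decimal units: 1 GB = 8000 megabits)."""
    return (size_gb * 8000) / link_mbps / 3600

# 300 GB over a 200 Mbps link: nowhere near a 30-minute recovery target.
print(round(restore_time_hours(300, 200), 1))  # 3.3 (hours)
```

If the answer to question #2 is tighter than numbers like these allow, the backup has to live somewhere local and fast.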

A typical backup strategy that I implement would use hourly backups using regular FMS schedules, plus progressive backups running every 5 to 10 minutes. PBs only keep two backup sets, so they don't allow me to go back far in time to, say, retrieve something that was deleted late yesterday. But they do allow me to minimize the lost data in case of a server crash, which is why I use both. PBs use a different mechanism than regular backups, so they are typically faster; that is also why running a regular backup schedule every 5 or 10 minutes wouldn't usually work well. Many variables are involved here, obviously, so it is hard to make generalized statements.

Having said all of that: if the FMS machine is a VM (on-prem or cloud), then obviously you can use a much more efficient backup mechanism than the built-in FMS one by using snapshots. Those would back up, say, the 300GB completely in under a second, so you can run them every 5 minutes without affecting the users at all. The thing to remember here is that FMS itself is not snapshot-aware, so it takes a bit of work to set it up, and obviously it requires access at the hypervisor level to do restores, etc. But that's just part of setting up the strategy and the procedures necessary to make it happen.
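Because FMS is not snapshot-aware, one common approach is to quiesce the hosted files around the snapshot. A rough sketch, assuming a hypothetical wrapper script: `fmsadmin pause`/`resume` are real FMS CLI commands, while the snapshot command, `$VM_NAME`, and the credential variables are placeholders you would replace with your hypervisor's tooling.

```shell
#!/bin/sh
# Dry-run by default: commands are echoed, not executed. Set RUN= to run them.
set -e
RUN="${RUN:-echo}"

snapshot_backup() {
    # Flush caches and quiesce hosted files so the snapshot is consistent.
    $RUN fmsadmin pause -y -u "$FMS_USER" -p "$FMS_PASS"
    # Placeholder for your hypervisor's snapshot CLI (assumption).
    $RUN take-vm-snapshot "$VM_NAME"
    $RUN fmsadmin resume -y -u "$FMS_USER" -p "$FMS_PASS"
}

snapshot_backup
```

Restores still happen at the hypervisor level, so this only covers the backup half of the procedure.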

7 Likes

+100

1 Like

I also have a hard time justifying the progressives.
Because you only get the last 2 backup sets, and they keep running even in the case of data corruption, so the corruption can end up in both sets.
I wish you could just save the difference files, so you could roll back a database to an arbitrary point in time.

Jerry

6 Likes