Disk Configuration for FM Server

I have an AWS instance running FMS 19.1 on CentOS 7.8. It has a 200GB disk, of which approx. 100GB is free. There are 75 databases that take up 18GB, progressive backups take up 23GB and backups take up 55GB. I am considering splitting this into 2 or 3 disks: 1 for the OS and FMS installation, plus either 1 for databases and backups, or 1 for databases and 1 for backups + progressive backups, if that has benefits.

This would give me the ability to take a snapshot of the OS/FMS disk and spin up a new instance where I could test upgrading to CentOS 7.9 and/or FMS 19.2. Assuming the testing is successful, I can then shut down the live FMS, detach the additional volume(s) and attach them to the upgraded OS/FMS instance.
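
For reference, the detach/attach step I have in mind would be roughly the following with the AWS CLI (the volume, instance and device identifiers are just placeholders):

```bash
# Rough sketch only - volume, instance and device identifiers are placeholders.
# Snapshot the OS/FMS volume so a test instance can be spun up from it.
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "OS/FMS volume before CentOS 7.9 / FMS 19.2 upgrade test"

# Once the live FMS is shut down, move the data volume to the upgraded instance.
aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaa1
aws ec2 attach-volume --volume-id vol-0aaaaaaaaaaaaaaa1 \
    --instance-id i-0bbbbbbbbbbbbbbb2 --device /dev/sdf
```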

Firstly, does this sound like a good or bad thing to do? Secondly, and assuming it is a good idea, what sizes of disks would be recommended, i.e. which disk(s) should have the most free disk space?

Many thanks
Carl

Hello @CarlHenshall

Just a note to say welcome to the Soup. I, personally, don't have the expertise to answer this question, but I hope that maybe someone else might, as I would benefit from seeing replies on it, too.

Again - welcome!

At a general level, putting your databases onto a separate disk from your system is a good idea. The point of separation would be to reduce the amount of activity on the system drive and to allow different streams of activity on the primary drive and the backup drive. It can also make it easier to switch instances, as you can point your systems to the data drives instead of having to copy data to the instances.

Put your backups into a different location if you can. I've run servers with that configuration and it has worked well. With that said, I have servers that back up to the same folder as the live databases, on the system disk. The backups get copied to a different location on a schedule. These servers aren't overworked, the files aren't large, and everything works well.
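
A scheduled copy like that doesn't need to be anything more than a cron job calling rsync. A minimal sketch, with placeholder paths rather than anything from a real setup:

```bash
#!/bin/bash
# Minimal sketch: copy FileMaker Server backups to a second location.
# Both paths are placeholders - point them at your own backup and destination folders.
set -euo pipefail

SRC="/opt/FileMaker/FileMaker Server/Data/Backups/"   # assumed FMS backup folder
DEST="/mnt/backup-volume/fms-backups/"                # assumed second location

# -a preserves timestamps and permissions; rsync only transfers changed files.
rsync -a "$SRC" "$DEST"

# Example crontab entry to run this nightly at 02:30:
#   30 2 * * * /usr/local/bin/copy_fms_backups.sh
```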

When you have physical disks, the advantage of using separate disks is that you have separate head mechanisms, so they can be in two places at once. That allows the live databases to be accessed at the same time as other processes run on the other disk. I'm not sure whether those advantages translate to AWS. You are probably on SSD, so there are no moving parts. The advantages are most likely the ease of configuration, as I mentioned above.

The ability to spin up a new instance for testing is a good thing. You may or may not run into license conflicts if those instances share the same network. I know that in-house we've bumped into that problem and the solution was to isolate the machine running the tests. I'm not sure what will happen in an AWS context.

Your system disk will need sufficient space for the system and swap files, and that's going to differ based on the system and hardware. A geeky Red Hat/CentOS forum may be able to help with that. The disk containing your running files probably needs to be twice as big as your files; with your 18GB of databases, that rule of thumb suggests a live-data volume of roughly 36GB or more. FMS will give warnings at 80%, so that's a red line that you don't want to cross. The backup disk needs to be the biggest; how big depends on the number of backups you plan to store on it at any time.
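
If you want to keep an eye on that 80% line yourself, a simple check along these lines can be run from cron; the mount point is a placeholder for wherever your data volume lives:

```bash
#!/bin/bash
# Sketch: warn when the volume holding the live databases passes 80% used,
# the same threshold FileMaker Server itself warns at.
# The mount point is a placeholder - point it at your data volume.
MOUNT="/fmsdata"
THRESHOLD=80

# df -P reports usage as "NN%"; strip the % sign so it can be compared as a number.
USED=$(df -P "$MOUNT" | awk 'NR==2 {print $5}' | tr -d '%')

if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "Warning: $MOUNT is ${USED}% full (threshold ${THRESHOLD}%)"
fi
```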

Thanks Malcolm. I was similarly wondering whether having separate disks actually makes much of a performance difference in the AWS infrastructure.

The point about the disk containing the running files needing to be twice as big as the files is great info - just what I was after. A similar rule of thumb for the backups would be interesting. I presume FM makes a new backup of a file and then deletes the oldest, so as a bare minimum there has to be room on the disk for an additional copy of your largest file.

I know FMS creates temporary files whilst the solution is running. I'm assuming these are on the OS/FMS disk rather than on the disk that holds the running files. Is there any way to tell how big these get?

Obviously there is a cost to the AWS storage space so I am trying to find an optimal balance between cost & performance.

Hi Carl

Good to see you here. We do not do this now, as we only use AWS for testing, but we certainly used to use block storage to add EBS volumes to the main instance disk, which worked well.

You are correct about the temporary files; these are created on the system disk. Please remember that container fields, even when externally stored, make heavy use of temporary files, and their nature usually means that the content takes up a lot more space than standard fields.

The storage can add significant costs, particularly if you are storing AMIs, snapshots and your FileMaker backups. We reduce the FileMaker backup costs by only retaining a single FileMaker Server backup on the (in Amazon's case) EC2 or block storage instance and synchronising this to S3 storage with the (excellent) AWS CLI. The synchronisation batch file doesn't include the '--delete' flag, so no backups are removed from the S3 bucket during the sync. However, we apply a lifecycle rule to the appropriate folder in the bucket that limits the number of backups retained. All of this is run from system scripts in FileMaker Server, has worked brilliantly for many years, and the S3 storage is much cheaper than the EC2 storage.
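
As a rough illustration of that sync step (the bucket name and backup path here are placeholders, not our actual setup):

```bash
# Placeholder bucket and path - not a real configuration.
# No --delete flag, so objects already in the bucket are never removed by the sync;
# retention is handled separately by an S3 lifecycle rule on the bucket/prefix.
aws s3 sync "/opt/FileMaker/FileMaker Server/Data/Backups/" "s3://my-fms-backups/daily/"
```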

Kind regards

Andy

Hi Andy

Thanks for the info. The solutions on this server do not have much in the way of container data - it is probably 95% text data. Is there a way to see how much space FileMaker's temporary files are taking up?

Do you have any thoughts on whether it is beneficial from a performance perspective to put data and backups onto different disks on the AWS platform, or does the AWS infrastructure essentially mean that there's no benefit to doing this?

We do have a nightly routine that takes the backups into an external backup system (not S3 but a similar principle). You've prompted me to realise I don't really need to keep 7 days' worth of backups on the server as I have that elsewhere.

Best regards
Carl

Hi Carl

As mentioned, we only use AWS for testing, so my experience in an AWS production environment is not the best. However, I would point out that within the original Claris/FileMaker AWS Cloud, I believe every instance consisted of a CentOS volume and a separate volume for the data. I'll let you read into that what you wish, although it is worth remembering that this was also their mechanism for upgrading systems. Regardless, I wouldn't imagine they would have set their own cloud up this way if it were poor practice. Again, we don't use Claris/FileMaker Cloud, so I'm only repeating what I've read.

I believe it is possible to keep an eye on the temporary folders (FMS and FMP) simply by monitoring the folders identified by Get ( TemporaryPath ). This is where I've observed the container field activity.
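
For example, something along these lines on the server will show how big that folder is getting (the path is a placeholder; use whatever Get ( TemporaryPath ) reports, or FMS's own temporary folder on your box):

```bash
# The path is a placeholder - substitute the folder that Get ( TemporaryPath )
# reports, or the FileMaker Server temporary folder on the server.
du -sh "/tmp/FileMaker"

# Re-run it (or schedule it via cron) while the solution is busy to watch how
# large the temporary files grow.
```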

All the best

Andy