External Container Storage Structure

Is there a way to structure the files when using external container storage on FileMaker Server?
The problem is that FileMaker Server creates one folder to store all data for a specific container field in a specific table, but that folder now contains over 17,000 pictures and documents, and I think that is the reason adding new documents has become extremely slow.
My guess is that dividing this folder into subfolders, each holding fewer files, should improve speed.

Can there be another reason why adding docs is now so much slower than in the beginning, while adding text or numerical data to other fields is as fast as ever?
SSD: 500 GB
Docs folder: 14 GB for over 17,000 files

Yes there is: in the container field setup you can specify what the sub-folder structure should look like, to limit the number of files in any single folder. You can calculate it by client, by month, or by any other means that makes sense.

All OSes have trouble with a very large number of files in a single folder.


Great!
Thanks, Wim!
I will check it at once.

You're welcome.

We have a few rules we use for external file storage.

First, we change the File -> Manage -> Containers setting from the default [hosted location]/Files/ to [hosted location]/FileName.

The reason for this is that we got caught out when adding encryption at rest (EAR) to a few hundred files spread across our customer base. If you have a separated or multi-file solution and leave the default settings, it is almost impossible to add EAR, as EAR will fail unless it has access to all the externally stored files (EAR will also report failure if a file is missing, but that's another story).

Hosting on FMS using the default is fine, as the RC_Data_FMS folder adds a file name ahead of the 'Files' folder. But when working locally, as you have to with EAR, the folder name given after [hosted location] must be in the same folder as the file. This also helps when using fmDataMigration, or when uploading via File -> Sharing -> Upload to Host. You cannot just take a backup with the RC_Data_FMS folder (or the first-level folders within it, named after the files) and upload it that way. With the default settings, the externally stored files have to sit in a folder called 'Files', so in multi-file/separated solutions you're in trouble, as you can only have one folder called 'Files' for all your .fmp12 files.

As for the container's 'Store container data externally' option: if you're using 'Secure storage', this sorts itself out. However, when using 'Open storage', we always enter:

Year ( Get ( CurrentHostTimestamp ) ) & "/" & Month ( Get ( CurrentHostTimestamp ) ) & "/"

This gives a folder for each year, each with up to 12 subfolders (1 to 12, for January to December), so each subfolder only contains one month's worth of files. This has worked very well for us over the years and avoids the problem you mentioned in your original post.
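For readers less familiar with FileMaker calc syntax, the year/month path produced by the calculation above can be sketched in Python. The function name here is hypothetical and purely illustrative; FileMaker evaluates the calculation itself when it stores each file.

```python
from datetime import datetime

def container_subfolder(ts: datetime) -> str:
    """Mirror of the open-storage calc
    Year(...) & "/" & Month(...) & "/":
    one folder per year, with up to 12 month
    subfolders (1-12, no zero padding) inside it."""
    return f"{ts.year}/{ts.month}/"

# A file stored in March 2024 would land in "2024/3/"
print(container_subfolder(datetime(2024, 3, 15)))
```

The point of the layout is simply that no single folder ever accumulates more than one month's worth of files.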

We came across a new client recently whose incumbent developer had based the open storage path on the primary key serial number. They had a folder for every single record in every table containing a container field!

We didn't realise how important it is to set up external storage correctly until GDPR forced us into EAR. The default settings increased the time it should have taken by an unbelievable amount, as the impact is not apparent until you need to take the hosted files and work on local copies.

Kind regards

Andy


Thank you, Andy.
I wasn't aware that you could set the path to the files yourself, as Wim also mentioned.
It's indeed very easy, and it's even easy to change afterwards: FileMaker moves all the files to the new folders if it has the right info. Piece of cake.


If you change Month to WeekOfYear, you'll end up with roughly 52 folders per year and not as many files per folder. 🙂
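The weekly variant can be illustrated the same way. Note that FileMaker's WeekOfYear numbers weeks starting on Sunday, which differs from Python's ISO week numbering, so this sketch shows the resulting folder layout rather than the exact calc: an ISO year has 52 or occasionally 53 week buckets.

```python
from datetime import date

def weekly_subfolder(d: date) -> str:
    """Bucket by ISO year and ISO week number:
    roughly 52 (sometimes 53) subfolders per year
    instead of 12 monthly ones."""
    iso_year, iso_week, _ = d.isocalendar()
    return f"{iso_year}/{iso_week}/"

# Mid-March 2024 falls in ISO week 11
print(weekly_subfolder(date(2024, 3, 15)))
```

One subtlety worth knowing: dates near New Year can belong to the previous ISO year's final week, e.g. 2021-01-01 buckets into 2020's week 53.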

My only contribution here would be that when using secure (encrypted) storage for a large amount of data, the obfuscated folder hierarchy that FileMaker creates can be extremely complex, and this can cause performance issues if you're compressing backups with zip, etc., as part of post-backup processing.

I have a vertical solution which uses a webviewer to display images. The solution was originally developed before the new container fields were introduced. A script imports batches of images, creates thumbnails and displays full-size images in a webviewer.

I had anticipated that the number of files in a single folder could be an issue, and allowed for multiple folders to be created (the script also moves files into a new folder). However, as far as I know this option hasn't been used. Typically users have tens of thousands of images in their image folders; a few have 150,000 to 200,000. Moving the images between folders or displaying them hasn't been a performance issue.
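A script along those lines, moving files into new folders so that no single directory grows past a cap, might look like this Python sketch. The function name, the `batch_N` folder naming scheme, and the cap are all assumptions for illustration, not the actual FileMaker script described above.

```python
import os
import shutil

def rebalance(folder: str, cap: int = 10_000) -> None:
    """Move loose files into numbered subfolders
    (batch_0, batch_1, ...) so that no single
    directory holds more than `cap` files."""
    # Snapshot the plain files first, so the moves
    # below don't affect what we iterate over.
    files = sorted(
        f for f in os.listdir(folder)
        if os.path.isfile(os.path.join(folder, f))
    )
    for i, name in enumerate(files):
        dest = os.path.join(folder, f"batch_{i // cap}")
        os.makedirs(dest, exist_ok=True)
        shutil.move(os.path.join(folder, name), dest)
```

With 25,000 images and the default cap, this would yield three subfolders of at most 10,000 files each; sorting first keeps the distribution deterministic across runs.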

I've thought about using container fields instead of a webviewer but wonder if the number of files/folder would become a problem. I'm not sure under what circumstances this is an issue. Is using FileMaker's container fields more problematic in this respect than other forms of accessing files? I think I would probably use open storage.

Much depends on the intent of the remote container files. They're essentially just a way for FMS to manage the storage and backup of those files. They are not a document management solution that allows direct access to the files, or displaying them in any way other than in a container field.

If document management features are important, I would likely just integrate with a provider that already has most of those features: box.com, Dropbox, OneDrive, Google Docs... They all have excellent APIs, and FileMaker happens to be extremely good at integrating with APIs.

Two screenshots of a demo file I'm working on show how powerful an integration with Microsoft 365 is: easily sending HTML emails, for instance, or listing files on OneDrive, downloading and uploading files. And since you leave the actual files on OneDrive, there is no bloating of the database, and you can use all of its native features, like creating a share link or retrieving older versions.


I can even ask their API to create a PDF version of a Word doc I have there:

And of course, since it's all just API calls, it works seamlessly from Pro, Go, WebDirect and server-side.

Short version: if document management features are what you're after, these are all solved problems; there's no need to try to make the remote container feature into something it is not.
