Is there a plugin or tool which can determine the size on disk of the various parts of a FileMaker file, such as field data, indexes, empty space, etc.?
I have a file which has grown quite large, and I'd like to put effort into shrinking it, focusing on the tables that will give me the best return for that effort.
It doesn't show the size of indexes, but I was able to use this to identify potential hotspots (very large tables), where I could then check for Text fields that had "All" indexing enabled but didn't need it.
This is more of a technique (and cumbersome to do manually), but one way is to take a copy of the file and delete its tables one at a time, closing the file between each deletion and recording the size change.
There's probably a world in which this can be done automatically with the BaseElements plugin's SQL function, or even the OData API + Admin API on Linux FMS, but those sound like a lot of work in a different way!
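If you go that route, a one-liner can at least take the drudgery out of the record-keeping. A minimal sketch, assuming macOS and that MyDatabase.fmp12 and size_log.txt are placeholder names:

# Append a timestamped size reading after each delete-and-close cycle.
echo "$(date '+%F %T')  $(stat -f%z MyDatabase.fmp12) bytes" >> size_log.txt
# On Linux, use stat -c%s instead of the BSD-style stat -f%z.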
When hosting a file on FileMaker Server, are unused "blocks" cleaned up automatically? I feel as if I've deleted data from a file and seen the file size shrink. (Using FMS 18.)
If not, is it still recommended that we occasionally shut down the database, copy it to a local machine running FileMaker, open it, and use Save a Copy As (compacted)?
Can the FileMaker Data Migration Tool (FMDataMigration) do any of this for us?
A tip: large FileMaker databases can be compressed at nearly a 10:1 ratio using "XZ" format compression via recent versions of tar.
I wrote a shell script which runs once daily, takes the daily backup folder, compresses it using tar, and copies it to a safe location.
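A minimal sketch of such a script, assuming placeholder paths and that an xz binary is installed (piping through xz makes the compression level explicit; on macOS, plain tar -cJf also works, per the note below):

#!/bin/sh
# Compress the daily backup folder with xz and copy it off the server.
# BACKUP_DIR and DEST are placeholder paths; adjust for your setup.
BACKUP_DIR="/Library/FileMaker Server/Data/Backups/Daily"
DEST="/Volumes/Archive/fm-backups"
STAMP=$(date +%Y-%m-%d)
tar -cf - -C "$BACKUP_DIR" . | xz -5 > "$DEST/fm-backup-$STAMP.tar.xz"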
Here are some benchmarking tests I ran:
# Testing more compression formats
# Note that although tar supports xz compression, 'man tar' doesn't list the --xz option (on macOS 10.13)
# see https://apple.stackexchange.com/questions/271842/create-tar-archive-with-xz-compression
# for other benchmarks of bz2 vs xz, see https://www.rootusers.com/gzip-vs-bzip2-vs-xz-performance-comparison/
# Tests using a single 7690 MB FileMaker database in .fmp12 format:
#
# Type          Level       Size       Time
# ---------------------------------------------
# uncompressed  -           7690 MB    -
#
# tar bz2       default     1190 MB    12.1 minutes
#
# tar xz        0           1300 MB     8.3 minutes
# tar xz        1           1220 MB     9.5 minutes
# tar xz        3           1130 MB    24.5 minutes
# tar xz        4           1030 MB    30 minutes
# tar xz        5            864 MB    55 minutes
# tar xz        7            827 MB    65 minutes
# tar xz        9/default    827 MB    65 minutes
Conclusion: the XZ format is slow, but at the higher levels it offers roughly 30% better compression than bz2 (827 MB vs. 1190 MB).
By compressing the files, I can keep nearly 10x as many daily backups online.
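For what it's worth, restoring one of these archives is a single command; a sketch, using a placeholder archive name from the script above:

# Extract into an existing working directory (tar auto-detects the xz compression).
tar -xJf fm-backup-YYYY-MM-DD.tar.xz -C /path/to/restore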
How does it compare with zip and 7-Zip? Did you try those?
I frequently take database files down to save a compacted copy, because that shrinks the file and seems to clear out some overhead - does that have any impact here?
Are those times really in minutes? I would be inclined to use compression-level=0. I would much rather have the job done and the processor free than have the job grinding away for almost another hour.
Yes, minutes. However, this script only runs once per day, in the middle of the night when almost nobody is using the server. It's an 8-core server, so using one core for about an hour is a worthwhile tradeoff for ~30% better compression.
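For anyone who does want it faster: with the pipeline form above, the level is just the flag passed to xz, and xz 5.2+ can also spread the work across cores with -T0 (multi-threading shortens wall-clock time at a small cost in compression ratio). A sketch, reusing the placeholder variables from the earlier script:

# Fastest: level 0, all cores.
tar -cf - -C "$BACKUP_DIR" . | xz -0 -T0 > "$DEST/fm-backup-$STAMP.tar.xz"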
To answer my own question: I just tried it, and my database file dropped in size by about 30%. My conclusion is that FileMaker Server does not automatically free up unused blocks in hosted files.
[EDIT: removed an incorrect claim about file size]
FWIW, FileMaker will likely "unfree" that space again with regular usage, and you don't get any performance benefit from using Save As Compacted. Actually, I vaguely recall Clay Maeckel saying that it might even slightly hurt performance.
Personally, I wouldn't make Save As Compacted a priority unless I was trying to shrink the file before storing it or sending it somewhere.
Also, if the hope is to increase free space on the server, then I'd argue you don't have enough disk space on the server to begin with. FM files and FMS need room to breathe.