Reducing file size

Is there a plugin or tool which can determine the size-on-disk of various parts of a FileMaker file, such as field data, index, empty space, etc?

I have a file which has grown quite large, and I'd like to put effort into shrinking it, focusing on the tables that will give me the best return for my effort.

FMVis (linked from the FMDiff site as "Download the free FMDiff Edition") gives you something like this:

(screenshot of FMVis output omitted)


That's helpful, thank you!

It doesn't show the size of indexes, but I was able to use it to identify potential hotspots (very large tables), and then check those tables for text fields that had "All" indexing enabled but didn't need it.


This is more of a technique (and cumbersome to do manually), but one approach is to delete your tables one at a time, closing the file after each deletion, and recording the size change.
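The bookkeeping half of that technique can at least be scripted. A minimal sketch; the helper name, the `sizes.log` file, and the database name are illustrative assumptions, not part of any FileMaker tooling:

```shell
#!/bin/sh
# Hypothetical helper: record a file's size with a label after each
# manual delete-and-close cycle, so the deltas can be diffed later.
log_size() {
    db="$1"; label="$2"
    size=$(wc -c < "$db")    # file size in bytes (portable across macOS/Linux)
    printf '%s\t%s bytes\t%s\n' "$(date +%F_%T)" "$size" "$label" >> sizes.log
}

# After deleting a table in FileMaker and closing the file, e.g.:
# log_size MyDatabase.fmp12 "after deleting table Invoices"
```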

There's probably a world in which this could be automated with the BaseElements SQL functions, or even the OData API plus the Admin API on Linux FMS, but those sound like a lot of work in a different way!


More questions about file compression:

  • When hosting a file on FileMaker Server, are unused "blocks" cleaned up automatically? I feel as if I've deleted data from a file, and seen the file size shrink. (Using FMS 18).
  • If not, is it still recommended that we occasionally shut down the database, copy it to a local machine running FileMaker, open it, and use Save a Copy As (compacted copy)?
  • Can the FileMaker Data Migration Tool (FMDataMigration) do any of this for us?
  • Has anything changed in FMS 19 vs 18?

A tip: large FileMaker databases can be compressed at nearly a 10:1 ratio using "XZ" format compression via recent versions of tar.

I wrote a shell script that runs once daily: it takes the daily backup folder, compresses it with tar, and copies the result to a safe location.
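A rough sketch of what such a job can look like; the function name, paths, and the GNU/bsdtar detection are assumptions, not the actual script:

```shell
#!/bin/sh
# Hypothetical nightly compress-and-stash step.
compress_backups() {
    backup_dir="$1"; archive="$2"
    if tar --version 2>/dev/null | grep -q GNU; then
        # GNU tar: select xz via -J, pass the level through the XZ_OPT env var
        XZ_OPT=-9 tar -cJf "$archive" -C "$backup_dir" .
    else
        # bsdtar (macOS /usr/bin/tar): xz settings go through --options
        tar --xz --options xz:compression-level=9 -cf "$archive" -C "$backup_dir" .
    fi
}

# A nightly invocation might look like (placeholder paths):
# compress_backups "/Library/FileMaker Server/Data/Backups/Daily" \
#                  "/Volumes/SafeBackups/fms-backup-$(date +%F).tar.xz"
```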

Here are some benchmarking tests I ran:

# Testing more compression formats
# Note that although tar supports xz compression, 'man tar' doesn't list the --xz option (on macOS 10.13)
# see https://apple.stackexchange.com/questions/271842/create-tar-archive-with-xz-compression

# for other benchmarks of bz2 vs xz, see https://www.rootusers.com/gzip-vs-bzip2-vs-xz-performance-comparison/

# Tests using a single 7690 MB FileMaker database in .fmp12 format:
#
#   Type       Level      Size     Time
#   --------------------------------------------
#   uncompressed file     7690 MB 
#
#   tar  bz2   default    1190 MB  12.1 minutes
#
#   tar  xz    0          1300 MB   8.3 minutes
#   tar  xz    1          1220 MB   9.5 minutes
#   tar  xz    3          1130 MB  24.5 minutes
#   tar  xz    4          1030 MB  30   minutes
#   tar  xz    5           864 MB  55   minutes
#   tar  xz    7           827 MB  65   minutes
#   tar  xz    9/default   827 MB  65   minutes  

Conclusion: the XZ format is slow, but offers roughly 30% better compression than BZ2 (827 MB vs 1190 MB at the best settings).
By compressing the files, I can keep nearly 10x as many daily backups online.

To use the xz format (the --options flag is specific to bsdtar, the tar bundled with macOS):

/usr/bin/tar --xz --options xz:compression-level=9 -cf $ARCHIVE $FILES
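To verify or restore an archive afterwards, no compression flag is needed on the read side, since both bsdtar and GNU tar auto-detect xz. The archive name below is just an example:

```shell
#!/bin/sh
ARCHIVE="daily-backup.tar.xz"   # assumed name; substitute your own
if [ -f "$ARCHIVE" ]; then
    tar -tf "$ARCHIVE"               # list contents without extracting
    mkdir -p restore
    tar -xf "$ARCHIVE" -C restore    # extract into ./restore
fi
```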


How does it do in comparison with zip and 7zip? Did you try?
I frequently take database files offline to save a compacted copy, because that shrinks the file and seems to eliminate some overhead. Does that have any impact?

I'm only using standard OS tools for this, so no 7zip allowed.

The usual choice with tar, gzip (-z), is not as good as bz2, which in turn is not as good as xz in my testing.

Are those times really in minutes? I would be inclined to use compression-level=0. I would much rather have the job done and the processor free than have the job grinding away for almost another hour.

Yes, minutes. However, the script runs only once per day, in the middle of the night when almost nobody is using the server. It's an 8-core server, so tying up one core for about an hour is a worthwhile tradeoff for ~30% better compression.
