Let's deep dive and explore Performance

Thanks, that is very kind of you.

I am interested in developing a deep conversation on performance with the intention of finding and sharing all the key insights and knowledge of the community.

The biggest negative for FileMaker over the 24 years I have been actively developing has been poor performance, yet it seems to be the area where the least hard information is shared.

I went to the US in January 2011 and met Rick Kalman and his team to start that dialogue which has continued over the intervening 10 years.

Would this format be a good place to develop that dialogue? If so what would be the best means of doing so?

Cheers, Nick


Hi @nicklightbody, On the homepage you'll find the Resources channel. That's the place we've set up for extending the documentation of Claris FileMaker. Start a new topic. There are plenty of people who will be interested in following your thoughts.


Hello @nicklightbody, I moved your message to the Lounge which is our channel for doing exactly what you propose: exploring a topic in depth.

Further, as @Malcolm mentions, the key insights and knowledge can be written into a white paper and posted in our Resources channel.

We also have the ability to create work groups and wikis for collaborative work. Depending how you see this, as a conversation or a more structured project that you would like to lead, let @Malcolm or myself know and we can provide the technical means.


I've also done all kinds of performance testing comparing different technologies to FileMaker.

Hi Nick,
I'm glad you were able to meet Rick and the team in 2011 and that you've had ongoing discussions with them.
But from what I can see, they did nothing with those discussions. They've never invested in performance (except to make their WebDirect stuff barely work); there are barely noticeable improvements, and lots of regressions.

Keep watching. The important part for you @Vincent_L, is going to be that you will need to adjust your development habits slightly to take advantage of the performance increases.

The reality is, everything they have been working on for the past 5 years is geared toward bringing performance enhancements. They had to shift to a new approach, but there were, indeed, performance improvements.

@jormond — are you referring to “Cloud Smart”?

No. There were issues with the page-locking method that was tied to Startup Restoration, so they had to move to a new technical approach to achieve better performance. With that foundation in place, it leads to MORE increases in performance, to MORE streamlining of tech and data streaming. It's really amazing to see the places where they are already making advances. The gains may be gradual enough that you don't go "wow, that's fast", but over time it's going to get faster.

Cloud Smart is a whole other thing. And it's a good thing for performance, and for the long-term life of the platform. If you listen to Brad's discussion of the future of FileMaker without cynical ears, you will hear it. I'm also having other conversations, and in Brad's description... I'm hearing many of those things. They are listening, acting, and invested in the future of FileMaker.


Here's a nice summary of at least one performance enhancement they've made/are continuing to work on:

I'm really stoked about shared locks for multi-user systems! It just makes a lot of intuitive sense too.

EDIT: server-side sorting is pretty dang neat too, where applicable. I hope they continue to refine it.


I am very conscious that the biggest perceived user issues with FileMaker over many years have been regarding performance. That is why I have focussed on this area for quite a few years, why I released my White Paper on Understanding FM Server in 2015 (Understanding And Tuning Filemaker Server Performance: Introduction), and why, also in 2015, I developed and open-sourced dsBenchmark as a reliable means of comparing the potential load capacity of different FMS deployments (dsBenchmark - The Filemaker Server Load Tester).
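For context, the core idea behind a load tester like dsBenchmark can be sketched in Python. Everything below (the function names, the placeholder workload) is illustrative only and is not dsBenchmark's actual implementation, which drives real FileMaker clients against a server:

```python
import threading
import time

def simulate_client(duration_s, counter, lock):
    """Stand-in for one client session; a real tester would perform
    FileMaker operations (finds, sorts, record edits) instead."""
    done = 0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        sum(i * i for i in range(1000))  # placeholder workload
        done += 1
    with lock:
        counter[0] += done

def run_load_test(n_clients, duration_s=1.0):
    """Run n_clients concurrent sessions and return total ops/sec."""
    counter, lock = [0], threading.Lock()
    threads = [threading.Thread(target=simulate_client,
                                args=(duration_s, counter, lock))
               for _ in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0] / duration_s

# Compare throughput at increasing concurrency to find the knee
# where a deployment stops scaling.
for clients in (1, 2, 4):
    print(f"{clients} clients: {run_load_test(clients):.0f} ops/sec")
```

The useful output is not any single number but the shape of the curve: where adding clients stops adding throughput is the deployment's practical load capacity.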
However, we are all engineers, building machines and the laws of physics apply.
The more you ask the machine to do the slower it will run and conversely the less you ask it to do the faster it will run.
If you build a FileMaker Solution the right way it will be fast.
The challenge is that FileMaker is very permissive, there are few things you may wish to try that it will disallow, and this can lead to designs that do not perform well.
The biggest problem is that not much is published about the basics of performant FileMaker design. There is plenty about tips and tricks, and less about performance.
I have just built an Image Display App, firstly because I needed something to extract, store and display the many images hidden away in iPhone, Aperture, Photos and many backups, and secondly because I wanted to demonstrate how efficient FileMaker is at handling images.
So, this is an open server:
go and add a host at deskspace.fmcloud.fm
There are five nearly identical files ready for you to test on larger and on smaller devices.
There is only a standard 5-user dev licence, so you may need to come back later if it is busy.
The purpose of the nearly identical files is to test which combination of no thumbs / temp thumbs / perm thumbs and open / secure with external storage (but not import by reference) works best - is most performant.
So there are five to try out: dsImages_v22x _on (open no thumbs), _ot (open temp thumbs), _op (open permanent thumbs), _sn (secure no thumbs) and _sp (secure permanent thumbs).
Please try them and see what you think and share your thoughts and observations.
There are the same 1034 images in each App.
I look forward to hearing your responses.
I will share exactly how this is designed a bit later.
I will say that it is Mobile First, in common with everything I have built since 2012.
Cheers Nick



The largest version of this we have running here locally in Cambridge has 350k images in it.


Tapping an image opens it in a higher res - the final version is the full res in a responsive web viewer

The easiest objective test to run on each of the instances is Test 2. This scrolls through 500 records on the Edit screen using goto next record and refresh and records the elapsed time in Events, together with all the environmental information, accessible by tapping any line in the events log.
The easiest subjective test is to open View, scroll down the images, and see how responsive it is.
Tap the year filter at the top - select a year - and review those images - comment on responsiveness.
Tap an image several times until you get the simplest screen with the controls at the top - this is the zoomable web viewer using a Base64 version of the image. On iOS, stretch the image; on a Mac or Windows, maximise the window. Again, observe performance.
Tests 1 & 3 run in cycles of 10 and need you to respond to each image change - again the elapsed time is recorded in the Events log.
Test all runs Tests 1, 2 & 3 as many times as you request - you can cancel and stop the process on any response.
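As a rough analogue, the pattern behind these timed tests (run N steps, then capture the elapsed time plus environment details into an events log) looks like this in Python; the field names and the per-step workload are made up for illustration and are not the app's actual schema:

```python
import platform
import time

EVENTS_LOG = []  # stand-in for the Events table in the app

def run_timed_test(name, steps, step_fn):
    """Run step_fn once per record/step, then log the elapsed time
    together with environment information, mirroring how Tests 1-3
    record their results in the Events log."""
    start = time.monotonic()
    for i in range(steps):
        step_fn(i)  # e.g. go to next record + refresh
    elapsed = time.monotonic() - start
    EVENTS_LOG.append({
        "test": name,
        "steps": steps,
        "elapsed_s": round(elapsed, 3),
        "os": f"{platform.system()} {platform.release()}",
        "runtime": platform.python_version(),
    })
    return EVENTS_LOG[-1]

# Example: a trivial per-step workload standing in for the
# record navigation and screen refresh.
result = run_timed_test("Test 2 (scroll 500)", 500,
                        lambda i: sum(range(200)))
print(result)
```

Recording the environment alongside the elapsed time is what makes results from different devices comparable afterwards.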
Looking forward to hearing your subjective observations and seeing the objective times in the Events Log.
Cheers, Nick



On FileMaker Server Linux performance please see my analysis here on Medium:


go and add a host at deskspace.fmcloud.fm

Has anyone else got in? I'm only seeing the default FMS index.html at this location.

Well, over the past 5 years we saw some performance improvements driven by Claris's own agenda: shoving pricey WebDirect and cloud stuff down our throats. First they made some WebDirect speed improvements, much needed because it was slow as molasses, so those improvements ended up making WebDirect just barely usable. Given that the maximum user count of WebDirect was (is) laughable, and the CPU/RAM usage per user is insane, they had to do something to improve it, and then multi-user performance (just remember that some of the recent find performance improvements are a response to a huge performance drop that occurred in the FM 15 days when several users launched simultaneous finds).

And even then their efforts are kind of questionable (to say the least): they created a persistent cache in FM 15 that, rather than speeding things up noticeably, turned out to slow down app launching by several minutes. In 19 there's finally a setting to disable this fiasco (and even that they did the wrong way…).

  • They created page locking: when I asked why they had tied it to Startup Restoration, Clay said they had to because page locking introduced instability. From that moment I knew it would be a disaster, and of course that disaster happened (yes, Claris can really marvel at the community's resilience to two fiasco features in a row). How could anyone design something that introduces instability, and then counter it by adding a process which itself takes 10% CPU? This is flawed on so many levels… I can't understand how any responsible person would greenlight this for release, putting user data at risk (yes, lots of people got corrupted data from this). Thankfully it's now deprecated.
    By the way, I remember your words the year before, telling me how awesome their new feature would be… So pardon me if I take your ever-unconditional optimism with a hefty dose of salt.

So now, in 19, third time's the charm (?) with Shared Locking. But it only accelerates reads, no longer writes. Clay said we don't need multithreaded writes that much… hmm, OK (I disagree).
But more importantly, it only speeds up heavy multi-user usage (which fits their agenda), not single-user, i.e. engine, performance.

I can understand that we need multi-user improvements, but that's only one side of the equation, and not the one that affects most users. A lot of users, and I dare say most, are more concerned about single-task speed, be it a big automated import or computation, preparing reports for instance, or just the execution of their own session. In reality, in most deployments one user will be doing something intensive at a given time while the others are mostly idle. For this common scenario, the new speed improvements yield nothing.

To me, the whole platform, all use cases, even heavy multi-user ones, would benefit immensely from single-user (that is, engine) speed-ups. Accelerating the engine gives the biggest bang for the buck. But unfortunately there has not been a single speed improvement in that department. Maybe ever. And that's the big problem.

  • They introduced While: is it faster than recursion? No. (Plus it's complicated to use; I would have much preferred a For function, which would have been faster and easier to understand for 90% of people, and enough for 80%.)
  • They introduced value list and JSON functions: for speed, they're absolutely trounced by the BaseElements and MBS plugins.
  • Someone showed that concatenating onto a variable is slow, and gets slower as the variable grows, compared with Insert into a variable. Did they jump on this opportunity to magically speed up lots of use cases? No.
  • I reported tons of performance issues that have nothing to do with solution design. Did they fix any? No.
  • Clay said that the calc engine can't benefit from the CPU's branch prediction (because of variable creation via Let statements). Imagine that: a calc engine that can't use branch prediction. Fixing this could be done with the much-needed variable creation function we have deserved for years.
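On the concatenation point: the slowdown reported there is the classic quadratic-append pattern. FileMaker's internals are not public, so the Python sketch below only illustrates the general effect (the alias trick exists purely to defeat CPython's in-place append optimisation, so the copying cost becomes visible):

```python
import time

def concat_in_loop(pieces):
    """Append to one growing string. With a second live reference,
    every append must copy the whole string so far, so total work
    grows as O(n^2) in the number of pieces."""
    s = ""
    for p in pieces:
        alias = s  # second reference forces a full copy on append
        s = s + p
    return s

def collect_then_join(pieces):
    """Collect the pieces and join once at the end: O(n) total."""
    return "".join(pieces)

pieces = [f"{i}\r" for i in range(20_000)]

t0 = time.perf_counter()
a = concat_in_loop(pieces)
t1 = time.perf_counter()
b = collect_then_join(pieces)
t2 = time.perf_counter()

assert a == b  # same result, very different cost
print(f"append-to-variable: {t1 - t0:.3f}s")
print(f"collect-and-join:   {t2 - t1:.3f}s")
```

The same shape applies to any engine that copies a value on each append; batching the pieces and combining once avoids the repeated copies.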

So I don't see any true engine performance improvements.

The problem is that Claris doesn't think there's a performance problem. Sadly, they are comforted in this wrong belief by the most vocal community members, who are even more afraid than Claris themselves of a single bit of negativity being written anywhere about FileMaker (because they fear their end customers may question the platform, and hence their business).

So community pundits keep saying: you're using it the wrong way, blaming the poor knowledge worker for their bad design choices.

Let's look at what those supposed bad design choices are:

  • You're using unstored calcs; don't use them if you want speed. Yet those same pundits will tell you that unstored calcs are a FileMaker strength (one you can't use).
  • You're indexing too much, ignoring the fact that unindexed data is basically useless in FileMaker. If you need to use a field for anything more than reading it with your own eyes from time to time, it needs to be indexed. Period.
  • Try not to use too many related fields, because they slow things down a ton. Good lord, FileMaker is supposed to be a relational database. Please de-normalize your design to get speed. WTF.
  • You have to use narrow tables. This advice came not so long ago, decades after FileMaker's birth. And why do you need narrow tables? Because FileMaker is so poorly designed that when accessing a record it reads all the fields (thank god, except unstored calcs and containers), not just the ones it needs.
    Moreover, you often have to use wide tables for exactly the reason above: to get decent speed you need your data de-normalized and cached in the main record.
  • Don't use the > ≥ < ≤ operators in your relationships because they'll slow things down to a crawl. So WTF are they there for?
  • When opening a database, make sure to show only one record, because otherwise FileMaker will load all records. WTF.
  • When writing a script, make sure to freeze the window, because otherwise FileMaker will uselessly redraw everything for each record.
  • Please create blank layouts for all your tables to run your scripts on, because otherwise the scripts will be slow.
  • Don't use ExecuteSQL because it's a kind of emulation: all calls are translated into native FileMaker find queries. OK, fine, but why would a left join with more than one criterion slow things down by a factor of up to 481×?
  • Don't use ExecuteSQL if your records are uncommitted, because that will be slow as hell.
  • Don't use ExecuteSQL because the first time you execute a query it will be 10× slower than the second time, because FileMaker first has to download all the required records and run the query on the client.
    By the way, when I uncovered this I was dissed as hell by those community pundits, who called it an absurdity and insisted (disregarding all the facts) that the query was executed by the server… until they asked Clay himself, in a funny Q&A session where they kind of mocked me, and he confirmed that yes, the queries are run by the client on the cached data.
  • As the excellent Vincenzo showed, don't even edit a record directly: put the edits in globals, build a JSON object, then ask the server via PSoS to commit your edits; it will be much faster. Yes, even on a LAN. WTF: the most basic thing FileMaker does, it does slowly.
  • Don't use the Set Field script step, because it's slow as hell. Use imports instead.
  • If you want to import stuff, make sure you don't index most of the imported fields (but of course if you want to use them, then index them; oh, and by the way, if I bother to import data it's because I want to use it).
  • Don't use too many relationships, because each relationship you add to the graph, even a completely unused one, puts some weight on the solution: not only when opening, but at run time, even if the extra relations are never used.
    But at the same time you need tons of relationships, because most layout objects rely on them.
  • Don't use a spider graph; use that nonsense called anchor-buoy instead. Never mind that the spider graph was shown to be faster for years, until at some point Claris changed the internal indexing to favour anchor-buoy, to make the pundits happy so they can sell pretty graphs and diss the knowledge worker for his ugly graph (which, yes, is ugly, but wouldn't matter at all in any other environment, because computers don't care about ugliness, and you'd have decent search tools).

In fact, most of those "bad habits" are just using FileMaker's supposed strengths by the book. Making a FileMaker solution bearable (i.e. fast, in pundit language) consists of un-using FileMaker, bending it to avoid all the features that make FileMaker FileMaker.

So, to get a speedy FileMaker solution: please don't use FileMaker features, and try using web viewer stuff instead (forcing you to be the JavaScript master you precisely don't want to be).

To me, a RAD vendor should make sure its nice features can actually be used, and if it publishes those features it has to make sure they're speedy. Otherwise it is just selling vaporware.

Unfortunately, those in charge of today's Claris are the same people who were in charge of FileMaker Inc., the ones who made those bad design choices, and mostly it's immobilism.
It's very, very rare that the best people to solve a problem are the ones who created it.

Oh, I know there are a few new heads, but they came from inside FMI (aside from Claris Connect, which is useless and was killed right at the start by insane pricing), so they're blinded by their "culture".

If they treat FileMaker's technical side the way they treat their "marketing" and their community handling (where they keep praising the awesome community while completely destroying its main tool, the community forums, by using the worst forum platform on earth, Salesforce), then we're in a very bad situation.


There are a variety of themes in @Vincent_L's post above which I don't feel I can address.

But - there is one item that stood out to me, which I'll mention:

It is helpful to recognize that there is a wide variety of sub-topics that fall under the broad umbrella of the term "performance", and Vincent named a lot of these sub-topics, e.g. speed of the calculation engine, speed of the script engine, the ability to fully use machine resources, strategies to streamline network usage, the existence and nuances of resource locking, solution design/architecture, and probably many more that I am leaving out.

For me, having this kind of itemization of sub-topics is helpful to a conversation about "performance", because it encourages us to drill down into specifics as we communicate ideas, and that has great potential to assist in terms of clarity.