Could Someone Give me Advice on Optimizing FileMaker Pro Performance?

Hello there, :grinning:

I have been working with FileMaker Pro for a while now and have encountered some performance issues with my database solution. I am hoping to tap into the collective wisdom here to gather some advice on optimizing performance.

Currently, I am noticing delays when running scripts and performing searches, especially as the dataset grows.

Some scripts that involve complex calculations or large data manipulations are taking longer than expected to execute. Are there strategies or best practices for optimizing script performance in FileMaker Pro?

As the number of records increases, I have noticed that searches are becoming slower. Are there indexing techniques or query optimizations that could help improve search speed in FileMaker Pro databases?

Could layout design impact performance significantly? I have used a variety of layouts with different levels of complexity. Are there guidelines for designing efficient layouts that minimize performance bottlenecks? :thinking:

How much do server hardware and configuration affect FileMaker Pro performance? Are there server-side optimizations or settings that could enhance performance in a multi-user environment? :thinking:

I am considering using plugins or integrating external systems with FileMaker Pro. How might these affect performance, and what considerations should I keep in mind? :thinking:

Also, I have gone through an earlier post on this topic, which definitely helped me out a lot.

I am excited to learn from your expertise and improve the performance of my FileMaker Pro solution.

Thank you in advance for your help and assistance. :innocent:


Do you use unstored calculation fields? These are performance killers. The more records there are, the longer a report will take to display... the first page. And then when you scroll to the second page, the wait starts over.


You'll need to tell us what you are doing before we can make suggestions.

Indexing does not affect searches in FileMaker. Strange but true. However there may be ways to speed up your searches. Again, you need to tell us what you're trying to do.

Yes, it can. The great thing is that you can have complex layouts for humans to interact with, and blank layouts for scripts to interact with. If you think the complex layout is slowing things down, tell the scripts to switch to a blank layout to do the processing, then switch back to the original layout after it has run.
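As a rough sketch in script steps (the layout names here are just placeholders):

Set Variable [ $layout ; Value: Get ( LayoutName ) ]
Go to Layout [ "Blank Utility" ]
# ... heavy processing steps run here, with no fields to render ...
Go to Layout [ $layout ]

Because the blank layout has no fields or objects, nothing needs to be fetched or redrawn while the processing steps run.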

You need sufficient memory (FMS doesn't need large amounts), a fast storage device (SSD), and a fast network. You'll find minimum requirements at Claris Tech Requirements for FMS. FileMaker "chats" with the server a lot, and any network latency causes laggy behaviour.

Plenty of people do this with great results. If you are running into any problems you can ask for assistance.


I’ll second the unstored calculation impact. The more complex the unstored calculation is, the slower the performance.

There are some great ways to mitigate the impact of unstored calcs by moving them to auto-enter fields and controlling the record update method.

Also, the more complex your table occurrence groups are, the greater the impact on performance. Each relationship in FileMaker is the equivalent of a query; therefore, any unstored calculation or other table field on any layout has the potential to evaluate every relationship before displaying, in order for the data to be accurate.

ExecuteSQL can also be a performance issue. If any records in the data set that an ExecuteSQL query touches are open, the query will wait until the record lock is freed.

When you execute a find on another layout, enter Find mode BEFORE changing layouts. That way the records don’t all transfer to the client when you are just going to look at a found set anyway.
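In script-step form, that ordering looks something like this (table and criteria are hypothetical):

Enter Find Mode [ Pause: Off ]
Go to Layout [ "Invoices Utility" ]
Set Field [ Invoices::Status ; "Open" ]
Perform Find [ ]

Because the layout switch happens while already in Find mode, no records are downloaded for the intermediate layout.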

As you are well aware, everything in FileMaker is context sensitive. Create blank utility layouts, which can be used for many purposes and never have to load records to have a place to operate from.

These are all generic; your situation may have specifics we can help with. Share some more and everyone here will be happy to help.

Also of highly relevant note: THE FOUND SET IS WINDOW (not layout) DEPENDENT.

Switch to a blank layout in the same context as the user output layout, perform your find there, then switch back to the original. As long as you stay in one window, the found set travels with the layout change.

Using a blank layout removes a lot of overhead impacting performance.

There are also techniques for using multiple windows and transferring the found set between them. These techniques can make for a good UX.


I managed to speed up my solution considerably by changing 3 things:

  1. One of my relationships was inadvertently equipped with a sorting option. When I removed this, there was already a significant improvement.
  2. Several SQL queries were previously executed directly in Set Field, which resulted in the record being open after the first query. Any subsequent SQL query was therefore very slow. The solution: execute SQL queries into variables first, and then write the result to the field.
  3. The most significant performance improvement was achieved by deconstructing the SELECTOR/CONNECTOR structure. Previously, virtually all tables were connected to each other via the connector. Now I use a pure anchor/buoy structure where all base tables are independent of the other base tables.

Hope this helps.
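Point 2 can be sketched like this (table, fields, and query are hypothetical): run the query into a variable first, and only then touch the record:

Set Variable [ $total ; Value: ExecuteSQL ( "SELECT SUM ( Amount ) FROM Invoices WHERE CustomerID = ?" ; "" ; "" ; Customers::ID ) ]
Set Field [ Customers::InvoiceTotal ; $total ]

This way the record is not opened (locked) until all queries have completed.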

Unstored calculations was mentioned a couple times above as a potential performance hit.

I'll add that List View can also be slow if not carefully considered/designed, for example summary fields and unstored calculations on a List View can crawl to a near-halt. It can help to instead calculate the values (and summaries) only once a report (using a List-type view) is rendered.

One final point: It may not be intuitive, but running scripts on a blank (zero fields) Form layout based on the appropriate table occurrence can really provide a boost. I use these "blank utility" layouts off-screen (position left -4,000) for nearly all of my processing scripts.

If you convert unstored calculations into auto-enter calcs, here is a method to make that functionally equivalent:

The auto-enter calc contains the same calculation that was in the unstored calculation, but prefaced with a Let statement that references a trigger field, such as a timestamp. To force the auto-enter to re-evaluate, set that timestamp field as part of a record commit; because the timestamp field is the object of the Let statement, the remaining (and likely completely unrelated) calculation will evaluate again.

Let (
~trigger = triggerTimestamp ;
Quantity * UnitPrice  // your original calculation goes here; field names are just an example
)


To be safe, I run a scheduled script at night that loops through each record and sets the timestamp field.
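A nightly refresh script along those lines might look like this (table and layout names are hypothetical; the trigger field matches the Let example above):

Go to Layout [ "Blank Utility" ]
Show All Records
Go to Record/Request/Page [ First ]
Loop
  Set Field [ Orders::triggerTimestamp ; Get ( CurrentTimestamp ) ]
  Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
Commit Records/Requests [ With dialog: Off ]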

The resultant field is static and therefore has no negative impact on performance.


What version of FMS and OS are you using? I've seen some noticeable slowdowns in FMS19, but some dramatic speedups in FMS20.