ESS Question

I have a client with a large FMP (20) table: about 700,000 rows, each with about 350 fields.

This table has lots of relationships (in the RG) also.

The problem is that when he navigates to the next or previous record, it's extremely slow.

OK, so I'm wondering if migrating that data to MariaDB and using ESS/ODBC to put that table back into the RG would help speed things up in FileMaker.

The advantage of using MariaDB would be that we can do hundreds of thousands of SQL updates, inserts, etc., in a few seconds directly to MariaDB. But, I'm not sure if the ESS connectivity would be fast enough to justify the entire rearchitecting.
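For a sense of scale, bulk writes of that size are cheap in any SQL engine when batched into a single transaction. A minimal sketch of the pattern, using Python's stdlib sqlite3 as a stand-in for MariaDB (with MariaDB you'd use a connector library instead, but the executemany-inside-one-transaction idea is the same; the table and row counts here are invented):

```python
import sqlite3
import time

# In-memory SQLite stands in for MariaDB here; the bulk-insert pattern is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER)")

rows = [(i, f"widget-{i}", i % 100) for i in range(200_000)]

start = time.perf_counter()
with conn:  # one transaction for the whole batch -- the key to bulk speed
    conn.executemany("INSERT INTO widgets (id, name, qty) VALUES (?, ?, ?)", rows)
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM widgets").fetchone()[0]
print(f"inserted {count} rows in {elapsed:.2f}s")
```

Wrapping the whole batch in one transaction avoids a per-row commit, which is usually what makes "hundreds of thousands of inserts in a few seconds" possible.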

One other solution would be to refactor the huge FMP table, but that's been done already to the max extent possible. There really are 350 attributes per row and 700,000 rows of data so far.

Would appreciate any thoughts on ESS or how to possibly speed up FileMaker (we've already tried the "use a blank layout" approach).


There is ODBC connectivity you could also consider. The MBS plug-in provides database connectivity, too.


350 fields is a lot! Are there fields like address_1, address_2, ..., address_n, or repeating fields?

If the size of the table (the number of fields) can't be reduced any further, one thing that can be done is breaking the large table into several smaller tables, keeping in the original table the fields that are queried most often.

Among these 350 fields, are there a lot of calculated fields? Calculated fields can be a factor in slowing down access.

@Torsten mentioned ODBC connectivity; I remember someone mentioning that JDBC is faster than ODBC.

I understand that reworking the Relationships Graph can be a chore, but the result can be rewarding. Is the Relationships Graph a web? If so, it's a factor in the slowdown.



ODBC would be assumed with ESS, I thought.

Thanks for your note. :slight_smile:

Thanks for your help.

Yes, JDBC is very fast and we use it for programmatic things where we need 400X FileMaker speed. However, for using the RG and putting tables in it that can be used on an FMP layout, I think we need to use ESS/ODBC. At least that's my understanding of ESS/ODBC (hence my posting).


Is it the presentation layer? Is it slow switching records when you are on a blank layout? If not, then your layouts need work.

Is it data? You should be able to establish what the typical data load is for a record. And what network speeds you have. And with those figures estimate the download speed per record. You’ll usually find that data size is not the problem.

Hint: The problem is usually unstored calculations, hidden sort rules, and summary fields.
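Malcolm's back-of-the-envelope estimate can be made concrete. A sketch with assumed numbers (350 fields averaging ~40 bytes each over a 100 Mbit/s link; both figures are hypothetical, so plug in your own measurements):

```python
# Rough per-record transfer estimate with hypothetical numbers.
fields = 350
avg_bytes_per_field = 40                      # assumed average field payload
record_bytes = fields * avg_bytes_per_field   # 14,000 bytes per record

link_bits_per_sec = 100_000_000               # assumed 100 Mbit/s network
transfer_ms = record_bytes * 8 / link_bits_per_sec * 1000

print(f"~{record_bytes / 1000:.0f} KB per record, ~{transfer_ms:.2f} ms to transfer")
```

Even at these sizes the raw transfer is on the order of a millisecond per record, which supports the point that raw data size is rarely the bottleneck; unstored calculations and layout evaluation usually are.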


A really strong second to everything that @Malcolm said.

It's rare that I go so far as to discourage specific decisions, but in this case I'd like to discourage pursuing an ESS approach until there is a really solid understanding of where/how the performance hit is happening, and that understanding makes a good case for ESS helping.

ESS can sound like a promising approach because it's easy to think that the fast back-end is going to translate to fast usage in FMP. But, I've never seen that be the case. It's a cool concept, but it comes with a whole bundle of considerations that I would love to see any dev avoid, if at all possible.

Again, I fully endorse pursuing the lines of thought that Malcolm has sketched. I suspect that is where the biggest wins will lie.


You've received some excellent input here, Oliver. Actually, 20 tables and 700,000 records isn't so "large", but as others have mentioned, 350 fields, if in one table, is quite a lot (referred to as "wide" in DB speak).

If the 350 fields is the combined number across related tables, @Malcolm's suggestion to look at the layout level to see what's being presented is an important tip. Avoiding summary fields wherever possible, replacing them with other methods, or presenting such data only when needed can make a huge difference.

If, however, the 350 fields are in one table (ouch), I would suggest looking at breaking them into separate tables. I have a solution which has a number of wide tables; the widest, I think, is about 150 fields (admittedly a few are deprecated and will be removed once non-dependency is confirmed). It's a laboratory solution with ~100 tables and thousands of fields. A few of the wide tables remain wide because their fields are very directly related to the nature of the particular table: typically something like 50 different direct tests/measurements for a given record. However, anything that could be moved to its own dedicated table (e.g. data from a UV/Vis spectrophotometer, or FTIR, etc.) is managed in a related table. This keeps things snappy by presenting the various types of data only as needed.

Also, take a look at script triggers on layouts. These can really cause a drag on record-to-record navigation.


Thanks all! Great comments and suggestions. I will follow up with the client. :slight_smile:

I have a client with a FM DB that might be considered large. 880 tables, 2300 table occurrences, 32 million records (a couple tables with over 12 million records each), 35,000+ lines of code, almost 4,000 layouts, 500 concurrent users.

I do NOT recommend moving to MariaDB or its ilk. FM is quite well optimized for performance, IF you make the right PRO CODE design decisions (some of the low-code, ease-of-implementation factors do NOT scale well).

And it is [now] fast (once we redesigned screens and code). The original designer/coder's extensive use of unstored calculations and ExecuteSQL took a serious toll on performance. One ExecuteSQL line in one script took 6 min 47 seconds to execute. We replaced it with a TOG with the appropriate relationships, and that line ran in 0.13 seconds. (Note: ExecuteSQL will hang on open records in the data set being used until each record is free, a serious performance hit.)
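The FileMaker specifics can't be reproduced outside FMP, but the general shape of that fix, replacing a per-row query loop with a single indexed join, can be sketched in plain SQL terms (SQLite via Python here; the table names and sizes are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, f"cust-{i}") for i in range(10_000)])
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 10_000) for i in range(50_000)])

# Slow pattern: one query per row (analogous to ExecuteSQL inside a loop).
names = [conn.execute("SELECT name FROM customers WHERE id = ?",
                      (cid,)).fetchone()[0]
         for (cid,) in conn.execute("SELECT customer_id FROM orders").fetchall()]

# Fast pattern: a single indexed join (analogous to a dedicated relationship/TOG).
joined = conn.execute("""SELECT c.name
                         FROM orders o
                         JOIN customers c ON c.id = o.customer_id""").fetchall()

print(len(names), len(joined))  # both return 50,000 rows; the join does it in one pass
```

The two patterns return the same data; the difference is 50,000 round trips versus one set-oriented operation, which is the same trade-off as ExecuteSQL-in-a-loop versus a relationship.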

I would suspect, as others have alluded to, that there are layout objects on the navigated layouts that are negatively impacting performance; the DB in question is on the small side. Unstored calcs and summary fields are the primary culprits. There are workarounds that don't lose the desired functionality. (The problem with low code is that it makes things easy, but not necessarily performant; that's where pro code comes into play.)

@Malcolm hit on the key elements:

- unstored calcs (alternative: replace with an auto-enter, Let-triggered construct)
- sorts on unindexed fields, or on normalized data from related tables (an occasion for denormalizing the data structure)
- summary fields (show them only on demand when needed, NOT as part of the default layout)
- going to a blank layout to perform context activity, since no record content gets loaded
- entering Find mode before switching layouts, then switching to Browse mode, ideally on a subset of records (the initial layout load would otherwise load ALL records, THEN reduce to the found set; entering Find first removes this big transfer)

There are a ton of other, more technical performance tweaks that can be applied through judicious design practices.


Also note that some benchmarking done by Honza @ 24U showed that the more relationships a TOG has, the slower the performance, degrading at a geometric rate.

The issue is that the FM relationship graph is NOT an entity relationship diagram in the strict sense of the word, but a set of structured queries via the relationship definitions. Every time a record changes, FM has to evaluate whether the relational elements need to be updated as a result.

This knowledge was the death of the famous Selector-Connector model. Although that relational model provided great flexibility, with everything related, the subsequent benchmarks proved that it very negatively impacted performance.


Good info thank you!

Regarding SQL in FileMaker... IMHO, it's not good. To me, like lots in FileMaker, aside from the crappy performance, it feels half done. For example, doing an ExecuteSQL() doesn't change the layout to show what you found (it should, optionally); it just gives you a silly text list, and you're on your own from there. Amazingly (just as there is still no built-in way to search all your scripts in version 20), there is still no INSERT, UPDATE, or DELETE unless you use JDBC or a plug-in that supports them.

In FileMaker, a simple SQL query with a GROUP BY on only 50,000 records beachballs FileMaker for me every time. Hung... I must force restart. Not production ready.

On the other hand, I have a MariaDB table with about 7,000,000 records. A GROUP BY query (with no custom indexes created) takes less than 5 seconds. MariaDB, like MySQL, is so fast, it's just amazing.

A query like the 50,000-row SQL GROUP BY that hangs FileMaker completes in MySQL or MariaDB in 0.1 seconds.
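For reference, here is the shape of a GROUP BY at that scale outside FileMaker; a sketch with Python's stdlib sqlite3 and invented data (50,000 rows in 10 groups), which an embedded engine aggregates in a fraction of a second:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [(i, f"region-{i % 10}", float(i % 500)) for i in range(50_000)])

start = time.perf_counter()
rows = conn.execute("""SELECT region, COUNT(*) AS n, SUM(amount) AS total
                       FROM sales
                       GROUP BY region""").fetchall()
elapsed = time.perf_counter() - start

print(f"{len(rows)} groups in {elapsed:.3f}s")
```

This is the class of query being compared above: set-oriented aggregation is exactly what dedicated SQL engines like MariaDB are optimized for.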

ESS is not very performant. FM can be, with correct design, and in many cases it matches or even beats SQL (the DRACO engine in FileMaker is very much tuned for performance, but bad coding and UI practices can make it perform abysmally). Much of what DRACO does is SQL under the covers.

I've had to fix many solutions written by others that are performance hogs; in many cases, just a few minutes of reviewing the problematic layouts can fix the performance issue. Sometimes it turns out to be bad data structure, a much harder problem to fix.

BUT good design, layout, and script practices can scream. I have a new DB I just put up with 4.3M records. FINDs are sub-second on any of the 25 fields. We use it to cross-tabulate parts with inventory.

The dichotomy of FM is the trade-off of quick, low-code approaches that work great for rapid time-to-value but negatively impact performance at scale. Pro-code changes can make this issue disappear as well, and highly experienced FM coders shy away from the low-code methods.

As far as SQL goes, it has terrible type checking, error handling, and syntax.
Feels like a UNIX command line compared to a mature, polished GUI. :slight_smile:


Some great insights here into SQL in FM.....

Kirk, you are 100% right!
It is just a matter of smart database design and using techniques to avoid slow calculations and layouts, and FileMaker will fly through a million-record database.
I would never jump over to something like MariaDB just to get more speed. There, too, you'd have to be smart to keep a million-record database fast.

My posting was not about "jumping over to MariaDB", but integrating it with FileMaker (for various reasons) via ESS.

I've been using mostly Oracle, but also SQL Server, for so long that they're my go-to choice for large systems (with their blazing performance). FileMaker is great for low-end systems, and I love FileMaker's excellent quick-UI capability.

Thanks to all for the replies.


I know projects with large databases that perform very well in FileMaker. FileMaker is not only good for low-end systems.

Where I live, in the Province of Québec, the government launched a web portal that lets car drivers handle transactions regarding driver's licenses and license plates. The backend is Oracle. The day it went online, it crashed right away! That was at the beginning of April 2023, and today there are still issues. A catastrophic project.

Edit: corrected typos

On the 350-field "wide" table: FM loads the entire record (all fields) when each record is displayed, even if the fields are not shown. This is a good argument for narrower tables, as was referenced in an earlier post.

Container fields, especially if stored internally, consume large amounts of bandwidth when loading; ideally, put them in a child table and store them as a reference.

Side note on "blazing" SQL: a few years back I was running a DB on SQL Server on a 480-processor, $3.5M Compaq enterprise server with Enterprise Windows that maxed out at 3,200 order transactions per second, 800 below the demand at the time. Microsoft and Compaq engineers attempted tuning for months.

Moved it to Oracle on a dual processor SPARC (yes it was a while ago) and easily hit 6,000 order transactions per second.

Not the only instance where SQL Server turned out to be a dog.

Not that Oracle is a panacea. We attempted to move an MVS VSAM database, 72 terabytes compressed (400 terabytes uncompressed, 90 days' worth of data, changing daily), to a Sun E10K enterprise server running Oracle. What ran in VSAM in 60 minutes took 27 hours on that multimillion-dollar Sun box after all optimizations were completed.

The right architecture for the right task…