SLOW BASIC SQL

We have a simple SUM query on fewer than 30K rows (400 fields per row) using ExecuteSQL().

No joins. No WHERE. Just SUM.

In MariaDB, this query runs in sub-second, but in FileMaker ... it hangs!

No idea why such a simple query would slow down to the point of being unusable, so I thought I'd ask for any suggestions.

Thanks.

Slow network connection?

Are any of these records locked?

FileMaker may start downloading the whole table into the local cache in order to run the SQL locally. You can avoid that by running the SQL in a script on the server (via PSoS).

Also, if one record is locked on your client, that slows down the whole thing a lot, so it's better to commit or revert before running the SQL.
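
For what it's worth, here is a minimal sketch of the PSoS pattern (the script, table, and field names are made up for illustration):

Client script:
Commit Records/Requests [ With dialog: Off ]
Perform Script on Server [ "Sum Amounts" ; Wait for completion: On ]
Set Variable [ $total ; Value: Get ( ScriptResult ) ]

Server script "Sum Amounts":
Exit Script [ Text Result: ExecuteSQL ( "SELECT SUM ( Amount ) FROM MyTable" ; "" ; "" ) ]

That way the rows stay on the server and only the single result travels back to the client.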

Thanks, but no to all of these.

The FMP database is local. 128 GB RAM. Fast computer. No other users.

Appreciate your reply.

The cache size is a GB or so?

So the table has a chance to fit into the cache.

How do I set the cache size? Not sure I understand what you're suggesting, but it sounds promising.

FileMaker Preferences, under Memory.

Great tip, thank you!!!!

I had a client last year where MySQL was horribly slow for some queries.

Something like 2,000 records done in a second, but 20,000 records needed 2 minutes and 50,000 records needed 10 minutes.

We wondered why things got so slow with more data, until we realized that the server's cache size was smaller than the table. For every pass over the records to find something like a related record, MySQL had to read the whole table into memory again. Once the cache size setting was raised so the table fits into the cache, the searches got much faster. Now it's about 5 seconds for 50,000 records.

Same for FileMaker. If all the data needed for the query fits in the in-memory cache, the query is much faster than if FileMaker has to evict data from the cache to make room for new data.
On the other hand, all that cache management slows down data access.
You need to find a good balance between too much cache and not enough.
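
For MySQL/MariaDB with InnoDB tables, the usual knob is innodb_buffer_pool_size. Roughly like this (the 2 GB value is only an example, and changing it at runtime needs a server version that supports online resizing):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SET GLOBAL innodb_buffer_pool_size = 2147483648;

Or set it permanently in my.cnf under [mysqld] as innodb_buffer_pool_size = 2G. On the FileMaker side, the equivalent is the cache size in the Memory preferences mentioned above.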

30k records is still a lot of records to sum. FileMaker SQL is not as optimised as you might think. To sum the records, they have to be downloaded from the server to the client, and with 400 fields of data per record that's a lot of data. You do say local, but I'm not sure if you mean a local server or local on one machine?

Also, if you are editing any of those records at the time, SQL will perform far worse.

Also, what are you summing? Make sure it's not a summary field :)

Does using PSOS obviate the problem of everything getting downloaded to the client? Also, while I'm asking dumb questions, if not using PSOS, does the data need to be downloaded to the client for each and every eSQL call? Say they are seconds apart and that the underlying data has not changed. I'm assuming the answer is yes, but I'm not sure.

Yes. This can be a significant factor in the speed boost you get with PSOS.

On the other side of the coin, any PSOS call means spinning up a session on the server. That requires resources (nothing is free). If you make many PSOS calls, you must ensure that your server has the capacity.

Running some tests on a large test system. This is a table with about 200 fields and over 8 million records.

I'm running this query, which gets about 30k records:

Set Variable $result = ExecuteSQL ( "SELECT SUM (Salary) FROM WorkerDate WHERE \"Date\" = '4/1/2025' "; ","; "¶" )

Running script on my mac, server is on LAN w/Gigabit Ethernet

Pro 21.1.1 SELECT SUM of 33236 records took 0:00:09 seconds for a rate of 3693 records/second				
Pro 21.1.1 SELECT SUM of 33236 records took 0:00:01 seconds for a rate of 33236 records/second

Running script on server using PSOS

Server 20.3.4 SELECT SUM of 33236 records took 0:00:02 seconds for a rate of 16618 records/second				
Server 20.3.4 SELECT SUM of 33236 records took 0:00:02 seconds for a rate of 16618 records/second				

Interesting findings:

  • running on FMPro, record caching is very important: the second time I run the script the records are already cached locally, so the speed is nearly 10x higher
  • running on FMServer, using PSOS is much faster, but there also appears to be no benefit from caching (which makes sense, as each PSOS script run is basically its own separate login session).

Notes:

  • both the Date field I'm searching on and the Salary field I'm SUM()-ing are stored, indexed fields.
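
For completeness, the timing is nothing fancy; roughly this (simplified here rather than the full benchmark script), with the matching row count then divided by the elapsed seconds:

Set Variable [ $start ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
Set Variable [ $result ; Value: ExecuteSQL ( "SELECT SUM (Salary) FROM WorkerDate WHERE \"Date\" = '4/1/2025' " ; "" ; "" ) ]
Set Variable [ $seconds ; Value: ( Get ( CurrentTimeUTCMilliseconds ) - $start ) / 1000 ]

Get ( CurrentTimeUTCMilliseconds ) gives millisecond resolution, which matters for the near sub-second runs.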

Are you sure you are only selecting 30K rows?

The first time I benchmarked this, I did something like this:

Go to layout
Perform Find # find only 30k records
ExecuteSQL "SELECT SUM (salary) from Table"

And it seemed to hang forever.

Oops! I had forgotten that ExecuteSQL does not inherit the current found set, so it was operating on over 8 million records.

Ours was just a SUM on a single field. Your results are very interesting.

In our case, FMP hangs whether running standalone or via FMS.

Can you share your SQL?

It was just like: SELECT SUM(Quantity) FROM TABLE

SELECT SUM(Quantity) FROM TABLE

That will sum records from the entire table, since ExecuteSQL doesn't care about the current Found set.

If your table only has 30K records, then that's what you intended, but if you meant to select a 30K subset from a larger table, you need to include a

... WHERE X = Y

clause to limit the rows.
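
For example, something along these lines (field names made up) sums only the matching rows, and the ? binding saves you from quoting the value inside the SQL string:

ExecuteSQL ( "SELECT SUM ( Quantity ) FROM TABLE WHERE CustomerID = ?" ; "" ; "" ; $customerID )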

We want to sum the entire table as it only has 30K rows. No WHERE clause used for that reason.

I think the problem is that what I thought was a numeric field is actually a calculation field, possibly unstored, so SQL is waiting on each row's calculation to be re-evaluated.
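
If that is the case, one thing we may try (just a sketch, assuming the calculation is something simple like Price * Qty over stored fields) is to let SQL do the arithmetic on the stored source fields instead of waiting on each row's unstored calc:

ExecuteSQL ( "SELECT SUM ( Price * Qty ) FROM TABLE" ; "" ; "" )

Making the calculation field stored, where its definition allows, should help in the same way.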

That sounds like a super-plausible explanation. I was kind of wondering if that might have been the case.

Thanks Steve. :)
