Why is SQL slower on server than on client?

I have a SQL call which is a little slow to run locally, so I wrapped it up and sent it off to the server. I have a script on the server which sets a variable to the SQL calculation (see below), then exits the script and returns the result.
Then I have a second "Stub" script which takes the script result and does what I need with it.

It seemed to get slower when I offloaded the SQL to the server, so I timed it: it's consistently taking around 28 seconds on the server and around 11 if I just run the server script locally.

I had some substitutions, some JSON conversions, and some custom functions that I thought might be the problem, so I stripped them out of the code below.

Is this normal? We're co-hosting and switched from Windows to Linux. My benchmarks have shown that while some interactions are faster, others are slower, so maybe FileMaker Server on Linux just can't handle SQL commands well?

Have others experienced this? My FileMaker Server is running 19.5.2, so it's a little out of date, but I checked the release notes for the couple of more recent updates and none of them seem to include speed improvements or mention SQL. I'll probably run an update within a few days, but I was wondering if this is a "known thing" and whether there's a workaround or "best practice"?

While ( 
	[ 
		~count = 0 ; 
		~startDate = Staff_All_FromMicrocredits::StartDate_Engagement_Report;
		~endDate = Staff_All_FromMicrocredits::EndDate_Engagement_Report; 

  		~compare_startDate = (~startDate - 1) - (~endDate - ~startDate ); 
		~compare_endDate = ~startDate -1;     

		~colors = "";

        ~ChartData = "data: [";
        ~Debug = "";

	    ~EngagementActionsTable = SQLGetTableName ( EngagementActions::Date_Engagement) ;
	    ~LASIDClean = SQLGetFieldName (  Staff_All_EngagementActions::LASIDClean ) ; 
	    ~Date_Engagement = SQLGetFieldName ( EngagementActions::Date_Engagement ) ; 
	    ~StaffPrimaryKey = SQLGetFieldName ( EngagementActions::StaffPrimaryKey ) ; 
	    ~MessageTypeAsText = SQLGetFieldName ( EngagementActions::MessageTypeAsText ) ; 
	    ~ParentGaurdianOrStudent = SQLGetFieldName ( EngagementActions::ParentGaurdianOrStudent ) ;


	    ~Staff_Table = SQLGetTableName ( Staff::PrimaryKey) ;
	    ~Staff_PrimaryKey = SQLGetFieldName (  Staff::PrimaryKey ) ; 
	    ~Staff_ActiveUser = SQLGetFieldName (  Staff::ActiveUser ) ; 
	    ~Staff_Nickname = SQLGetFieldName (  Staff::NickName ) ; 

	    ~LearnersAdditionalTable = SQLGetTableName ( LearnersAdditional::LASIDClean) ;
	    ~LearnersAdditional_LASIDClean_WholePath = SQLGetFieldNameWholePath ( LearnersAdditional::LASIDClean) ;
	    ~EngagementActions_StaffPrimaryKey_WholePath = SQLGetFieldNameWholePath ( EngagementActions::StaffPrimaryKey) ;
	    ~EngagementActions_Date_Engagement_WholePath = SQLGetFieldNameWholePath ( EngagementActions::Date_Engagement) ;
	    ~EngagementActions_LASIDClean_WholePath = SQLGetFieldNameWholePath ( EngagementActions::LASIDClean) ;
	    ~LearnersAdditional_PrimaryPersonID = SQLGetFieldNameWholePath ( LearnersAdditional::PrimaryPerson_ID ) ;	    	    	    


        ~Staff_ActiveID_SQL = 
            "SELECT " & ~Staff_PrimaryKey &
            " FROM " & ~Staff_Table & " WHERE " & ~Staff_ActiveUser & " = 1";
        ~Staff_ActiveID_List = 	ExecuteSQL ( 
            ~Staff_ActiveID_SQL ; 
            "" ; ""  );

		~repetitions = ValueCount (~Staff_ActiveID_List)+1 ;

        ~WhichStaffMember = ""

	] ; 
	    ~count < ~repetitions ; 
	[
        ~WhichStaffMember = GetValue ( ~Staff_ActiveID_List ; ~count );

	    ~TotalOutreachSQL = 
            "SELECT COUNT ( " & ~Date_Engagement &
            ") FROM " & ~EngagementActionsTable & 
            " WHERE " & ~StaffPrimaryKey & "LIKE ? AND " & ~Date_Engagement & " BETWEEN ? AND ?";
        ~TotalOutreachResults = 	ExecuteSQL ( 
            ~TotalOutreachSQL ; 
            "" ; "" ; ~WhichStaffMember ; ~startDate ; ~endDate );

        ~TotalOutreachPriorResults = 	ExecuteSQL ( 
            ~TotalOutreachSQL ; 
            "" ; "" ; ~WhichStaffMember ; ~compare_startDate ; ~compare_endDate);

	    ~StaffNameSQL = 
            "SELECT " & ~Staff_Nickname &
            " FROM " & ~Staff_Table & 
            " WHERE " & ~Staff_PrimaryKey & "= ?";
        ~StaffNameResult = 	ExecuteSQL ( 
            ~StaffNameSQL ; 
            "" ; "" ; ~WhichStaffMember  );

        ~ratio = ~TotalOutreachPriorResults /  ~TotalOutreachResults; 

        ~ChartData = If (  ~StaffNameResult ≠ "" ;~ChartData & "{x: '" &  ~StaffNameResult & " " & ~TotalOutreachResults & "(" & ~TotalOutreachPriorResults & ")', y: " &  ~TotalOutreachResults & "},");


        ~colors = If (  ~StaffNameResult ≠ "" ;~colors & Case ( 
                ~ratio = "?"; "#BEBEBE";
                ~ratio > 1.2 ; "#B81D13";
                ~ratio < .8 ; "#008450";
                "#EFB700") & ","
                );

        ~Debug = ~ChartData;
        ~count = ~count +1
	] ;


~ChartData



) // End While
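
For context, the wrapper described above is roughly this pattern (the script names here are just illustrative, not the real ones):

    # "Stub" script, run on the client
    Perform Script on Server [ "Run Engagement SQL" ; Wait for completion: On ]
    Set Variable [ $chartData ; Value: Get ( ScriptResult ) ]

    # "Run Engagement SQL" script, run on the server
    Set Variable [ $result ; Value: /* the While ( ... ) calculation above */ ]
    Exit Script [ Text Result: $result ]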

In the code below, why are you using LIKE instead of EQUAL (=) ?

~TotalOutreachSQL = 
            "SELECT COUNT ( " & ~Date_Engagement &
            ") FROM " & ~EngagementActionsTable & 
            " WHERE " & ~StaffPrimaryKey & "LIKE ? AND " & ~Date_Engagement & " BETWEEN ? AND ?";

I was reusing that code snippet... does that affect speed? If so I'll definitely change it, but I can't imagine that would cause it to be slower on the server than when I run the same script on the client, would it?

LIKE is a pattern comparison; it is much fuzzier than an exact match and takes more effort to evaluate.
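
For an exact key match you can use = instead, e.g. something like this (a lightly edited version of the snippet above, untested):

    ~TotalOutreachSQL = 
        "SELECT COUNT ( " & ~Date_Engagement &
        ") FROM " & ~EngagementActionsTable & 
        " WHERE " & ~StaffPrimaryKey & " = ? AND " & ~Date_Engagement & " BETWEEN ? AND ?";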

When you do things on the server you are doing a lot more than performing the task at hand. The server opens a new client session, allocating system resources to it. The database is opened, and that invokes the OnFirstWindowOpen script trigger. If you use that trigger, you may be performing a lot of unwanted actions when the script runs on the server. If so, test for the client type and exit the script early.
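
A minimal sketch of that early exit, placed at the top of the startup script (assuming Get ( ApplicationVersion ) reliably identifies a server-side session in your setup):

    # Skip the rest of the startup routine for server-side sessions (PSOS, schedules)
    If [ PatternCount ( Get ( ApplicationVersion ) ; "Server" ) > 0 ]
        Exit Script [ Text Result: "" ]
    End If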


LIKE is bad; try to avoid it if you can. You certainly don't need it when searching on a primary key.

Also, when it comes to your SQLGetTableName / SQLGetFieldName functions: if you can help it (and I know this sounds bad), hard-coding is faster. The internal SQL tables used to obtain base table names and field names get slower to query the more table occurrences you have in your file.

If you need to get field names, you can use a non-SQL approach, which is just:

GetValue ( Substitute ( GetFieldName ( Staff::ActiveUser ) ; "::" ; "¶" ) ; 2 )

Getting the base table name does require either hard-coding it (which is relatively safe; how often do you change base table names?) or using the internal tables accessed via SQL. However, those queries work by searching for the BaseTableName where the table occurrence name matches, so again, the larger the TO list, the slower this is.
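
For reference, that internal-table lookup is usually something along these lines (a sketch; "Staff" stands in for whichever table occurrence name you pass in):

    ExecuteSQL ( 
        "SELECT BaseTableName FROM FileMaker_Tables WHERE TableName = ?" ; 
        "" ; "" ; "Staff" )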

--

Finally, if you are running this on the server via PSOS, are you bypassing all your startup script logic when doing so? This can add overhead if you are unnecessarily running startup procedures.

Awesome, super helpful to understand the overhead of server calls, and thanks for the tip about LIKE. Apparently I don't like LIKE... 🙂

I'm sure I'm NOT using PSOS since I don't know what it is... sounds like a Google project for tomorrow.

RE: GetValue ( Substitute ( GetFieldName ( Staff::ActiveUser ) ; "::" ; "¶" ) ; 2 )

I think you're saying that instead of having a custom function that I pass the field into, which then performs the operation you mention, I should just put the entire operation in the calculation and it will run faster. Am I understanding correctly?

PSOS = Perform Script On Server


Derp OK yeah I'm doing that... but I'm not bypassing startup... I'll do that.


You mentioned benchmarks and I'm wondering if you're timing the whole PSOS call or if you're timing the query itself.

If you haven't already, I'd recommend benchmarking the time the query by itself takes when run locally vs. on the server. Also, you may need to delete the local cache to get a realistic idea of how slow a local query is when a user runs it for the first time.

With those numbers you can get a clearer picture of what specifically is slow about the workflow, and it may not be the SQL query itself (others mentioned things like the startup script running, which is a common culprit).
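
One way to time just the query is to capture Get ( CurrentTimeUTCMilliseconds ) on either side of the ExecuteSQL call; a rough sketch (variable names are just placeholders):

    Let ( [ 
        ~start = Get ( CurrentTimeUTCMilliseconds ) ; 
        ~result = ExecuteSQL ( ~TotalOutreachSQL ; "" ; "" ; ~WhichStaffMember ; ~startDate ; ~endDate ) ; 
        ~elapsedMs = Get ( CurrentTimeUTCMilliseconds ) - ~start 
    ] ; 
        ~elapsedMs & " ms: " & ~result 
    )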


Thanks Josh... it seems like most of my problem is caching.

Running PSOS, entire script takes 52 seconds
Timing JUST the SQL on server is 48 seconds
So my startup script is significant and I have to cut that down.

Run locally, the first run is 64 seconds, slower than the server, but subsequent local queries are around 7 seconds each (local caching), which I think is probably why I was seeing such a discrepancy in speed earlier.

I changed the LIKE to = and the total time on the server was 3-4 seconds, of which about 2.7 seconds was the actual query, compared to 3.5 seconds locally for the same query.

THANK YOU ALL SO MUCH.


I’m not sure how sending this to the server could help. If a record is in an open state, all of the data required for the ExecuteSQL is exchanged between the server and FMP, with the associated network traffic overhead. However, if the record is in a committed state, ExecuteSQL is performed on the server.

All of the above recommendations regarding startup scripts and PSOS are valid, but ensuring the record is not open before running the SQL is essential as far as ExecuteSQL is concerned.


The difference you’ll see with SQL calls on the client versus the server is that the data has to be transferred. Not just the found data: the whole set of data being queried is transferred.

When your data sets are large, Claris recommends using relationships in the graph, as that allows the server to trim the data being transferred and reduce overhead.

I don’t have the link handy. It’s in the engineering blog.
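
As an illustration only (the table occurrence name here is hypothetical): with a relationship from the current context to EngagementActions filtered on the staff key and the engagement date range, the same total could be read as a related count instead of an ExecuteSQL call:

    // Hypothetical TO, related on staff key and date range
    Count ( EngagementActions_byStaffAndDateRange::Date_Engagement )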

More TOs, more structural overheads - Nooooooooooo 😂

It sounds like you're saying "sometimes" when you run ExecuteSQL locally it actually runs on the server anyway... which is good to know, I won't try to push that through a PSOS script to save bandwidth... but I got lost with the "record is not open".

I'm querying a database of interactions and finding a subset of them. At any given time there's probably about a 50% chance that at least one staff member is adding or editing one of the records I'm querying. The layout that I'm on when I call this query doesn't have any of those records "open", but that's not to say someone else doesn't.

Am I misunderstanding what you're saying?

You’ve raised a really good question and I can only provide my understanding and am prepared to be corrected.

A record being open means it is ‘being edited’, or ‘owned’, by the user. If the record is in this state, that is when data is exchanged between the user and the server when ExecuteSQL runs. I have not heard of a situation where another user ‘owning’ the record would trigger network traffic between the ‘executor’ of the SQL and the server; only if the ‘executor’ is also the ‘owner’ of the record.

Get ( RecordOpenState ) returns a value of 1 or 2 for a new/modified record that hasn’t been committed by the editing user. However, it returns 0 for other users. If FileMaker applies this same rule to ExecuteSQL, then I believe my assumption above should be correct.

And then again……….

Commit records ( if possible ) prior to an executeSQL call. Why? If the current user has an open record in the Target table then FMP must download the entire record set to evaluate them before processing the call.

ref: Wim Decorte, 2017
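
A minimal sketch of that pattern in the calling script (the variable names are placeholders):

    # Make sure the current window has no open record before querying
    If [ Get ( RecordOpenState ) ≠ 0 ]
        Commit Records/Requests [ With dialog: Off ]
    End If
    Set Variable [ $chartData ; Value: ExecuteSQL ( $sql ; "" ; "" ) ]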


@Malcolm, there are always possible variations, but working on the assumption that the user is working in a single window and the active record is in a committed state, the SQL will be executed on the server and the entire record set will not be downloaded for evaluation.


OK thanks... this is good to know. In this particular case it pulls data for a chart which is in a floating window, so I guess my options are "cycle through the open windows and commit them all before the SQL" or "run the SQL and hope it's quick"... both have downsides.

Yes, they do. We always commit before running ExecuteSQL, but we have to accommodate losing focus of, say, a record in a portal.
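
For what it's worth, the "commit every open window" option could look roughly like this (a sketch; it assumes all windows belong to the current file and that briefly switching windows is acceptable in your UI):

    Set Variable [ $original ; Value: Get ( WindowName ) ]
    Set Variable [ $windows ; Value: WindowNames ( Get ( FileName ) ) ]
    Set Variable [ $i ; Value: 0 ]
    Loop
        Set Variable [ $i ; Value: $i + 1 ]
        Exit Loop If [ $i > ValueCount ( $windows ) ]
        Select Window [ Name: GetValue ( $windows ; $i ) ; Current file ]
        Commit Records/Requests [ With dialog: Off ]
    End Loop
    Select Window [ Name: $original ; Current file ]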