Little tidbits from the FileMaker conference

Back from the conference in Liechtenstein, I wrote down a few of the tidbits we learnt.

Missing serial numbers

If you use serial numbers for new records in FileMaker, you may notice that the serial numbers sometimes have gaps. Why does this happen?

  • Someone deleted a record.
  • Someone created a record, but never committed it.
  • FileMaker crashed or the network got disconnected while a transaction was running, so it never completed.

You need to know that on record creation, the client requests a new serial number from the server. If the record doesn't get committed or gets reverted, the serial value is still consumed, but no record is saved. If you want to make sure the serial number doesn't get lost, call the Commit Records/Requests script step right after creating the new record, so the (still empty) record is definitely stored.
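To make that allocation behavior concrete, here is a tiny Python simulation (not FileMaker code; the SerialCounter class is made up purely for illustration):

# Simulates how a serial counter never rolls back after a revert.
class SerialCounter:
    def __init__(self):
        self.next_value = 1

    def allocate(self):
        # The server hands out the next value as soon as a record is
        # created and never rolls the counter back.
        value = self.next_value
        self.next_value += 1
        return value

counter = SerialCounter()
committed = []

committed.append(counter.allocate())  # record created and committed -> 1
counter.allocate()                    # record created, then reverted -> 2 is burned
committed.append(counter.allocate())  # next committed record -> 3

print(committed)  # [1, 3] -> the gap where serial 2 was lost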

Please consider moving to UUIDs instead: they are still unique, but random, so outsiders can't guess valid serial values or infer how many records exist.

Export on Server

Do you remember when FileMaker got the ability to create PDF files on the server?
In one of the technical sessions at a developer conference we learned that this only became possible because the PDF generation code moved from the FileMaker Pro application into the database engine. The engine code is shared everywhere, including Server and FileMaker Go, so once the PDF creation code had moved, it could be used everywhere.

We discussed at the conference why you can't export to an FMP12 file in server-side scripts. Various ideas came up, but nobody knew exactly. To figure out what code runs when FileMaker does an export, I set up a script that exports in an endless loop and sampled the process with Activity Monitor. Among the thousands of lines in the resulting call tree, you can see that most of the export code lives in the FMEngine library, but a few small parts are in FileMaker Pro itself:

281 Draco::FMExportManager::PerformExport(Draco::FMWindowModel&)  (in FMEngine) + 1032  [0x10cdaaa68]
| 263 Draco::FMExportManager::ExportRecords(Draco::File&, Draco::DBError&)  (in FMEngine) + 360  [0x10cda93f4]
| + 118 exportFMProLayout(Draco::DataExportInfo&)  (in FileMaker Pro) + 308  [0x102924814]
| + ! 95 LAY_GenerateLayout(FMDocWindow&, Draco::FMLayout&, unsigned char, bool, bool, bool, Draco::HBSerialKey<(unsigned char)5> const&, CNewLayoutModel*)  (in FileMaker Pro) + 952  [0x102760fe4]
| + ! : 76 Draco::FMLayout::Commit(bool, Draco::InteractiveErrorHandler*, bool)  (in FMEngine) + 64  [0x10ce2f218]

After seeing this, the answer to the question above is that the code to generate a new layout and to export it lives in the FileMaker Pro application. To export server side, this code would need to be refactored and moved into FMEngine, which may involve a couple of technical difficulties. Alternatively, an export on the server could skip layouts and just create the FMP12 file with tables and records, but without layouts.

How many CPU cores does FileMaker Server use?

We started a FileMaker Server on a Mac and checked how many processes it runs: we get 29 processes with over 400 threads. These are the main processes (one way to count them yourself is sketched after the list):

  • fmserverd: the server core engine.
  • fmsased: the process running server-side scripts.
  • fmscwpc: the process for Custom Web Publishing.
  • fmwipd: the process for the Data API.
  • fmsib: the process doing backups.
  • java: some parts still need Java, so a copy of Java runs.
  • node: some parts of FileMaker use JavaScript and run in the node application.
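If you want to count these yourself, here is a small Python sketch using the psutil package (my assumption, not part of FileMaker; install it separately) to list the fm* processes and their thread counts:

import psutil  # assumption: installed via "pip install psutil"

total_threads = 0
for proc in psutil.process_iter(["name", "num_threads"]):
    name = proc.info["name"] or ""
    # Match the server processes listed above; note that this also
    # catches unrelated java/node processes on a shared machine.
    if name.startswith("fm") or name in ("java", "node"):
        threads = proc.info["num_threads"] or 0
        print(f"{name:12s} {threads:4d} threads")
        total_threads += threads

print("threads in FileMaker related processes:", total_threads)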

With so many processes, it is quite good to have 4 or 8 cores on the server. Multiple scripts can run at the same time for various users via PSoS, WebDirect or the Data API and make use of multiple CPU cores: every script runs on its own thread, so the threads can be scheduled onto different cores.

The server process itself runs over 50 threads. Several of them listen for incoming commands on network sockets. The other processes for WebDirect, the Data API and server-side scripting keep an open connection to the server core, with threads listening for commands. There are threads for logging, timed schedules and various background tasks. In the core, all read and write operations on the database have to be serialized, so only one thread at a time can modify the structures.
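As a toy illustration of that serialization in plain Python (nothing FileMaker specific), many threads can be busy at once, but a single lock ensures only one of them modifies the shared structure at a time:

import threading

write_lock = threading.Lock()  # the single gate for all modifications
database = {}                  # stands in for the shared structures

def writer(thread_id):
    for i in range(1000):
        with write_lock:       # only one thread may modify at a time
            database[(thread_id, i)] = i

threads = [threading.Thread(target=writer, args=(t,)) for t in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(database), "entries written, no two writers ever ran at once")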

In the end you need to measure how high the load on the server actually is. We use virtual machines nowadays and frequently start with small ones with only 2 cores. That is fine for development, where just one or two people use the server at a time. For production you may go up to 4 cores, and if there is more load, to 8 cores or higher. CPU usage should be 1% or less when the server is idle and somewhere around 20% when busy with normal usage. Why so low? Because you want reserves for peak times when lots of people arrive or multiple server-side scripts run in parallel.

PSoS vs Job Queue

Do you use Perform Script on Server frequently? Please make a simple measurement: have a script create a new record with the current timestamp, then do a PSoS call with a script that goes to that record and stores a second timestamp. The difference tells you how quickly a script launches on the server. If it is just a second, that may be fine for you, but in huge solutions this can take 10 seconds.

On each PSoS start, the FileMaker engine on the server starts a new session for the remote user. This means a thread starts, opens a connection to the server core, requests data and loads the relationship graph for the files into data structures in memory. Then it runs the solution's startup script before finally reaching your script.

Instead, you can run one or more scheduled scripts on the server. Each runs an endless loop looking for new job records in a job queue table and loops over the records it finds. On each record it tries the Open Record/Request script step to lock the record. On success, it performs the script named in the record with the parameter given in the record. The script result is stored in a field and the job is marked done. After the loop is done, the script waits a second before looking for new jobs again.

On the client, you launch a job by creating a new record in the job table. Then either loop with script pauses to wait for the result, or come back later for asynchronous jobs. If implemented well, you can have multiple worker scripts on the server and get from job creation to execution within one second.
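To illustrate the pattern, here is a minimal Python sketch of such a worker loop using SQLite; the jobs table and its columns are made up, and the atomic UPDATE plays the role FileMaker's Open Record/Request lock would play:

# Minimal job-queue worker sketch. Table and column names are made up;
# in FileMaker the claim step would be Open Record/Request instead.
import sqlite3
import time

db = sqlite3.connect("jobs.db")
db.execute("""CREATE TABLE IF NOT EXISTS jobs (
    id INTEGER PRIMARY KEY,
    script TEXT, param TEXT,
    status TEXT DEFAULT 'new',
    result TEXT)""")
db.commit()

def run_script(script, param):
    # Placeholder for performing the named script with its parameter.
    return f"ran {script}({param})"

while True:
    row = db.execute(
        "SELECT id, script, param FROM jobs WHERE status = 'new' LIMIT 1"
    ).fetchone()
    if row is None:
        time.sleep(1)  # no new jobs: wait a second, then poll again
        continue
    job_id, script, param = row
    # Claim the job atomically; this is the record-locking step.
    claimed = db.execute(
        "UPDATE jobs SET status = 'running' WHERE id = ? AND status = 'new'",
        (job_id,)).rowcount
    db.commit()
    if claimed == 0:
        continue  # another worker locked it first
    db.execute("UPDATE jobs SET result = ?, status = 'done' WHERE id = ?",
               (run_script(script, param), job_id))
    db.commit()

A client then simply inserts a row with status 'new' and polls the result field, which matches the client-side description above.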

Now let's see what we learn at the next conference...


Regarding PSoS vs Job Queue…

@HOnza suggested this a short time ago in one of his emails. There is one important difference between the two that I think shouldn't be overlooked:

  • PSoS runs using the caller's account;
  • The job queue runs using an account set up on the server.

So I guess it comes down to the need for speed vs the need for certain security and logging features.


Which security and logging features would be missing?

The original call to create a record in the job queue knows the account holder and their permissions. The details can be stored in the job queue and used when the job queue script is run.

Security: The queue script would always run in the security context defined by the scheduled script's settings on the server, never in the context of the account name in the queue. This requires the queue script to verify permissions in code instead of relying on Get (LastError). That's a lot of extra and error-prone coding.

Logging: the logging table could not record account-based information automatically. That is less of an issue for me, but it would likely fail a security audit.


The queue table is able to record the account information of the user who triggered the script at the time the script is queued. That can be audited. Once stored with the job in the queue, the information is available to the script when it runs, and can be placed into that audit trail too.

As above, the user details can be stored in the job queue, so they are there to be used as required.

Normally a permanently-on process like this would be running with the absolute minimum permissions, well below most users. This means it would normally be asking for increased permissions as required.

Another thing to consider is that a human should not be able to put jobs into the job queue manually just because they feel like it. The only way to get a script into the job queue should be by calling another script that controls what is put into the queue. In other words, only well-known scripts end up in the queue.

If the user's security group has been stored in the job record, then the processing script can re-login to a known user account with the correct group settings immediately before running the queued script. (These would be accounts set up especially for running job queues, not anyone's personal account.) That way, the queued script always runs within the correct privilege set.
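As a sketch of that idea (plain Python with made-up names, where re_login stands in for FileMaker's Re-Login script step), the worker could pick a dedicated queue account based on the privilege set stored in the job record:

QUEUE_ACCOUNTS = {
    # one dedicated queue account per privilege set, as suggested above
    "sales": ("queue_sales", "secret1"),
    "admin": ("queue_admin", "secret2"),
}

def re_login(account, password):
    # Hypothetical stand-in for FileMaker's Re-Login script step.
    print(f"now running as {account}")

def run_job(job):
    # job is a dict built from the queue record, including audit fields
    account, password = QUEUE_ACCOUNTS[job["privilege_set"]]
    re_login(account, password)  # switch to the matching security context
    print(f"job {job['id']} was queued by {job['queued_by']}")  # audit trail
    # ... perform the queued script here ...

run_job({"id": 1, "privilege_set": "sales", "queued_by": "alice"})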

Regarding missing serial numbers: it is also possible to prevent a serial number from being created if a developer is making modifications in the same table at the time a user creates a record.

This was demonstrated some years ago at one of the regular London FileMaker developer get-togethers.

Be very careful about what you are doing on a live system whilst users are logged in.

Regards
Andy
