Please be rigorous (i.e. think for maybe 5 seconds first) and try to post your responses, disagreements, expressions of support and questions under the correct Performance thread/topic.
However, if you cannot find the right place for something about base/core/vanilla FileMaker performance, or about performance-related plugins, please create your reply here and have your say.
If something clearly needs a home of its own, we will create a new topic. If it doesn't belong here, a sysadmin will move it to the right place.
Text/data parsing is a performance topic in itself -- I'm not sure where it fits into the discussions.
I think it is a topic which, unlike most of the others, may not apply to everyone; but for those tasked with writing a script or a calculation that must parse a large amount of text, it becomes worth knowing.
Considerations include things such as:
Which calculation functions available in FMP are most performant, and which are performant only for smaller input sizes.
Finding ways to favor the use of performant functions over less performant ones.
Finding a sweet spot that balances the number of iterations of a routine against the effort required to perform each iteration (by routine, I mean a smaller, more atomic parsing task).
Developing general parsing strategies based on observations and lessons learned
With the above, I am largely thinking about parsing volumes of return-delimited text, and also processing large JSON structures.
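To make the iterations-versus-effort trade-off concrete, here is a rough sketch in Python (standing in for FileMaker calc logic, which isn't runnable here). The naive loop mimics calling a positional extraction function such as GetValue ( text ; i ) inside a loop, which rescans the text on every call; splitting once up front turns an O(n²) pattern into O(n). The function names and data are invented for illustration:

```python
# Parsing return-delimited text: two strategies compared.
# FileMaker uses \r between values, so that delimiter is used here too.

def parse_naive(text):
    """Re-derive each value from the full text on every iteration,
    like calling GetValue(text; i) in a loop -- each call rescans
    the whole string, giving O(n^2) work overall."""
    lines = []
    count = text.count("\r") + 1
    for i in range(count):
        lines.append(text.split("\r")[i])  # full rescan each time
    return lines

def parse_once(text):
    """Split a single time, then iterate over the result: O(n)."""
    return text.split("\r")

data = "\r".join(f"row{i}" for i in range(100))
assert parse_naive(data) == parse_once(data)
```

Both return the same values; the difference only shows up as input size grows, which is exactly the "performant only for smaller input sizes" point above.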
I think data parsing is potentially a good additional topic because, from my point of view, FileMaker is brilliant at it: creating parsing functions in the Data Viewer is relatively easy, and they run very fast on local variables.
Another possible candidate for a topic is how to use additional resources to improve performance. I remember, years ago, someone showed that a current-record indicator in a web viewer ran as an independent thread, outside FileMaker -- was that correct?
If so, it perhaps reduces the core FileMaker load?
Maybe @MonkeybreadSoftware can explain the memory and thread implications of a JSON process -- does it bring in external memory and CPU, or does it use part of what FileMaker already has?
FM allows tweaks and twists for performance improvements, and for doing things that straightforward FM cannot do out of the box. At the same time, we should be aware that some of these tweaks and twists will migrate more or less gracefully to newer FM releases.
Addendum: for this reason, I prefer employing a plugin for the extra functionality. It gives some assurance that someone (the vendor) will take care of issues that arise in newer FM releases.
Our plugin resides in the FileMaker process that loaded it.
So memory allocated by the plugin is part of the FileMaker process (Pro, Server Scripting or WebDirect).
And while a plugin call is running, the script is usually paused (except for explicit background activity by the plugin).
@nicklightbody Just curious what you think about discussing the pros and cons of specific techniques applied to the same problem.
I have only one example of this: record hierarchies (multi-level parent-child relationships). These are often represented by writing a parent's id as a foreign key in each child record. That is fast for some use cases and slow for others. A different model is the nested set model: fast for retrieving a given node's descendants, slow when updates require renumbering other nodes.
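For anyone unfamiliar with nested sets, here is a minimal sketch of the read side (Python for illustration only; the node names and bounds are invented). Each node stores a (left, right) pair assigned by a depth-first walk, and B is a descendant of A exactly when A.left < B.left and B.right < A.right -- so fetching a whole subtree is a single range test rather than a recursive walk:

```python
# A small hierarchy numbered by a depth-first walk:
# root contains a (with child a1) and b (with child b1).
nodes = {
    "root": (1, 10),
    "a":    (2, 5),
    "a1":   (3, 4),
    "b":    (6, 9),
    "b1":   (7, 8),
}

def descendants(name):
    """All descendants of `name`: nodes whose bounds fall strictly
    inside the parent's bounds. One pass, no recursion."""
    lo, hi = nodes[name]
    return sorted(n for n, (l, r) in nodes.items() if lo < l and r < hi)

assert descendants("a") == ["a1"]
assert descendants("root") == ["a", "a1", "b", "b1"]
```

In a database this becomes a single find on two number fields, which is why the model queries so well.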
Let us know if you feel this type of discussion fits into how you see the overall discussion about performance being framed.
Thanks @Bobino -- yes, definitely -- let's explore alternative approaches and try to understand why one may be more appropriate than another in some circumstances.
Looking forward to thinking through what you describe - just please put it in the right place.
Cheers, Nick
I agree with your idea @Torsten because implementing conditional access to records can, depending on the method used, prove very expensive performance wise.
Briefly: my experience of controlling access to records through expressions (calculations) inserted into record-level access privileges was that it slowed record display in a list view to a crawl over a LAN -- but that was FM 5, in about 2001. More recently I have had more success with a home-brewed keychain method where, I think, all records were only one relationship away.
@Bobino The nested set model sounds very interesting to me, but it seems quite complex to implement. Do you have experience with this model? For record sets that are often read but rarely changed, it could be a very good idea -- more performant than a tree structure, if I understand correctly.
I would like to see an example of such an integration in a FileMaker Database.
I must admit, when I first read it, I didn't quite understand the logic of the calculation.
After looking at the nested sets again, I finally understood how the left and right values are calculated. And I understand how to filter. Very cool!
But I see a problem in a multi-user environment when elements have to be added or deleted. In that case the left and right values of ALL records in the table have to be recalculated and set. But what if, at exactly that moment, another user has one of those records open and is editing it? Then the two values cannot be set.
How could this be solved? I admit this question is not performance-relevant; it's more a question of whether it can work that way at all.
Yes, the nested set data model for hierarchies is best suited for cases where you query against it often but update it rarely. It outperforms the parent-id approach for queries and underperforms for updates. (Technically, both are tree structures, each using a different data model, but we all know what you mean.)
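To show why the update side is expensive, here is a hedged sketch (Python; the helper name and sample tree are invented). Inserting a child means opening a gap of width 2 at the parent's right edge, so every left/right bound at or beyond that edge must shift -- which is why a single insert can touch most records in the table:

```python
def insert_under(nodes, parent, new_name):
    """Insert new_name as the last child of parent.

    Every (left, right) bound at or past the parent's right edge
    shifts by 2 to make room; the new node takes the opened gap.
    Returns a new dict rather than mutating in place."""
    _, pr = nodes[parent]
    shifted = {}
    for name, (l, r) in nodes.items():
        shifted[name] = (l + 2 if l >= pr else l,
                         r + 2 if r >= pr else r)
    shifted[new_name] = (pr, pr + 1)
    return shifted

tree = {"root": (1, 6), "a": (2, 3), "b": (4, 5)}
# Adding "c" under root renumbers root's right bound and appends c:
assert insert_under(tree, "root", "c") == {
    "root": (1, 8), "a": (2, 3), "b": (4, 5), "c": (6, 7),
}
```

Note that adding "c" under "a" instead would renumber every other node in this tiny tree -- the worst case Torsten is worried about.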
For any database backend, keeping data integrity is something developers need to think about. In this case, integrity means that when updating the structure, we modify all targeted records at the same time, or none at all. If anything prevents one targeted record from being updated, the system should reject the overall update and remain in its original state -- the last known good state.
This can be achieved in FileMaker with transactions. That is a topic in itself, so I won't discuss it more here; I believe there are posts you can find about it, both here on fmSoup and on some blogs. I use the Karbon framework for my implementation of transactions.
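FileMaker transactions work differently under the hood, and this is not Karbon's API -- but the all-or-nothing idea can be sketched with SQLite: if any step of the renumbering fails (say, a locked record), the whole shift rolls back and the tree stays in its last known good state:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node (name TEXT PRIMARY KEY, lft INT, rgt INT)")
con.executemany("INSERT INTO node VALUES (?, ?, ?)",
                [("root", 1, 4), ("a", 2, 3)])
con.commit()

try:
    with con:  # opens a transaction; rolls back on any exception
        con.execute("UPDATE node SET rgt = rgt + 2 WHERE rgt >= 4")
        raise RuntimeError("simulated locked record")
except RuntimeError:
    pass

# The shift was rolled back: root's right bound is still 4.
assert con.execute(
    "SELECT rgt FROM node WHERE name = 'root'").fetchone()[0] == 4
```

The key point is that the partial update never becomes visible; either all bounds move, or none do.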
The only implementation of nested sets I did was for a demo file I presented at a French-speaking conference. It was simply to introduce the concept and the key differences from the more traditional approach. It was not made transactional, because that would have made the whole thing harder to understand (bringing too many new concepts together would simply blur the lines for people learning about them).
I'm glad to see this piqued your interest. I hope you will also be looking into transactional scripting, as that technique can be crucial in a good number of cases.
@Bobino Oh dear -- I really wasn't thinking far enough. I was so busy trying to understand the concept of nested sets that I didn't think about transactions at all. Of course, that makes my question superfluous, since no record is locked long enough. After all, if I catch an error, I can just retry the commit a second later until it succeeds.
I use transactions almost everywhere -- not only for data integrity, but also for a better user experience. I've often observed that users struggle to remember to click outside a field to save their input. That's why I generally prevent accidental committing: the action is always completed by clicking a Done or Save button. That way the users understand what's happening.
So - thank you for opening my eyes!
I am very interested in this nested set model because I want to use a tree structure in an app. Unfortunately, I haven't yet found a tree structure that was fast enough at opening and closing nodes once there were more than, let's say, a few hundred records.