Starting with 18, everything gets calculated; it doesn't need to be visible. This happens on every refresh of the window or every record load.
Taking the opposite side of the question, here is where you, unfortunately, have to use unstored calcs:
That's when you want to manipulate the unstored calc values, mostly in list view, to search or sort:
- When you need to sort a list view by a related number field, you need to create an unstored calc that just contains the related field + 0 to get accurate sorting, because the sort order is unfortunately messed up if the related record doesn't exist. This drives me nuts; I can't see why this behaviour is desirable in any situation.
- If you want to search on a related field, it's unreliable and slower unless you put it in an unstored calc field.
- If your value depends on a global field.
- Exporting unstored calc values.
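As a concrete sketch of the sort workaround in the first bullet (table and field names are invented for illustration, not from the original post):

```
// Unstored calculation, result type Number
// The "+ 0" coerces a missing related record to 0,
// so records with no related row sort consistently
Invoice::Amount + 0
```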
Sorry, I didn't think of that. You can place objects outside the display area and they will still be calculated. By "visible on the layout" I mean objects whose "Hide object when" calculation evaluates to false.
There is a practice that some devs were doing (especially prior to the release of the Execute FileMaker Data API script step), whereby an unstored calculation field is added to a table, which provides a snapshot of the record data in JSON form.
This allowed devs to have rather elegant code for obtaining both a snapshot of record data and a snapshot of a found set, as this field could be referenced by a List() function or an ExecuteSQL query.
Despite seeing obvious value to this technique, I personally avoided it just because I did not want an extra field added to the table -- perhaps from a "purist" or "low-clutter" point of view.
But -- if the unstored calc is only drawing data from the target record, and it does not reach out to any related data, then I would have to concede that this field comes with little added performance overhead beyond what is already required to pull the target record over to the client in the first place. As such, even though I shied away from this, I see it as a valid example of an unstored calculation which adds value without adding significant overhead.
It might be worth noting that, per one of the other topics in Performance, there is benefit to avoiding widening a table. But, in this case, because the field is unstored, little (or possibly no) additional data should have to come across the wire when the record transfers from server to client.
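A minimal sketch of such a snapshot field, assuming a hypothetical Order table (the field names here are illustrative, not from the original posts):

```
// Unstored calculation "jsonSnapshot" (result type: Text)
// Draws only on local fields, so no related data is pulled
JSONSetElement ( "{}" ;
	[ "id" ; Order::id ; JSONString ] ;
	[ "date" ; Order::orderDate ; JSONString ] ;
	[ "total" ; Order::total ; JSONNumber ]
)
```

A found-set or child-set snapshot then falls out of List ( Order::jsonSnapshot ) across a relationship, or an ExecuteSQL query that returns the field.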
Have you implemented this in any way? I'm thinking that it would be best suited to a function, no?
Steve, along the same theme, we've been using the following technique for a while now. First, we're not fans of normalisation for normalisation's sake.
We have a number of systems that have linked tables for various reasons, such as quantity-dependent selling or cost prices for a product or component. In days gone by we'd have set up relationship links using greater-than or less-than to ensure the correct rates are picked up in quotes or orders, for example.
However, now we have a single text field in the product parent record that is updated using script triggers as the related pricing fields are maintained. As this data is purely numeric, it was stored as comma-separated lists, but is now stored as JSON.
Any tables referring to the parent record can pick up all this sub data directly from the single field in the parent record, without having to refer to any lower-level "feeder" tables. The performance increase is significant, with the bonus of a reduction in database structural complexity. The "While" function has been a big contributor to this approach.
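As a hedged sketch of that approach (the JSON shape, field names, and $qty variable are assumptions, not taken from the actual system), a While loop can walk the pricing array held in the parent field directly:

```
// Assume Product::pricingJSON = [{"qty":1,"rate":9.5},{"qty":10,"rate":8},...]
// sorted by ascending quantity break. Returns the rate applicable to $qty.
While (
	[ json = Product::pricingJSON ;
	  n = ValueCount ( JSONListKeys ( json ; "" ) ) ;
	  i = 0 ; rate = "" ] ;
	i < n and $qty >= JSONGetElement ( json ; "[" & i & "].qty" ) ;
	[ rate = JSONGetElement ( json ; "[" & i & "].rate" ) ;
	  i = i + 1 ] ;
	rate
)
```

No greater-than/less-than relationship is needed; the quote or order line evaluates this against the single parent field.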
@Malcolm I don't personally use the JSON snapshot calc field technique, but I can see where it could be handy when used with a ListOf summary or a List() of that field (or as a field returned in an SQL call). Because of those use cases, I'm not seeing how a function (custom function?) would work out. But, yes -- if it weren't for wanting to get a snapshot of a found set of records, or a List of child records, I'd probably reach for a function.
In my mind I had this hierarchy
Listof ( jsonDataCalc )
-- jsonDataCalc = cf_getRecord('json')
-- -- cf_getRecord
-- -- -- get field names
-- -- -- get field data
-- -- -- build json/xml/html
-- -- -- return string
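Purely as a hedged guess at what the cf_getRecord step of that hierarchy might look like for the JSON branch (this is my own sketch, limited to fields on the current layout, not Malcolm's actual function):

```
// cf_getRecord ( format ) – JSON branch only
// Walks the current layout's fields and builds an object
While (
	[ fields = FieldNames ( Get ( FileName ) ; Get ( LayoutName ) ) ;
	  n = ValueCount ( fields ) ;
	  i = 1 ; json = "{}" ] ;
	i <= n ;
	[ f = GetValue ( fields ; i ) ;
	  json = JSONSetElement ( json ; f ; GetField ( f ) ; JSONString ) ;
	  i = i + 1 ] ;
	json
)
```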
@Malcolm Ah. I see. Yes -- that looks like a nice way to structure the code. I read your message and thought that you meant a function in lieu of a calc field, which was where I stumbled in understanding.
Unstored calculation fields are useful for providing the most recent order total in systems that are not fully locked down into a transactional design.
On the other hand, unstored calculation fields can be expensive in some circumstances, for example displaying the most recent order total in a list view.
One thing that can be done with expensive fields is to place them behind one of the following…
- A "Hide object when" calculation
- A popover
- An inactive panel (tab or slide)
…and display only as needed. This "display only as needed" technique also works well for summary fields in a list view.
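For example, a minimal "Hide object when" expression on an expensive field (the $$showTotals flag is a hypothetical global variable toggled by a button, not from the original post):

```
// "Hide object when" calculation on the expensive field:
// per the technique above, the field is not drawn until the user asks for it
not $$showTotals
```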
There is only one correct answer on whether to use an unstored calculation field, a field set by script, or an auto-enter field…
That answer is: "it depends"
@Malcolm I think @tonywhitelive hinted at an answer to your question in his most recent post.
@rickkalman is this still true in 18 and 19?
I have written a consolidation of this discussion.
This is editable by anyone; please feel free to improve it.
Please add your name and the date at the bottom to mark your input.
I tried to credit everyone who added to this discussion but the system objected to more than 10 users mentioned in a single post, so I removed the @ to enable this record to be made.
Cheers, Nick
Yes! I have some summary fields in an inactive panel and it loads super fast. If you want to see the sums (which you don't need all the time), you wait a bit. It works the same in 19: they load when they become active.
I agreed to share my analysis of the performance data from the dsImages trials on deskspace.fmcloud.fm. The amount of data is limited, only 250 log records so far, but I have built a tool, "dsAnalyser", with just two tables and two fields in total and no relationships.
Here is the schema
Here are the current results
The tool uses button bars to display the results.
Each data record has 32 values in a text string; each value is loaded into a global variable as a two-dimensional array, and that array is then queried by script.
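A hedged sketch of how such a load might look in script steps (the table and variable names are invented; FileMaker has no true 2D arrays, so this uses one $$data repetition per record and GetValue for the columns):

```
# Loop over the found set, one $$data repetition per log record
Go to Record/Request/Page [ First ]
Loop
	Set Variable [ $row ; Value: Get ( RecordNumber ) ]
	Set Variable [ $$data[$row] ; Value: Log::values ]
	Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
# Column c of row r is then:
#   GetValue ( Substitute ( $$data[r] ; "," ; ¶ ) ; c )
```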
These initial results are not conclusive as there is insufficient data but it appears that the Open External storage with No Thumbnails option is slower on login except that it is mid-range for iOS.
The Secure Storage with no Thumbs and the Open Storage with temporary thumbs seem to generally perform well.
Cheers, Nick
dsAnalyser is on
deskspace.fmcloud.fm
and it is faster there than locally.
You can just use it; see what you think.
The next job is to have it automatically import the data being gathered by the 5 x dsImages files.
Any suggestions about the best way of doing this?
I was thinking of exporting the data from the dsImage Apps as text files into a folder and have dsAnalyser import all files from that folder. Just delete all, and import all to keep it simple?
Thoughts?
Nick
Are they all running on the same server? If so, you have many options, exporting to tmp/ or documents/ and importing the same on a server schedule seems like the simplest way.
Nick, during testing I observed that my iPhone was tagged as being a Mac. When the iPhone was tagged 'Mac', it was connected to the internet via WLAN and router, thus exposing the same IP address as the Mac.
When on 4G, device type was properly identified.
@Malcolm thanks, yes, exporting to .txt with regular scheduled imports into dsAnalyser looks like an obvious option. In the past I have tried relationships between the analysis file and the source, which have always been too inefficient. So a "disconnected" data connection makes sense to me.
I did try dsAnalyser with 1m records which were then loaded into an array for analysis. That was a mistake because I was running it from a local client session, hence all the global var data has to be returned from server to client.
I will try it next loading and analysing the data server-side and returning the results to the client as name-value pairs or JSON via Get ( ScriptResult ) on exit. It will be interesting to see what size data set we can load server-side before we crash the server session. The Linux statistics dashboard that fmcloud.fm can run should enable us to see how the memory handles a large global variable load.
Another option, as the whole analysis is run with a single script, is to load the data into a local var array. I wonder whether there is any performance difference between $$var & $var arrays run server side?
Thanks @Torsten - that is interesting - I will check the expression that generates that string and see whether it is an error on my part (most likely) or FileMaker's Get() functions giving incorrect data (least likely) - good spot!