Well, over the past five years we've seen some performance improvements, but they were targeted at Claris's own agenda: shoving the pricey WebDirect and cloud stuff down our throats. First they made some WebDirect speed improvements, much needed because it was slow as molasses, so those improvements end up making WebDirect merely usable. Obviously, given that WebDirect's maximum user count was (and is) laughable (and the CPU/RAM usage per user is insane), they had to do something to improve it. Then came multi-user performance (just remember that some of the recent find-performance improvements are a response to a huge performance drop that occurred back in the FM 15 days when several users launched simultaneous finds).
And even then their efforts are questionable (to say the least): they created a persistent cache in FM 15 that, rather than noticeably speeding things up, turned out to slow down app launching by several minutes. In 19 there's finally a setting to disable this fiasco (and even that they did the wrong way…).
- They created page locking: when I asked why they tied it to Startup Restoration, Clay said they had to because page locking introduced instability. From that moment I knew it was a disaster. And of course the disaster happened (yes, Claris can really marvel at the community's resilience to two fiascos in a row). How can you even design something that introduces instability, and then, to counter it, add a process that by itself eats 10% CPU? This is flawed on so many levels… I can't understand how any responsible person would greenlight it for release, putting user data at risk (and yes, lots of people got corrupted data out of this). Thankfully it's now deprecated.
By the way, I remember your words the year before, telling me how awesome their new feature would be… So pardon me for taking your ever-unconditional optimism with a hefty dose of salt.
So now, in 19, third time's the charm (?) with shared locking. But it only accelerates reads, not writes. Clay said we don't need multithreaded writes that much… hmm, OK (I disagree).
More importantly, it only speeds up heavy multi-user usage (which fits their agenda), not single-user, i.e. engine, performance.
I can understand that we need multi-user improvements, but that's only one side of the equation, and not the one that affects most users. A lot of users, and I dare say most, care more about single-task speed: a big automated import or computation (to prepare reports, for instance), or simply the responsiveness of their own session. In reality, in most deployments one user does something intensive at a given time while the others sit mostly idle. For that common scenario, the new speed improvements yield nothing.
To me, the whole platform, every use case, even the heavy multi-user ones, would benefit immensely from single-user (that is, engine) speed-ups. Accelerating the engine is where the real bang for the buck is. But unfortunately there hasn't been a single speed improvement in that department. Maybe ever. And that's the big problem.
- They introduced While(). Is it faster than recursion? No. (Plus it's complicated to use; I would have much preferred a For function, which would have been faster and easier to understand for 90% of people, and sufficient for 80% of use cases. See the sketch after this list.)
- They introduced value-list and JSON functions: for speed, they're absolutely trounced by the BaseElements and MBS plugins.
- Someone showed that variable concatenation is slow, and gets slower as the variable grows, compared with inserting into the variable (also sketched after this list). Did they jump on this opportunity to magically speed up lots of use cases? No.
- I reported tons of performance issues that have nothing to do with solution design. Did they fix any? No.
- Clay said the calc engine can't benefit from the CPU's branch prediction (because of variable creation via Let statements). Imagine that: a calc engine that can't use branch prediction. Fixing it could start with the dedicated variable-declaration function we've deserved for years.
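To make the While() point concrete, here's a minimal sketch (variable and function names are mine, purely for illustration): the same sum of 1..n written with While() and as the recursive custom function it replaces.

```
// While() version (FM 18+): sum the integers 1..n
Let ( n = 100 ;
  While (
    [ i = 1 ; total = 0 ] ;              // initial variables
    i ≤ n ;                              // keep looping while this is true
    [ total = total + i ; i = i + 1 ] ;  // per-iteration logic
    total                                // result once the condition fails
  )
)

// Recursive equivalent: a hypothetical custom function SumTo ( n ),
// defined in Manage > Custom Functions
If ( n ≤ 0 ; 0 ; n + SumTo ( n - 1 ) )
```

A hypothetical For ( init ; condition ; increment ; result ) would read even more naturally, and that's my point: the While() isn't faster than the recursion, and it isn't much simpler either.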
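And here's a script sketch of the concatenation finding (the counts and chunk contents are invented). As I recall, the Insert steps have accepted a variable as target since FM 16, and that's the workaround people measured:

```
# Slow: every Set Variable re-copies the whole of $out, so cost grows as $out grows
Loop
  Exit Loop If [ $i ≥ $count ]
  Set Variable [ $out ; Value: $out & $chunk ]
  Set Variable [ $i ; Value: $i + 1 ]
End Loop

# Much faster: Insert Calculated Result appends at the insertion point instead
Loop
  Exit Loop If [ $i ≥ $count ]
  Insert Calculated Result [ Target: $out ; $chunk ]
  Set Variable [ $i ; Value: $i + 1 ]
End Loop
```

Nothing here requires redesigning anyone's solution; Claris could make plain concatenation behave like the second loop internally.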
So I see no true engine performance improvements.
The problem is that Claris doesn't think there's a performance problem. Sadly, they are comforted in this wrong belief by the most vocal community members, who are even more afraid than Claris themselves of a single bit of negativity being written anywhere about FileMaker (because they fear their end customers might question the platform, and hence their business).
So the community pundits keep saying: you're using it the wrong way, blaming the poor knowledge worker for his bad design choices.
Let's look at those bad design choices:
- You're using unstored calcs; don't use them if you want speed. But those same pundits will tell you that unstored calcs are a FileMaker strength (one you apparently can't use).
- You're indexing too much. This ignores the fact that unindexed data is basically useless in FileMaker: if you need to actually use a piece of data, and not just read it with your own eyes from time to time, it has to be indexed. Period.
- Try not to use too many related fields, because they slow things down a ton. Good lord, FileMaker is supposed to be a relational database. Please de-normalize your design to get speed. WTF.
- You have to use narrow tables. This advice arrived decades after FileMaker's birth, not so long ago. And why do you need narrow tables? Because FileMaker is so poorly designed that when it accesses a record it reads all of the fields (thank god, except unstored calcs and containers) instead of just the ones it needs. Moreover, for exactly the reason above, you often have to use wide tables too: to get decent speed you need your data de-normalized and cached in the main record.
- Don't use the >, ≥, <, ≤ operators in your relationships, because they'll slow things down to a crawl. So WTF are they there for?
- When opening a database, make sure to show only one record, because otherwise FileMaker will load all the records. WTF.
- When scripting, make sure to freeze the window, because otherwise FileMaker will uselessly redraw everything for each record (see the sketch after this list).
- Please create blank layouts for every table just to run your scripts on, because otherwise scripts will be slow.
- Don't use ExecuteSQL, because it's a kind of emulation: every call gets translated into native FileMaker find queries. OK, fine, but then why does a left join with more than one criterion slow things down by a factor of up to 481×?! (The pattern is sketched after this list.)
- Don't use ExecuteSQL if your records are uncommitted, because that will be slow as hell.
- Don't use ExecuteSQL because the first time you run a query it will be 10× slower than the second time, since FileMaker first has to download all the required records and run the query on the client.
By the way, when I uncovered this, I was dissed to no end by those community pundits, who called it an absurdity and insisted, disregarding all the facts, that the query ran on the server… until they asked Clay himself, in a funny Q&A session where they kind of mocked me, and he confirmed that yes, the queries run on the client against the cached data.
- As the excellent Vincenzo showed, don't even edit a record directly: put the edits in globals, build a JSON object, then ask the server via PSoS to commit your edits; it will be much faster. Yes, even on a LAN (the pattern is sketched after this list). WTF: the most basic thing FileMaker does, it does slowly.
- Don't use the Set Field script step, because it's slow as hell. Use imports instead.
- If you import stuff, make sure you don't index most of the imported fields (but of course, if you want to use them, then index them; oh, and by the way, if I bother to import something, it's precisely because I mean to use it).
- Don't use too many relationships, because every relationship you add to the graph, even a completely unused one, weighs on the solution, not only at open time but at run time, even when the extra relationships are never touched. But at the same time you need tons of relationships, because most layout objects rely on them.
- Don't use a spider graph; use that nonsense called anchor-buoy instead. Never mind that the spider graph was proven faster for years, until at some point Claris changed the internal indexing to favor anchor-buoy, to make the pundits happy: they get to sell pretty graphs and diss the knowledge worker for his ugly one (which, yes, is ugly, but wouldn't matter at all in any other environment, because computers don't care about ugliness and you'd have decent search tools).
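For the freeze-the-window and blank-layout items above, here's the boilerplate ritual every scripted batch update ends up needing (the layout and field names are made up for the example):

```
# Stop redraws, park on a field-less layout, then loop
Freeze Window
Go to Layout [ "BLANK_Invoices" ]   # an empty layout based on the Invoices table
Go to Record/Request/Page [ First ]
Loop
  Set Field [ Invoices::Status ; "Archived" ]
  Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
Go to Layout [ original layout ]
```

None of this has anything to do with the business problem; it exists purely to stop FileMaker from hurting itself.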
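For the ExecuteSQL items, this is the shape of query I mean (tables and fields invented for the example). One join criterion behaves; add a second and it crawls:

```
// Fine: a single join criterion
ExecuteSQL (
  "SELECT i.InvoiceID, p.Amount
     FROM Invoices i
     LEFT OUTER JOIN Payments p ON i.InvoiceID = p.InvoiceID" ;
  "" ; ""
)

// Up to 481x slower in the tests I'm referring to: one extra criterion on the same join
ExecuteSQL (
  "SELECT i.InvoiceID, p.Amount
     FROM Invoices i
     LEFT OUTER JOIN Payments p
       ON i.InvoiceID = p.InvoiceID AND p.Status = ?" ;
  "" ; "" ; "posted"
)
```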
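And here, roughly, is Vincenzo's globals-to-JSON-to-PSoS pattern as I understood it; the script, field, and key names are mine, a sketch rather than his actual code:

```
# Client: gather the user's edits from globals into a JSON object
Set Variable [ $json ; Value:
  JSONSetElement ( "{}" ;
    [ "id" ; Invoices::g_ID ; JSONString ] ;
    [ "status" ; Invoices::g_Status ; JSONString ] ;
    [ "amount" ; Invoices::g_Amount ; JSONNumber ]
  ) ]
Perform Script on Server [ "Commit Edits" ; Parameter: $json ; Wait for completion: On ]

# Server-side "Commit Edits" script: find the record, apply the edits there
Set Variable [ $json ; Value: Get ( ScriptParameter ) ]
Go to Layout [ "BLANK_Invoices" ]
Enter Find Mode [ Pause: Off ]
Set Field [ Invoices::ID ; JSONGetElement ( $json ; "id" ) ]
Perform Find []
Set Field [ Invoices::Status ; JSONGetElement ( $json ; "status" ) ]
Set Field [ Invoices::Amount ; JSONGetElement ( $json ; "amount" ) ]
Commit Records/Requests [ With dialog: Off ]
```

That this round-trip beats a direct edit even on a LAN is exactly my complaint.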
In fact, most of those "bad habits" amount to using FileMaker's supposed strengths by the book. Making a FileMaker solution bearable ("fast", in pundit language) consists of un-using FileMaker: bending it to avoid every feature that makes FileMaker FileMaker.
So, to get a speedy FileMaker solution: please don't use FileMaker features, and reach for web-viewer stuff instead (forcing you to become the JavaScript master you precisely didn't want to be).
To me, a RAD vendor should make sure its nice features are actually usable, and if it ships those features it has to make sure they're fast. Otherwise it's just selling vaporware.
Unfortunately, those in charge of today's Claris are the same people who were in charge of FileMaker Inc., the same people who made those bad design choices, and mostly what they produce is immobilism.
It's very, very rare that the best people to solve a problem are the ones who created it.
Oh, I know there are a few new heads, but they came from inside FMI (aside from the useless Claris Connect thing, killed right at the start by INSANE pricing), so they're blinded by their "culture".
If they treat FileMaker's technical side the way they treat their "marketing" and their community handling (where they keep praising the awesome community while completely destroying its main tool, the community forums, by using the worst forum platform on earth, Salesforce), then we're in a very bad situation.