The solution I inherited includes many scripts that check or set a parameter and then call (re-run) themselves.
In one case, script A intentionally bypasses several pages of code, sets a parameter (a customer ID), and then calls itself. On the second pass, the script executes the previously bypassed code.
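For readers outside FileMaker, the re-entrant pattern described above can be sketched in Python. This is only an analogue, not the actual script: `pick_customer` and `process_customer` are hypothetical stand-ins for the real work.

```python
# Sketch of a re-entrant script: on the first pass the main body is
# bypassed, a parameter is set, and the script re-runs itself; on the
# second pass the previously bypassed code executes.

def script_a(customer_id=None):
    if customer_id is None:
        # First pass: the "several pages" of code are bypassed.
        customer_id = pick_customer()     # hypothetical helper
        return script_a(customer_id)      # script A calls script A
    # Second pass: the previously bypassed code now runs.
    return process_customer(customer_id)  # hypothetical helper

def pick_customer():
    return 42

def process_customer(cid):
    return f"processed customer {cid}"

print(script_a())  # → processed customer 42
```

The parameter doubles as the "which pass am I on?" flag, which is the essence of the technique.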
This seems somewhat inelegant, but on the plus side it avoids creating two scripts when one will do the job.
I'm wondering whether this is (or is not) considered a "best practice"?
If it works, it can be very clever, as long as you can handle the complexity of the script. I know this technique from PSoS scripts that call themselves with a Perform Script on Server step when they are not already running on the server.
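The PSoS variant mentioned above can be sketched the same way: the script checks where it is running and, if on a client, hands itself to the server. This is a rough Python analogue; `RUNNING_ON_SERVER` and `send_to_server` are stand-ins for FileMaker's environment check and the Perform Script on Server step, not real APIs.

```python
# A script that re-dispatches itself to the server when it detects
# it is running on a client, then does the real work server-side.

RUNNING_ON_SERVER = False  # stand-in for an environment check

def send_to_server(func, *args):
    # Stand-in for the "Perform Script on Server" step.
    return f"queued {func.__name__} on server with {args}"

def heavy_report(year):
    if not RUNNING_ON_SERVER:
        # Not on the server yet: hand this same script to the server.
        return send_to_server(heavy_report, year)
    # On the server: execute the real work.
    return f"report for {year}"

print(heavy_report(2024))
```

Here, as in the original question, one script serves both roles, and the environment check decides which branch runs.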
Yes, I have one such script with three branches, each of which sets a global variable to hide various buttons. The whole thing is somewhat complicated, which is what prompted me to ask whether this structure is recommended or not.
In my experience, scripts structured like this can run more slowly than the same logic broken out into subscripts. I'm not sure what is going on in the compiler (and it may have changed since), but in v14 through v16 I was finding that re-entrant scripts ran more slowly than scripts that called subscripts.
Since then I have favoured "controller" or "router" scripts. The controller scripts do the initial checking and setting of parameters, then they call sub-scripts. Often the sub-scripts do a second round of routing.
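The controller/router alternative can be sketched like this. Again a hedged Python analogue, with hypothetical subscript names: the controller does the checking and routing up front, then delegates to small, single-purpose subscripts instead of re-entering itself.

```python
# A "controller" script: validate the incoming action, then route to
# a dedicated subscript. Subscripts may do a second round of routing.

def controller(action, **params):
    routes = {
        "create": create_customer,
        "update": update_customer,
    }
    if action not in routes:
        raise ValueError(f"unknown action: {action}")
    return routes[action](**params)

def create_customer(name):
    return f"created {name}"

def update_customer(customer_id, name):
    return f"updated {customer_id} -> {name}"

print(controller("create", name="Acme"))
```

Each subscript stays short and testable on its own, which is the usual argument for this structure over a single re-entrant script.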