Hi, I started this project using transactions to interact with the data, via the transaction module from KarbonFM. I have a situation where one of the scripts called during the transaction fails because it wants to validate that a record exists before doing its work. Since that record is created as part of the same transaction, the script does not ‘see’ the newly created record through the relationship because it is not committed yet. Is there any way around that?
Thanks for your help
Thanks @jormond, I’ll avoid the commit because if the second half fails for any other reason, I do want to roll back the first half.
From your suggestion, I’ll consider either adding a branch in the subscript to bypass that validation for this specific scenario, or replicating the logic in the parent script so I can skip that validation there (but I don’t like duplicating code). The validation in the subscript needs to stay because the subscript may be called by other scripts.
Thanks for helping me brainstorm on this.
I don’t typically use transactions. I knew about the module, and I like what I’ve gotten from it so far, but there are gotchas and the general approach is very different from what I’m familiar with.
It’s a different design pattern, for sure. Once you adjust a little, it becomes much easier.
Related to this, can anyone share how they handle what would otherwise be captured in an auto-enter?
My understanding is that when a child record has an auto-enter that looks at the parent record, and the parent record is created during the same transaction, the field with the auto-enter will remain blank. It looks like I have to script the Set Field instead, or maybe add the field to the relationship (but sometimes that would break the relationship, and it does not work when the child needs to transform the value via a calculation).
If you are doing transactions, you want to control the interaction as much as possible. Script the Set Field. I try to keep data manipulation at the script level as much as possible. Then you only have one place to look.
As you look at Karbon, you will see that a lot.
It must be my reflexes from non-transactional coding. I feel like I’ll end up with more than one script editing the same record / table and needing to duplicate the logic. This is why I normally turn to auto-enters for a good part of the data-integrity work. In my current setup, where I know a record will pre-exist the one I am creating, I do have some auto-enters. I did get caught expecting the auto-enter value to be there when both records are created in the same transaction. The framework is there, but when you’re just starting to use it, some parts get slippery fast. Thanks for sharing; it let me catch up faster than if I were trying this all by myself.
We have a rule here that auto-enter values are only used within the same table. Any cross-table values are usually entered via scripts, mainly due to our separation model and the lack of any structure in the data files.
It is possible to use some workarounds for cross-table auto-enter calculations using Case or Let, but for the reason above we avoid these.
It comes down to pulling out the reusable pieces, and separating them into their own script. The process of editing a record is really the same most of the time. The only differences will usually be the actual data, and which fields are being set. So handle the business logic before you send it to the script that actually sets the data or creates the records.
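That split can be sketched outside FileMaker, too. Below is a minimal Python illustration of the idea, with the business logic deciding *what* to set and a single reusable routine doing the setting; the names (`edit_record`, `approve_invoice`, the field names) are hypothetical and not part of Karbon or any framework:

```python
def edit_record(record, fields):
    """The single, reusable 'set the data' step: applies field/value pairs."""
    for name, value in fields.items():
        record[name] = value
    return record

def approve_invoice(record):
    """Business logic lives here; it only assembles the payload to be set."""
    payload = {"status": "approved", "approvedFlag": 1}
    return edit_record(record, payload)

record = {"status": "draft"}
approve_invoice(record)
# record now reflects the edit applied by the one shared routine
```

In FileMaker terms, `edit_record` would be the one subscript that performs the Set Field steps, and every caller passes it the data to write.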
Not my use case, but close. How would any of you go about this:
You have a script that creates records tied to a parent. Each child added carries a % value in a field (positive or negative). When summing all records together (committed or uncommitted), the total % cannot exceed 100 and cannot drop below 0. Someone could call this script in a loop within a single transaction, with the parent and the % value changing between calls (in fact, I have no control over the calling mechanism).
Today, I validate the new record (a single record per script call) against the committed data only, to make sure it does not bust the limits.
The problem is that if someone calls this multiple times within a single transaction, I need to account for the sum of the uncommitted changes on top of the committed data, and because the parent can change between calls, I cannot simply track this in a single variable; sums for different parents would contaminate each other. Because I am not receiving an array I could traverse ahead of time to do the validation, it feels like I would need to store all this in a dictionary (JSON or other) in a global variable and clear its contents afterwards. It sounds like there must be another way. Am I missing something?
The scenario is oversimplified: we are not really talking about a loop, so I cannot pass a script result along and add to it on the next iteration. Hence the reference to a global variable.
Even in the parts of the scripting that I control, this setup occurs, and while I have not yet determined how it will be addressed, it points to a $$dictionary. I would like to be able to handle it in another fashion.
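For what it’s worth, the $$dictionary idea can be sketched in a few lines of Python. This is only an illustration of the technique (a per-transaction dictionary keyed by parent ID accumulating uncommitted deltas), not FileMaker code; `pending` stands in for the $$dictionary global, which would be cleared on commit or revert:

```python
pending = {}  # stands in for a $$dictionary global; cleared on commit/revert

def validate_and_track(parent_id, delta, committed_total):
    """committed_total: the % sum already committed for this parent.
    delta: the % value of the record this call wants to create."""
    projected = committed_total + pending.get(parent_id, 0) + delta
    if not (0 <= projected <= 100):
        raise ValueError(
            f"total for {parent_id} would be {projected}, outside 0..100"
        )
    # Validation passed: remember this uncommitted delta for later calls
    pending[parent_id] = pending.get(parent_id, 0) + delta
```

Because the dictionary is keyed by parent, a second call for parent B cannot contaminate the running sum for parent A, which is exactly the cross-contamination concern above.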
Unfortunately, the description is a touch too abstract to understand what is happening. Can you describe what the data is, and what the goal is? What is the percentage? How is it calculated?
If I go off your description alone, it’s a matter of 2 separate loops or using ListOf ( summary field ) to pull the data into an array.
The real data is quantities (weight of bulk material) added to or subtracted from storage locations, each with a storage capacity (100 kg or whatever limit is specific to that one location; the limit is not the same across locations, and 0 is the lowest possible total balance when adding together all the inventory movements). I have a script that adds a single movement as part of a transaction. If the script is called only once during the transaction, I can validate against the committed data alone. If it is called more than once, the validation needs to account for uncommitted data as well.
2 storage locations locA & locB where locA.maxCapacity <> locB.maxCapacity
- locA is empty
- locB is empty
- receiving x kg into locA (weight and location are parameters my script receives)
- receiving y kg into locB
This is easy: I compare the quantity received with the balance of the location specified in the parameter. If adding the two together does not bust the max capacity of that location, I process the movement; if not, I throw back an error.
- receiving z kg into locA
- receiving c kg into locB
- receiving b kg into locB
- shipping out r kg from locB
- receiving s kg into locB
Here, for location A, comparing only against committed data is fine. For location B, script call #3 requires me to account for call #2 when comparing with the balance, because the impact of that receipt is not committed and does not show in the balance. The same concept applies when shipping out: r kg may exceed the committed balance, and that is still OK if the quantities received in calls #2 and #3, combined with the committed balance, cover r kg. Then again, on call #5, to know whether s kg will bust the location capacity, I need to account for all the uncommitted data, but not the uncommitted data that pertains to location A.
Again, I do not have control over how my script is going to be called. Some of this also happens in my own code, because some operations require me to generate movements that are “adjustments” to the inventory as part of the operation, within the same transaction.
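To make the locA/locB walkthrough concrete, here is a short Python sketch of the per-location validation described above, assuming (my assumption, not the actual implementation) a per-transaction dictionary of uncommitted deltas keyed by location; receipts are positive, shipments negative:

```python
pending = {}  # uncommitted kg deltas this transaction, keyed by location ID

def validate_movement(loc_id, qty_kg, committed_balance, max_capacity):
    """qty_kg > 0 for a receipt, < 0 for a shipment.
    committed_balance / max_capacity come from the committed data
    for this one location (each location has its own capacity)."""
    projected = committed_balance + pending.get(loc_id, 0) + qty_kg
    if projected < 0:
        raise ValueError(f"{loc_id}: balance would fall below zero")
    if projected > max_capacity:
        raise ValueError(f"{loc_id}: capacity {max_capacity} exceeded")
    # OK: record the uncommitted delta so later calls see it
    pending[loc_id] = pending.get(loc_id, 0) + qty_kg
```

Call #4 (shipping r kg from locB) passes as long as the committed balance plus the pending receipts from calls #2 and #3 covers r, and locA’s pending quantities never enter into locB’s check.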
All of this has additional layers, because the structure also tracks different materials and different owners, each having their own balance of a given material (like bank accounts in different currencies). That said, a given location can only store one material at a time, though that can change if the location is emptied and “reset” to contain a new material.