I understand that JSONGetElement ( json ; keyOrIndexOrPath ) is case-sensitive, so "firstName" is different from "firstname" or "FirstName". I am receiving JSON submissions from multiple external sources, where I will have to assume the keys and paths will at least be spelled the same, but perhaps with differing upper-, lower-, or camel-casing.
So how can I guarantee that I can get all of the values I need if the JSON provided fluctuates in how the keys or paths are capitalized?
This does assume, however, that a key will not be re-used in some other non-case-sensitive form within the same structure. Perhaps it would be best to upgrade the example above with some contingency handling in the event that ~matching_key contains more than one value...
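For anyone following along, here is a minimal sketch of the FilterValues approach with that contingency handling folded in. The $json variable and the target key "firstName" are just placeholders, and this relies on FilterValues comparing values case-insensitively by default:

Let ( [
    ~keys = JSONListKeys ( $json ; "" ) ;                  //  every top-level key, one per line
    ~matching_key = FilterValues ( ~keys ; "firstName" )   //  case-insensitive match
] ;
    Case (
        ValueCount ( ~matching_key ) ≠ 1 ;
            "? key missing, or duplicated in a different case" ;
        JSONGetElement ( $json ; Substitute ( ~matching_key ; ¶ ; "" ) )   //  strip trailing ¶
    )
)

The Case test handles both contingencies at once: an empty result (key not present in any casing) and more than one result (the same key present in two different casings).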
That works great, with a couple of tweaks: the ~matching_key result includes a trailing return (¶), so I substituted that out, and the Let statement was missing its closing ];
I have 4 levels of keys to parse, so this will get more complicated. I suppose I can get the keys for each outer key and iterate my way through. In my example below, I would likewise need to filter each node in the path "Claim.VehicleLocation.Address.Address1", so that "claim.vehiclelocation.address.address1" pulls the value.
I think, if I were in your shoes, I might take the concept that you just perfected and make a dedicated CF for it, e.g. something like JSONGetElementCaseInsensitive ( JSON ; Key ).
Then, that could be used either in a While calc or a looping script, and the code could still be pretty readable.
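To make the idea concrete, here is a rough sketch of what such a CF might look like using While. This is only my guess at an implementation, not tested code, and it assumes the keys at each level are unique once case is ignored (per the caveat above) and that no node of the path resolves to an array:

//  JSONGetElementCaseInsensitive ( JSON ; Key )
While ( [
    ~json = JSON ;
    ~nodes = Substitute ( Key ; "." ; ¶ ) ;   //  "Claim.VehicleLocation.Address.Address1" → value list
    ~i = 1
] ;
    ~i ≤ ValueCount ( ~nodes ) ;
[
    ~match = Substitute (
        FilterValues ( JSONListKeys ( ~json ; "" ) ; GetValue ( ~nodes ; ~i ) ) ;
        ¶ ; ""
    ) ;                                       //  case-insensitive match at this level
    ~json = JSONGetElement ( ~json ; ~match ) ;
    ~i = ~i + 1
] ;
    ~json                                     //  the value at the final node
)

From a script or calc it would then read as a single call, e.g. JSONGetElementCaseInsensitive ( $json ; "claim.vehiclelocation.address.address1" ).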
If you feel like it, you are welcome to a custom function I wrote one morning just to test myself and see how long it would take. It's called FlattenJsonObject.
Essentially, you feed it a JSON object, and a format parameter, and it returns you various flattened versions of the data. It only works on JSON objects - not on arrays.
A few examples using the JSON that you posted above look like this:
I never got around to sharing this function because I did not feel there would be much demand for it. It was mostly just a diversion for a morning. But -- I could see that it could apply to your situation because it can hand you a list of paths that you could then process using the FilterValues approach.
It's yours if you care to use it, but no offense taken at all if it feels better to roll your own.
Well, not very simple IMHO. Thanks for sharing it! I imported your CF into my solution, plugged it into the "perfected" code above, and it worked as advertised. I will worry about dealing with arrays if I have to down the road, but for this application, at the moment, I am golden.
Super! I am glad it helped. If you run into any issues with it, please let me know. Nice that it is getting some use -- until now it has just been sitting around over here...
I just removed the sample file with the CF to flatten an object because I found a bad bug in it.
@SunrayBrad and anyone else who downloaded the file: My sincere apologies.
I have just reworked the function to patch the bug. Later today I will review and test my changes some more, and if all looks well, I will post the fix.
So that the bug and circumstances are clear:
If the original version of the CF is supplied JSON which uses a newline character (carriage return or line feed) within a key name, then the function will beach-ball (hang) as it attempts endless recursion.
The flaw in my code is that I did not account for the fact that JSONListKeys does not allow us to detect when a key happens to contain a newline. This created a circumstance where the steps in the function which decide whether to recurse could be thwarted, because the list of keys returned by JSONListKeys may be incorrect.
The resolution was to add a step which validates each key the function operates on; in the event that a bogus key is detected, the function now returns an error message.
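Conceptually, the guard looks something like this. To be clear, this is a simplified illustration of the idea rather than the actual internals of the function, and it leans on the convention that the FMP JSON functions prefix error results with "?" (a legitimate scalar value could also start with "?", so the real check needs to be stricter):

Let ( ~value = JSONGetElement ( ~json ; ~key ) ;
    If ( Left ( ~value ; 1 ) = "?" ;   //  FMP JSON functions prefix errors with "?"
        "Error: bogus key detected (possibly contains a newline): " & ~key ;
        ~value                         //  key round-trips, so it is safe to recurse
    )
)

In other words: before recursing, each key pulled from JSONListKeys must prove it can actually be resolved against the JSON it supposedly came from.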
The above example has well-formed JSON. However, it contains a key with a carriage return in it. This is the type of condition which would have broken the original version of the function that I posted. It is, of course, not a likely scenario, but it is certainly allowed. A fix was definitely in order.
I'll give myself most of the day to take my eyes away from the code -- then I will re-review and test, and post back here.
@SunrayBrad : Sorry for somewhat hijacking your thread here -- I just wanted to make sure I let you and others know about this.
I'm searching around for an answer that I can cite from a spec, but have not looked hard enough yet. What I can say, is that the FMP JSON functions allow it. I understand that doesn't necessarily make it proper, but since it can happen -- even if it is incorrect -- I feel obliged to update the function to handle it in a non-catastrophic manner.
@SunrayBrad : If you could please do me a favor, I'd appreciate it if you could drop me a line to let me know you saw the updated version of the function. Since I know that you did get a copy of the previous version with the bug, I'd like to make sure you get your hands on the corrected version. Usage is exactly the same, so it should be pretty easy to drop in as a replacement.
The RFC says that a name must be a string. A string in JSON is allowed to include whitespace characters, which includes newline. The proviso is that these characters must be escaped.
ValueCount ( List ( "a" ; "b\¶b" ; "c" ) ) // 3
ValueCount ( List ( "a" ; "b¶b" ; "c" ) ) // 4
ValueCount doesn't mind, but it's easy to imagine it not surviving other use cases. For instance, how does FMP handle variables with escaped characters? That's a fairly common usage for keys.
Let ( $b\¶b = "I'm a JSON key" ; $b¶b ) // no result
Both of us have posted well-formed JSON examples, but they are not equivalent.
I'm reasonably confident that '\r' and '\n' are the very escape sequences used to represent the return and line feed chars within a JSON (quoted) string. Another option would be the \u000D and \u000A escape sequences.
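If I understand the behavior described earlier in the thread correctly, the condition is easy to reproduce in the Data Viewer. This snippet builds an object whose (made-up) key name contains a carriage return, via Char ( 13 ); per the discussion above, the FMP JSON functions accept it, and JSONListKeys then returns a key list that no longer lines up one value per key:

Let ( ~json = JSONSetElement ( "{}" ; "line" & Char ( 13 ) & "break" ; 1 ; JSONNumber ) ;
    List ( ~json ; JSONListKeys ( ~json ; "" ) )
)

Inspecting the first result shows the key serialized with an escape sequence inside the quoted string, while the key list shows the raw return character, which is exactly the mismatch that tripped up the original version of the function.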
Edit: Though certainly not something that I'd call definitive proof, this screenshot of the Data Viewer might illustrate one reason why I hold the above belief. Or, if not, it could be a point of departure for identifying any misunderstanding I have.