My point of sale system includes a products table containing all items in my inventory. My distributor produces a text file with all items in their inventory, which I believe is updated daily.
I’d like to download their text file and query the data to determine if specific items are in stock at their warehouse. I’d like to avoid importing their data file because it’s very large and will take too long.
If I import the text file into a container or text field, can I query the file for a specific string? I’d be looking for one or more product SKUs.
My goal is to go to my products layout, perform a find to get a found set, then display the results with a “green stoplight” or other visual indicator showing that the line item(s) are in the text file.
The text file from my distributor has 40,000 rows of comma separated text. Perhaps I could import each line into a single field to speed up the import?
Do it on the command line. Grab the supplier file, export the essential data from your own table for comparison, then process the supplier file and push any new records into a new file. A second process could check for modifications. These steps might need some attention to get right.
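For illustration only (not FileMaker code), here is a Python sketch of that command-line comparison. The filenames, the assumption that the SKU is the first column, and the one-SKU-per-line export format are all hypothetical; adjust to your actual files.

```python
import csv

def load_supplier_skus(path, sku_column=0):
    """Collect the SKU column of the supplier's comma-separated file into a set.
    The column position is an assumption; check your distributor's file."""
    with open(path, newline="") as f:
        return {row[sku_column].strip() for row in csv.reader(f) if row}

def load_my_skus(path):
    """Read your own SKUs, assumed exported one per line from the products table."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def skus_in_stock(my_path, supplier_path):
    """Return the SKUs you carry that also appear in the supplier file."""
    return sorted(load_my_skus(my_path) & load_supplier_skus(supplier_path))
```

Set operations make the 40,000-row comparison effectively instant; the result file could then be imported back into FileMaker, or diffed against yesterday's run to catch modifications.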
There is little you can do natively if the file is in a container field.
If you import the file's text into a text field, you can search that field using any of FileMaker's text functions. That said, a text field has no concept of rows: rows and columns are database constructs, not text constructs.
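To illustrate the pitfall of searching one big text blob (shown here in Python rather than with FileMaker's PatternCount/Position; the sample data is made up): a plain substring test can report a false positive when one SKU is a prefix of another, so you may want to anchor the search between a line start and the column separator.

```python
# Hypothetical file contents, as they might sit in a single text field.
blob = "SKU,Title\nA100,Widget\nA1001,Deluxe Widget\nB200,Gadget\n"

def sku_present(blob, sku):
    """Naive substring test -- the analog of PatternCount(...) > 0."""
    return sku in blob

def sku_present_delimited(blob, sku):
    """Require the SKU to sit between a line start and the following comma,
    so a short SKU does not match inside a longer one."""
    return ("\n" + sku + ",") in ("\n" + blob)
```

With this sample, `sku_present(blob, "A10")` is true even though no product "A10" exists (it matches inside "A100"), while the delimited version correctly rejects it.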
You could also use FileMaker's File script steps to analyze a file's data directly. Again, there is no row concept: you are analyzing a data stream, which in this case happens to be a text stream. Doing such an analysis requires a good understanding of the file's format, and especially of the exceptions that exist in the file. For example, if data can contain the row separator character, say a carriage return, how do you distinguish data from row separator?
You could also use a plugin to help. For example, the MBS plugin includes RegEx functions, which are likely faster than FileMaker's native text functions.
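As a sketch of what such a regex lookup might express (written in Python for illustration; the MBS functions take similar patterns, and the sample data is hypothetical): anchor each SKU at a line start and require the column separator after it, so only whole first-column values match.

```python
import re

blob = "A100,Widget\nA1001,Deluxe Widget\nB200,Gadget\n"

def find_skus(blob, skus):
    """Return the subset of skus that appear as a complete first-column value.
    re.M makes ^ match at every line start; re.escape guards SKUs that
    contain regex metacharacters."""
    found = set()
    for sku in skus:
        if re.search(r"^" + re.escape(sku) + r",", blob, flags=re.M):
            found.add(sku)
    return found
```

One pass per SKU over an in-memory string like this is fast enough for a 40,000-row file and a handful of SKUs.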
I realized that there’s only one essential data element — the product SKU — in the distributor’s text file. I’m going to try to import just that field into a normal table.
If the additional effort required is not prohibitive, I would consider going a little further and exploring approaches that allow your code to validate that the data adheres to the format your solution expects.
That way, if the file format changes on you and the presence or absence of the SKU in that target column changes its significance, you have the opportunity to detect the change and adapt the solution code as necessary.
Importing just two of the data fields (SKU and Title) took two minutes. I was able to create a relationship that allowed me to create a “green light” using conditional formatting. Next step is to automate the import. Thanks to everyone who offered suggestions.