I'm really happy about the newer import options, which now allow using various .csv formats.
FMP is still very picky about file endings, though.
What I actually hate is that when switching from an import file that uses semicolons to one that uses commas, all but the first field match are discarded. The new field-by-field mapping has its advantages, but it is annoying and inefficient at the same time.
Since I have a dozen different import sources, I don't use predefined import scripts yet. But is that the way to go? Unfortunately, the import formats are not even stable within a single source; they change perhaps every year.
What surprised me even more: there is no progress-bar window, and the import dialog stayed stuck above all other windows until the import was done. I may need to reboot my computer, since I had observed proper import behavior before. It just makes me uncomfortable that even in FMP20 such import issues are still not fixed.
Is there a better or easier way to improve imports? I thought about doing a mail-merge export of some dummy records, using its first field definition to update the import format, and then using the "matching field names" operation.
You can always write code to do imports. I do that routinely with very large files and with files in formats (lots of them) that FileMaker doesn't understand.
Excellent suggestion @OliverBarrett .
Another method is to open the data file directly with FileMaker, which creates a new FMP file containing the data. You can then do all the cleanup you need in a disposable file, and write all sorts of messy scripts you wouldn't want in your production files. You can create relationships between the new data and the target file, do data comparisons, and pull in whatever you need to build a completely accurate data set. One nice thing about having your data-to-import in an FMP file is that, when the data source is open, only the current found set is imported, which gives you even more control over the import process. Of course, if you plan to run this import frequently, a built-for-purpose FileMaker-based import controller makes the work easy.
I typically script any imports that are likely to be a repeated exercise, and in most cases I import first into an "intermediary" table in which I massage the data (or sometimes just peruse it to confirm that all is well). Of course, this requires a certain amount of consistency in the data sources. For example: I have a data source in which dimensions are entered as 8.00x6.50x4.75, or with an asterisk instead of the "x", or as 8.00x6.50, or as just 8mm, etc. These get parsed into dedicated Length, Width, and Depth fields. In other cases some fields are blank on certain records and need special designations in the final destination, and so on.
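That kind of dimension parsing can be sketched outside FileMaker roughly like this. This is a hedged Python illustration, not the poster's actual script; the separators handled ("x", "*", "×") are assumptions based on the examples given:

```python
import re

# Separators between dimension parts: "x", "X", "*", or "×"
DIM_SEP = re.compile(r"\s*[x*×]\s*", re.IGNORECASE)

def parse_dimensions(raw):
    """Split strings like '8.00x6.50x4.75', '8.00*6.50', or '8mm'
    into a (length, width, depth) tuple; missing parts become None."""
    parts = DIM_SEP.split(raw.strip())
    values = []
    for part in parts:
        m = re.match(r"([0-9]+(?:\.[0-9]+)?)", part)  # leading number, unit dropped
        values.append(float(m.group(1)) if m else None)
    values += [None] * (3 - len(values))  # pad to three slots
    return tuple(values[:3])
```

For example, `parse_dimensions("8.00*6.50")` yields `(8.0, 6.5, None)`, which maps directly onto dedicated Length, Width, and Depth fields.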
There is no progress bar window, but the import dialog remained sticky above all other windows until the import was done.
In scripted imports one can select "dialog: off" once the mapping is complete and it's done in the background. The user never sees any of the import dialogs (including the results). I include error handling in my scripts to notify the user if anything went wrong.
Yeah, I use fresh tables, too. They have the major advantage that indexes don't need updating during the import.
However, in this application there are plenty of field replacements on certain fields during import, and the subsequent import from the import table into the working table is heavily scripted as well.
So for that application I usually do external data handling as well, especially converting the .csv to an FMP table*, sometimes exporting just a subset of that table to a text file**, running a "sort -u" on that subset, and importing the result into my import table.
For the most frequently repeated, heavyweight imports I also have some predefined import tables, since creating an empty clone is faster than deleting all records.
This can shrink unmanageable XML data of several hundred MB, converted externally to around 100 MB of .csv, which is then compared and imported for the final grains of gold. So I was hoping for improvements here, not such significant steps backwards.
* (and after 30 years I still hate FMP's file-name suggestion with the extra " converted", or the even longer German " konvertiert"; the dialog still reacts too slowly when removing this extra word to show whether the deletion was accepted, leaving me with extra deleted characters </rant>)
** repeating the rant that I still can't define a DOS end-of-line export on my Mac, forcing me to open the export in vim and run a "set fileformat=dos", ":%!sort -u" sequence. This makes me furious every time, since it always reminds me of an unnecessary extra step I have to do.
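If dropping into vim every time gets tiresome, the "sort -u plus DOS line endings" step can be done with a small script instead. A Python sketch with hypothetical file names (note that `sorted()` orders by code point, which may differ from a locale-aware `sort`):

```python
from pathlib import Path

def sort_unique_dos(src_path, dst_path):
    """Rough equivalent of `sort -u`, written out with DOS (CRLF)
    line endings so Windows-side tools accept the file unchanged."""
    lines = Path(src_path).read_text(encoding="utf-8").splitlines()
    unique_sorted = sorted(set(lines))
    # newline="" stops Python from translating our explicit \r\n
    with open(dst_path, "w", newline="", encoding="utf-8") as out:
        out.write("\r\n".join(unique_sorted) + "\r\n")

# demo input with a duplicate line
Path("demo_in.txt").write_text("b\na\nb\n", encoding="utf-8")
sort_unique_dos("demo_in.txt", "demo_out.txt")
```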
If you ever want to do this on a table-by-table basis, Truncate Table is near-instantaneous, even for millions of records.