Is there a way to modify/rename a table fieldname from script?

Does anyone know of a way to modify/rename a table fieldname from script?

The use case is importing CSVs with no field names, having FM inspect the data in each column to determine the field name, and then renaming F1, F2, F3... etc.

If you're importing and ending up with fields named f1, f2, f3, etc., then you must be importing into a new table instead of an existing table? That is the only case in which the f1, f2, f3 naming is used for fields.

Short answer - no.

Longer answer: yes. You can manipulate a CSV file to insert your own "field name" row at the top. There are a number of ways to do this. One is to read in the text of the CSV, interrogate it to determine what each of the column names should be, then insert that text as the first row of your CSV.
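If you go the pre-processing route, here is a minimal sketch in Python of that idea: read the headerless file, work out a name for each column with whatever inspection rule fits your data, and write a copy with that row prepended. The file names and the inference rule are placeholders, not part of the original suggestion.

```python
import csv
from itertools import zip_longest

def add_header_row(src_path, dst_path, infer_field_name):
    """Copy a headerless CSV, writing an inferred field-name row first."""
    with open(src_path, newline="") as src:
        rows = list(csv.reader(src))
    if not rows:
        raise ValueError("CSV is empty")
    # Inspect each column's values to decide what to call it.
    columns = list(zip_longest(*rows, fillvalue=""))
    header = [infer_field_name(values, i) for i, values in enumerate(columns)]
    with open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        writer.writerows(rows)

# Placeholder inference rule: keep FileMaker-style names (f1, f2, ...)
# until you have a real rule for your data.
def infer_field_name(values, index):
    return f"f{index + 1}"

# add_header_row("import.csv", "import_with_header.csv", infer_field_name)
```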

Another: import the CSV into a temporary table, interrogate the data in each column, then create a new record which acts as the column headers:

1. Set the values on this record to the field names.
2. Give this record a value of 1 in a flag field to indicate it's the header.
3. Sort the records in this table so that the header is the first record (e.g. by the flag field in descending order).
4. Export the records out of this table into your own table, or into a new CSV which will have a first row of header names.

Once done, you can import it as you typically would, but this time the first row is the field names...


@weetbicks gave you some very nice ways to achieve your goal. A few years ago the import functionality was improved in many ways, adding a lot of flexibility and features. You can, for example, map the input field names to the receiving table's field names, which is quite powerful!

Good plan. Thanks very much.

Another alternative: if you drag and drop an Excel file onto a FileMaker icon, you end up with a FileMaker database, with field names matching column names and data already populated. That's a good place to start from before merging from this new file into your primary DB.


Thank you, Kirk, for that suggestion. My problem here is that the data is not nicely structured in an Excel file. Rather, as in the original query, the data is in a CSV with the field name actually in the row: think 'nameFirst=Bill' in the first column and 'nameLast=Smith' in the second. The third column may be 'AddressStreet=Main' and the fourth may be 'AddressStreetType=Blvd', but AddressStreetType may be missing as a column if there is no data for it. That means I can't count columns and assume the fields remain consistent, particularly in a 250-column file.

At the moment, I think weetbicks' approach is the winner.
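Given the 'fieldName=value' cells described above, a pre-processing pass outside FileMaker could rebuild a conventional CSV with a header row while tolerating columns that come and go. This is a minimal sketch in Python; the file names and the '=' delimiter are assumptions taken from the example above.

```python
import csv

def normalize_key_value_csv(src_path, dst_path):
    """Turn a CSV whose cells look like 'nameFirst=Bill' into a normal
    CSV with a header row, tolerating columns that appear and disappear."""
    records = []
    field_names = []  # keep field names in first-seen order
    with open(src_path, newline="") as src:
        for row in csv.reader(src):
            record = {}
            for cell in row:
                if "=" not in cell:
                    continue  # skip cells without an embedded field name
                name, value = cell.split("=", 1)
                record[name] = value
                if name not in field_names:
                    field_names.append(name)
            records.append(record)
    with open(dst_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=field_names)
        writer.writeheader()
        writer.writerows(records)  # missing keys come out as blank cells

# normalize_key_value_csv("export.csv", "normalized.csv")
```

The resulting file can then be imported with "first row contains field names", even when some records lack some of the 250 columns.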

Excel has a function to split columns based on a delimiter, if that would work for cleaning up the field-level metadata...
Alternatively, you could script splitting the field after importing...
That doesn't fix the issue of columns missing on some records, however... :frowning: