First: I'm on mobile since I can't reach Reddit at work, so hopefully the formatting comes through okay.
I need some help finding a better way to import large text files and parse the lines into a table.
Wall of info:
The text file is not "clean", so if anyone has ideas on how to clean it up before importing, I'm open.
Here is the format: Lines 1 – X are the parameters used to run the report. These are not consistent in number, as I could get individual files for each department or one file with multiple departments.
Lines (X+1) – Y are the report details (date/time run, user that ran the report, total number of actual records…)
Then I may have the header row or a new page header. The output appears to be laid out landscape for 8.5 x 11″ paper, so if the first two sections take up a whole page, there will be a "page header" before the actual field headers. The data is broken up across pages, so there is a page header and footer around roughly every 40 rows of data.
Each data line has 48 fields.
Additionally, some of the fields may contain special characters (usually single-character-width ™, ©, ® marks or single-character-width fractions). These throw off a fixed-width import spec, and I have tried all TriState options when using an FSO to read in line by line.
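To show the kind of pre-clean I'm imagining for those symbols, here's a rough sketch (in Python just for readability; my actual code would be VBA, and the exact character set is a guess at what's in my files):

```python
import unicodedata

def clean_line(line: str) -> str:
    """Replace the single-character tm/copyright/registered marks and
    vulgar fractions that throw off a fixed-width import spec."""
    # Hypothetical mapping -- the real set depends on the actual file.
    replacements = {
        "\u2122": "(tm)",  # trademark sign
        "\u00a9": "(c)",   # copyright sign
        "\u00ae": "(r)",   # registered sign
    }
    out = []
    for ch in line:
        if ch in replacements:
            out.append(replacements[ch])
        elif unicodedata.category(ch) == "No":
            # Single-character fractions like the 1/2 glyph: decompose to
            # "1<fraction slash>2", then swap in a plain ASCII slash.
            out.append(unicodedata.normalize("NFKD", ch).replace("\u2044", "/"))
        else:
            out.append(ch)
    return "".join(out)
```

Running the whole file through something like this first would at least make each line a consistent width again before the import step.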
I’ve also tried using line input and then splitting out the line on the delimiter and then looping through the array to add a new record via recordset.
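For reference, the split-and-loop approach I described works roughly like this (again shown in Python for brevity; my real code is VBA adding records via a recordset, and the pipe delimiter here is just a stand-in for whatever the file actually uses):

```python
def parse_lines(lines, delimiter="|", field_count=48):
    """Split each line on the delimiter and keep only rows with the
    expected 48 fields, which skips page headers/footers."""
    records = []
    for line in lines:
        fields = line.rstrip("\n").split(delimiter)
        if len(fields) == field_count:
            # Every field stays a string -- no type conversion yet.
            records.append(fields)
    return records
```

In the VBA version, each element of the split array is assigned to the matching recordset field inside the loop, which is the part that seems to be slow.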
Data is proprietary, so I cannot provide an actual file for reference.
My most recent attempt was to pull the file into a temp table with only one column and then use a wrapper around the Split function, to see if it would be faster than looping through the array in VBA and setting .Fields(i) = splitArray(i).
It was much slower.
Right now, the fastest I have been able to read/parse is about 10 records per second. That's way too slow, as I need to pull in files with 100k+ records. And I am not doing any conversion — every field is coming in as a string.
Thanks for reading this far and also for any suggestions on how to speed this process up.