05-27-2014 03:05 PM
Hi, I've been using this algorithm for some time in CVI 2012, but now in CVI 2013, I have found that ScanFile is not working the same for me.
I'm unclear whether this is a CVI 2013 issue, or just a dormant bug that I had never witnessed until now.
I'm scanning through an ASCII tab-delimited log file. Some "columns" are text, others are numeric. I prefer to scan through the columns in a non-rigid way; i.e., I don't strictly follow the column order as it exists in the log file. Here's what it looks like:
for (i = 0; i < LOG_MAX_COLS; i++)
{
    switch (i)
    {
        case LOG_COL_ID:
            ScanFile (fileHandle, "%s>%i[x]", &unitData[slot].id);
            break;
        case LOG_COL_SERIAL:
            ScanFile (fileHandle, "%s>%s[xt09]", unitData[slot].serial);    // discard tab (ASCII 0d09)
            break;
        case LOG_COL_REVISION:
            ScanFile (fileHandle, "%s>%s[xt09]", unitData[slot].revision);  // discard tab (ASCII 0d09)
            break;
        case LOG_COL_STAGE:
            ScanFile (fileHandle, "%s>%i[x]", &unitData[slot].logStage);
            break;
        case LOG_COL_RETRIES:
            ScanFile (fileHandle, "%s>%i[x]", &unitData[slot].retry);
            break;
        case LOG_COL_RUNNING:
            ScanFile (fileHandle, "%s>%i[x]", &unitData[slot].running);
            break;
        case LOG_COL_COMPLETE:
            ScanFile (fileHandle, "%s>%i[x]", &unitData[slot].complete);
            break;
        case LOG_COL_ELAPSED:
            ScanFile (fileHandle, "%s>%s[xt09]", elapsedString);            // discard tab (ASCII 0d09)
            break;

        /* Now fill a placeholder string with the remaining fields
           so that the file pointer stays in sync. */
        case LOG_COL_DATE:
        case LOG_COL_TIME:
        case LOG_COL_VOLTAGE:
        case LOG_COL_CURRENT:
        case LOG_COL_STATION:
        case LOG_COL_SOFTWARE:
        case LOG_COL_SLOT:
        case LOG_COL_DESC:
        case LOG_COL_COMMENT:
            ScanFile (fileHandle, "%s>%s[xt09]", tempString);               // discard tab (ASCII 0d09)
            break;
    }
}
The behavior I'm seeing now is that any zeros in the numeric columns of my log file get skipped until a non-zero value is found. Suggestions?
05-28-2014 04:41 AM
Can you also send a sample line from the file and tell us what you expected from parsing that line and what you got instead?
I think you complain about the behaviour for "%s>%i" type of conversions, am I right?
05-28-2014 07:20 AM - edited 05-28-2014 07:22 AM
Ah, of course, that would have helped huh? 😉
Here is a sample log file. The last four columns in particular are giving me problems. You'll see that last column is really just a line number that increments for each line. The previous three in this case are zero. It's the zeros that aren't being read. And you are right. It's my %s>%i that's not working.
I should add that for me, it's not really clear how the Scan set of functions work. It's not documented how they parse through a file/buffer. I assume there's some sort of file pointer that remembers where it left off? So with each successive call, you plow through a line/file/buffer? At least that's how I've been using it.
05-28-2014 02:48 PM
Ok, I found the bug. It was an insidious little bugger!
It turned out I had a blank value for one of the columns shortly after creation of the log file. Thus, there was a double delimiter (two tabs) and apparently ScanFile was stopping after the first tab and throwing off the rest of the reads.
So it's safest to put a non-empty string or value into every column, unless there's a more robust way to Scan the line.
05-28-2014 11:05 PM
I wouldn't claim it's 'safer', but I prefer using commas as the delimiter - I can recognize them more easily than I can distinguish spaces from tabs...
05-29-2014 07:23 AM
I suspect you'd have the same issue with CSV files. In other words, if I had this:
0123,Monday,01-23-2014,,5.3625,39.99,14%
My ScanFile would still choke at that double delimiter.
The reason I went with a non-printable was that a lot of my text fields contained commas. It was tricky and my brain hurt too much to figure out the regex involved in scanning those fields.