09-17-2020 12:03 AM
Kevin,
Thank you for the example. After sorting through the information provided in this post, I have eliminated the ever-growing feedback node. Also, as you pointed out, I changed the %f's to %d's because I am expecting signed integers rather than floating-point numbers. I do have one question and some feedback on the latest VI you shared. The format string you supplied looks like the following:
"\02{"type":\s"log",\s"msg":\s"%u\s%f\s%f\s%f\s%f "
I understand that \02 is the STX character, however you didn't put a '\s' after it. Does a space naturally follow the STX character after it is invoked? Also, could all the '\s' characters be replaced by spaces and the format string retain the same value?
Lastly, after running the example VI you provided, it only seems to work on the first iteration and no subsequent iterations. Also, it stops short of the 100 iterations it is set to by default, always running exactly 54 times. What do you think the problem could be?
09-17-2020 04:11 AM
@damianhoward wrote:
I understand that \02 is the STX character, however you didn't put a '\s' after it. Does a space naturally follow the STX character after it is invoked? Also, could all the '\s' characters be replaced by spaces and the format string retain the same value?
In "\ Codes" display, \s means a space character (0x20). There isn't a \s after the \02 because your strings don't have a space after the STX character.
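A quick way to convince yourself of this: in LabVIEW's '\' Codes display, \s is merely how a space byte (0x20) is rendered, so a \s in the display and a literal space in the underlying string are the same character. A small Python check (the frame payload below is a made-up stand-in, not the actual string from this thread):

```python
# A "\s" shown in '\' Codes display is the byte 0x20, i.e. exactly a space.
assert "\x20" == " "

# Hypothetical frame: STX (\02) immediately followed by the payload,
# with no space in between -- matching the format string in question.
frame = "\x02" + "payload"
assert frame[0] == "\x02"   # leading STX
assert frame[1] != " "      # no space follows the STX
```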
@damianhoward wrote:
Lastly, after running the example VI you provided, it only seems to work on the first iteration and no subsequent iterations. Also, it stops short of the 100 iterations it is set to by default, always running exactly 54 times. What do you think the problem could be?
According to your VI, you got a buffer overrun error. How quickly is this data coming in? You may be running into timing issues due to the file IO. Consider using a Producer/Consumer setup so that your file IO can run independently of your data reading. This would allow you to read the port as quickly as possible while still logging all of it.
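The Producer/Consumer idea, sketched in Python rather than LabVIEW (the queue names and frame contents are made up for illustration): one loop does nothing but read and enqueue, while a second loop dequeues at its own pace and does the slower file IO.

```python
import queue
import threading

def producer(frames, q):
    """Read loop: grab data as fast as possible and enqueue it; no file IO here."""
    for frame in frames:        # stands in for repeated VISA Reads
        q.put(frame)
    q.put(None)                 # sentinel: producer is done

def consumer(q, log):
    """Logging loop: dequeue independently and do the slow writes."""
    while True:
        item = q.get()
        if item is None:
            break
        log.append(item)        # stands in for Write to Text File

frames = [f"frame {i}" for i in range(100)]
q = queue.Queue()
log = []
t_read = threading.Thread(target=producer, args=(frames, q))
t_log = threading.Thread(target=consumer, args=(q, log))
t_read.start(); t_log.start()
t_read.join(); t_log.join()
assert len(log) == 100          # nothing dropped, even if logging lags behind
```

The point is that the read loop never blocks on disk writes, which is what lets it keep up with the serial buffer.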
Also, you should try doing a read before your loop to sync up with the incoming messages. It is very likely your first message read is incomplete, since you started reading in the middle of a message.
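That pre-read amounts to throwing away everything before the first STX so that subsequent reads start on a frame boundary. A hedged Python sketch of the idea (the function name and sample bytes are mine, not from the VI):

```python
STX = b"\x02"

def discard_partial(raw: bytes) -> bytes:
    """Drop any leading torn frame: keep data only from the first STX onward."""
    i = raw.find(STX)
    return raw[i:] if i >= 0 else b""

# The first read landed mid-message; everything before the STX is a torn frame.
first_read = b"tail of an old frame\x02start of a complete frame"
synced = discard_partial(first_read)
assert synced.startswith(STX)   # now aligned to a frame boundary
```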
09-17-2020 07:04 AM
I made a few quick mods to the code to demonstrate the Producer/Consumer pattern in an attempt to tighten up the VISA Read loop and avoid serial buffer overflow errors.
This is still debug/troubleshooting code. The pre-read and discard suggestion from crossrulz is a good one to put into final code. I didn't do it here because I put priority on logging *everything*, even the 1st partial data frame, during this troubleshooting phase.
-Kevin P
09-17-2020 07:48 AM
@Kevin_Price wrote:
This is still debug/troubleshooting code. The pre-read and discard suggestion from crossrulz is a good one to put into final code. I didn't do it here because I put priority on logging *everything*, even the 1st partial data frame, during this troubleshooting phase.
If you really want to prioritize logging *everything*, then don't let the producer destroy the queue: the consumer should. Instead, have the producer send a message telling the consumer loop to stop, such as "STOP" or an empty string. The consumer can detect that, stop the loop, and then close the file and destroy the queue.
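A minimal Python rendering of that shutdown handshake (the "STOP" sentinel and all names are illustrative): the producer only sends; the consumer detects the sentinel, finishes everything queued ahead of it, and owns the teardown.

```python
import queue
import threading

def producer(q, n):
    for i in range(n):
        q.put(f"line {i}")
    q.put("STOP")               # ask the consumer to stop; do NOT release the queue here

def consumer(q, log):
    while True:
        item = q.get()
        if item == "STOP":
            break               # every element queued before STOP has been handled
        log.append(item)
    # The consumer owns teardown: this is where you would close the
    # log file and release the queue.

q = queue.Queue()
log = []
p = threading.Thread(target=producer, args=(q, 1000))
c = threading.Thread(target=consumer, args=(q, log))
p.start(); c.start()
p.join(); c.join()
assert len(log) == 1000         # no backlogged strings lost at shutdown
```

Because the sentinel sits behind every real element in the FIFO, nothing enqueued before the stop request can be lost.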
09-17-2020 09:17 AM - edited 09-17-2020 09:17 AM
Good point. Having the consumer release the queue is how I'd do it in real life. I took a shortcut here for simplicity, but it's true that my posted code could terminate with some strings still backlogged in the queue, unwritten.
But it remains a good point of emphasis to let consumers be in charge of releasing the queues they consume from.
-Kevin P
09-17-2020 01:24 PM
My data is coming every 14 ms from the device. Again, thank you for clarifying that the STX character is the leading character without any spaces; in raw text format that looks to be the case. As I began analyzing this producer/consumer architecture, a question came to mind. Right now in Kevin's 3.0 VI, only the data from the device is being dequeued. What if I had a third loop that produced data, like a position sensor or accelerometer? If I wanted to add the new incoming data source to the .csv I want to save, would I create another queue and then concatenate in the consumer loop? Or would it be faster to just add it in the producer loop through a stream writer? My ultimate goal is to have both data sources synchronized as closely as possible in time. Thank you guys again for all your help on this.
09-17-2020 04:14 PM
@damianhoward wrote:
What if I had a third loop that produced data, like a position sensor or accelerometer? If I wanted to add the new incoming data source to the .csv I want to save, would I create another queue and then concatenate in the consumer loop? Or would it be faster to just add it in the producer loop through a stream writer? My ultimate goal is to have both data sources synchronized as closely as possible in time.
Just use the same queue to enqueue all of the elements to be logged. To do this, make your queue type a cluster, so that one element states what the data is and another element holds the actual data.
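In Python terms, that "cluster" becomes a small tuple of (source tag, timestamp, data) pushed onto the one shared queue; the tag tells the consumer what it just dequeued. The tags and payloads below are illustrative, not from the actual VI:

```python
import queue
import time

q = queue.Queue()

# Two producers sharing one queue, each tagging its own data.
q.put(("serial", time.time(), (7, 1.0, 2.0, 3.0, 4.0)))   # device frame
q.put(("accel",  time.time(), (0.01, -0.02, 9.81)))       # hypothetical sensor
q.put(None)                                               # end of data

rows = []
while True:
    item = q.get()
    if item is None:
        break
    source, t, data = item                                # unpack the "cluster"
    rows.append(f"{t:.3f},{source}," + ",".join(str(v) for v in data))

assert any(r.split(",")[1] == "serial" for r in rows)
assert any(r.split(",")[1] == "accel" for r in rows)
```

One consumer, one file handle, no contention: the tag field is what lets a single logging loop handle any number of producers.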
09-17-2020 07:16 PM
Two independent asynchronous data sources being logged to a CSV file puts you right on the verge of a can of worms. It can be done, of course, but now's the time to think carefully about several things:
1. Will you write both pieces of data whenever you receive an update for one of them? Or will some other scheme govern when you choose to write a "line" to your CSV file?
2. How will you track timing of the new data? It's best to attach your timestamps as near to the source as possible (i.e., in one of the producer loops).
3. Can you consider a different file format such as TDMS which better supports different data channels at different rates?
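One common answer to questions 1 and 2, sketched in Python under my own assumptions: stamp each sample in its producer loop, and have the consumer write a full CSV row whenever either source updates, repeating the last known value of the other source.

```python
import time

latest = {"serial": None, "accel": None}   # last known value per source
rows = []

def handle(tag, t, value):
    """Consumer side: update one source, emit a row once both have reported."""
    latest[tag] = value
    if all(v is not None for v in latest.values()):
        rows.append((round(t, 3), latest["serial"], latest["accel"]))

# Timestamps are taken in the producer loops, as close to acquisition as possible.
handle("serial", time.time(), (7, 1.0, 2.0, 3.0, 4.0))
handle("accel",  time.time(), (0.01, -0.02, 9.81))   # first complete row
handle("serial", time.time(), (8, 1.1, 2.1, 3.1, 4.1))

assert len(rows) == 2   # no row is written until both sources have reported once
```

This is only one policy; TDMS sidesteps the whole question by letting each channel keep its own rate and timestamps.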
-Kevin P