LabVIEW is reading from a serial port where 90 bytes are being sent every 2 msec.
The last byte in each set of 90 bytes increments by 1.
So 0,1,2,..., 255, 0, 1, 2..
The current program waits long enough to read 540 bytes.
It then searches those bytes for the incrementing counter.
Is there a better way of doing this?
Post the code as it is today, as well as a large data set that represents a typical stream of bytes read via the serial port.
Since the last byte increments, the normal "look for a termination character" approach will not work, so you will have to continually read the byte stream and analyze it in the context of multiple messages; and "God help us" if a message is dropped and we have to handle the recovery.
Quick question before we wander off into the weeds...
Do you control the protocol, and can you fix it before you have to develop the interface?
This is a lot like SDLC on the surface (does it use bit-stuffing?)
The serial data is coming from a custom FPGA board. No flow control.
The board, as it is running, is sending information through the serial port.
The "gap" between the messages is 2 msec.
The LabVIEW program's job is to parse this flow of bytes and have LEDs on the front panel be on/off depending on the bits in the bytes.
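The LED part itself is simple bit masking once a frame is in hand. LabVIEW is graphical, but the per-byte logic can be sketched in a text language; this Python stand-in (the function name is invented for illustration) maps one status byte to eight front-panel LED booleans:

```python
def byte_to_led_states(b):
    """Map one status byte to 8 boolean LED states (bit 0 drives LED 0)."""
    return [bool((b >> i) & 1) for i in range(8)]
```

In LabVIEW the equivalent would be a Number To Boolean Array node wired to a Boolean array indicator.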
Post your code and we will tell you if there are better ways to do what you are trying to do. Having a data set would also be helpful.
You were asked that before.
No dataset; capturing and saving these bytes is my job.
How do you know what to catch if you don't have an example of what to expect?
The only thing I know is that the 90th byte of every message increments by 1 so 0, 1, 2, 3 ..., 255 then rolls over to 0.
All the other bytes can be anything.
And there is 2 msec between every message.
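Given only those facts, a framing check reduces to: pick a candidate offset, sample every 90th byte from there, and test whether those samples increment mod 256. A rough Python sketch of that test (names are invented for illustration, not the poster's actual code):

```python
FRAME_LEN = 90  # bytes per message, per the spec above

def frames_in_sync(data, start):
    """Return True if 90-byte frames starting at `start` carry a
    counter in their last byte that increments mod 256."""
    counters = []
    pos = start + FRAME_LEN - 1   # index of the last byte of the first frame
    while pos < len(data):
        counters.append(data[pos])
        pos += FRAME_LEN
    # every adjacent pair must differ by exactly 1, with rollover at 255
    return all((b - a) % 256 == 1 for a, b in zip(counters, counters[1:]))
```

The mod-256 difference handles the 255-to-0 rollover without a special case.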
The most important "better way" to do this is to do THAT instead.
Meaning, follow the question Ben asked earlier about controlling (and changing) the protocol on the sending side. Are you able to control when to start or stop the serial stream from the FPGA? If not, it's worth some considerable effort to try to get that protocol changed. Otherwise it's gonna take even more considerable effort to develop a robust receiving side.
If nothing about the FPGA serial stream can be changed, the basic idea of your post-processing seems roughly reasonable, but is also rather fragile with all the hard-coded constants. For example, how *certain* are you that the packet size will forever and always be exactly 90 bytes?
You should also think about how you're going to recover if you get your 90-byte frames out of sync again. I'd suggest you maintain a recent history buffer because that could help minimize (possibly prevent) data loss while re-syncing.
Sorry, out of time now.
The only thing that can be changed is the LabVIEW code.
The hardware is set in stone. They have been making this product for many years.
I will set aside the comments about the protocol and the fragility of the data stream possibly screwing with the framing, and talk through an approach that I would consider...
Let's start with the idea that the receiving LV code is up and listening before the widget starts spewing the data stream. In that case we would be able to see complete frames at startup.
We could start pulling in bytes and placing them in a 2D array with the dimensions declared by your spec.
After we get a complete set, with the count incrementing from 0 through 255, we would be able to see an array whose far right column is 0-255.
Since the pattern 0-255 is fixed, we could index out the last column and compare it to a constant array of 0-255. When that condition is met we can say that we are synchronized, and can start watching for the 90-byte messages, using the ultimate byte as a "sanity check" that each incoming frame is valid.
So a simple approach would read a byte at a time, stuff it at the end of an array that is then reshaped as a 2D array, index out the last column, and compare it with the 0-255 pattern. If the check passes, we are synchronized. If not, read another byte and repeat the check. Whenever the buffer grows larger than 256 x 90, toss the oldest byte.
So I suggest you look at:
Maintaining an on-going buffer (a shift register is good for this)
Limiting the size of your buffer (256 x 90 is all you need to cache, since the counter cycles through 256 values)
Reshaping your array into a 256 x 90
Indexing out column index 89
Comparing aggregates with an array of 0-255
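LabVIEW is graphical, so I can't post the diagram here, but the loop above can be sketched in Python as a stand-in for the shift-register loop (the `read_byte` callback and function names are invented for illustration; a `deque` with a maximum length plays the role of the bounded buffer):

```python
from collections import deque

FRAME_LEN = 90                      # bytes per message
NUM_FRAMES = 256                    # one full counter cycle, 0 through 255
BUF_LEN = FRAME_LEN * NUM_FRAMES
EXPECTED = list(range(256))         # the fixed 0-255 pattern

def try_sync(read_byte):
    """Read one byte at a time into a bounded buffer; after each byte,
    treat the buffer as NUM_FRAMES x FRAME_LEN and compare the last
    column against 0..255.  Returns the synchronized buffer contents."""
    buf = deque(maxlen=BUF_LEN)     # oldest byte is discarded automatically
    while True:
        buf.append(read_byte())
        if len(buf) < BUF_LEN:
            continue                # not enough data to frame yet
        data = list(buf)
        # "last column" of the 256 x 90 reshape: every 90th byte
        last_col = data[FRAME_LEN - 1::FRAME_LEN]
        if last_col == EXPECTED:
            return data             # synchronized
```

Each extra byte appended effectively shifts the candidate framing offset by one, so the loop slides through all alignments until the counter column lines up.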
Now you could try to cheat and only look at a subset of the 256 x 90 array, but that would require the code to try out a bunch of possibilities (0-4, or is it 6-10, or ...), and it would also open us up to an incrementing data value elsewhere in the payload (like a simple counter) tricking the framing logic.