
Asynchronous serial input with an sbRIO FPGA

Background:

 

As part of my capstone project, I'm trying to read data transmitted serially from an IMU. The host is an sbRIO 9602.

 

As far as I'm aware, the protocol is not exactly standard: data is sent asynchronously in packets. Each packet consists of 12+ bytes in immediate sequence, each having a start and stop bit, and then the line goes idle [high] until the next packet. Each data byte is preceded by a frame bit, and only contains 7 bits of actual data, so the packet has to be further processed to get actual useful data.
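To make the framing concrete, here is a small Python sketch (illustrative only, since the actual implementation is a LabVIEW VI) of decoding one such byte from sampled line levels. The bit order, start/stop polarity, and the frame bit's position are assumptions for illustration, not details confirmed by the IMU's documentation:

```python
def decode_frame(bits):
    """bits: 10 sampled line levels [start, b0..b7, stop] -> (frame_bit, data7).

    Assumptions (hypothetical): bits arrive LSB-first, start bit is 0,
    stop bit is 1, and the frame bit is the MSB of the 8-bit payload,
    leaving 7 bits of actual data per byte.
    """
    if bits[0] != 0 or bits[9] != 1:
        raise ValueError("bad start/stop bit")
    byte = 0
    for i, b in enumerate(bits[1:9]):   # assemble the byte, LSB first
        byte |= b << i
    frame_bit = byte >> 7               # assumed frame-bit position
    data7 = byte & 0x7F                 # 7 bits of actual data
    return frame_bit, data7
```

With this framing, each 10-bit chunk yields one frame bit plus 7 payload bits, which is why the packet needs further processing before the data is useful.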

 

I've seen the FPGA serial I/O example program floating around. The code I inherited was actually based on it, but it's overly complex for this application, and I'm not convinced it would work for this protocol at all; I couldn't get it to work, at any rate. I rewrote the sampling code in its entirety twice trying to get it to work, but haven't made much progress. On the bright side, the current VI is much simpler and more straightforward than my first attempt...

 

 

The problem:

 

I can read the first 70 or so bits of a packet fine, then the program skips a bit. That throws off the start/stop bits and basically renders everything after it meaningless. In this screenshot the data is as read in, in order from top to bottom:

 

 

I'm fairly certain this means my sampling interval isn't perfect [the failure point suggests it's about 1.4% too long], but I'm totally stumped on how to avoid it. What's worse, this is actually on the lowest possible output setting from the IMU, 2400 baud. Realistically, we're hoping to have it working at either 230.4k or 460.8k baud.

 

The prior version of my code read the packet in 10-bit [1 byte] chunks, processing each one before reading the next. I encountered exactly the same error. I assumed that having too much code in the sampling process was throwing off the timing, so I changed it to read the entire packet into a bit array and then process it afterward [while no data is coming in]. I've attached this version of the code for reference. It's cleaner, but no change as far as the error is concerned.

 

Am I correct in my evaluation, or is there something else going on? More to the point, is there a way of fixing or working around the problem so that I can get reliable samples [even at 100-200x the bit rate]?
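For what it's worth, the arithmetic behind the 1.4% estimate can be sketched as follows (Python for illustration, since the actual code is a VI). A constant fractional error in the sampling interval accumulates linearly, so a full-bit slip after roughly 70 bits implies an error of about 1/70:

```python
def bits_until_slip(fractional_error, slip=1.0):
    """Bits received before a constant per-bit timing error accumulates
    to `slip` bit periods (1.0 = a full skipped/doubled bit)."""
    return slip / fractional_error

# A 1.4% interval error slips a full bit after about 71 bits,
# consistent with the ~70-bit failure point described above.
bits_until_slip(0.014)
```

Note that sampling drifts off the bit center (a half-bit error) well before a full bit is skipped, so marginal reads can appear even earlier than the full-slip point.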

 

 

As an aside, I've only been working with LabVIEW for a couple weeks; please tell me if I'm using poor habits or doing anything incorrectly.

 

 

Any help will be immensely appreciated. Thank you.

Message 1 of 7

Hi Ryan,

 

It's not obvious from your description or your code what's causing you to read the information incorrectly.  Perhaps it has something to do with the delay you've placed in your second loop?  Regardless, my best advice would be to look at some existing code for FPGA serial communication and go from there.  You can find a Developer Zone article about RS-232/RS-422/RS-485 on an FPGA target here.  You can find an example of serial communication using an FPGA here.

 

David A

Message 2 of 7

Hi Ryan,

 

Do you have this program in a version compatible with LabVIEW 2010? If you have it and can post it here, I would be grateful... it seems very interesting and I want to take a look at the implementation...

 

Thanks !

Message 3 of 7

Hello Xenopol,

 

You can actually request the conversion of a VI to a different LabVIEW version at the Version Conversion discussion board.  Regardless, I've gone ahead and saved the VI that Ryan posted to 2010.

 

David A

Message 4 of 7

Thanks for the input, guys.

 

A somewhat-belated update: I was able to work around the problem to an extent by tweaking the delay between bit samples. At 2400 baud, the delay should have been 40 MHz / 2400 = 16,667 clock ticks; experimentally, 16,430 was much more accurate. At 57.6k baud, it should have been 694 ticks; using 679 ticks instead removed the bit error.
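The empirical tick counts above also imply the transmitter's actual rate. A quick sanity check (Python for illustration; the real code is a LabVIEW FPGA VI):

```python
FPGA_CLOCK = 40_000_000  # 40 MHz FPGA clock

def ticks_per_bit(baud):
    """Nominal FPGA clock ticks per bit period at a given baud rate."""
    return FPGA_CLOCK / baud

nominal = ticks_per_bit(2400)            # ~16,667 ticks, as expected
effective_baud = FPGA_CLOCK / 16430      # what 16,430 ticks/bit implies
error = (effective_baud - 2400) / 2400   # ~1.4%: the IMU runs slightly fast
```

In other words, the working tick count corresponds to a source running about 1.4% faster than its nominal 2400 baud, which matches the earlier estimate from the ~70-bit failure point.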

 

This method seems to work effectively enough, but it's not flawless. At higher transmission rates, we don't have enough resolution to work with; a single tick has too much of an impact to help. We got it working at 115.2k baud, but if we go any higher [sender can do 921.6k] we lose a bit around 20 bytes into the packet, and we can't work around that. [The longest packet is 24 bytes.]

 

In short, we got it working at a slower transmission rate, but the method is less than ideal. I still don't know why the problem occurred in the first place. Any insight or suggestions for improving the code are, of course, still welcome.

Message 5 of 7

Hi Ryan,

 

I have a suggested methodology, but I don't currently have any example code to share that would get you started.

 

The challenge is that even if you sample at exactly the right baud rate for your incoming signal, the phase of the FPGA clock will not exactly match the source signal.  On top of that, your sample frequency and the source's baud rate will always be slightly different, so you get the sampling drift you described, where data is eventually clocked in wrong.  On short transmissions this may not be a problem, because the sampling can be re-aligned with each start bit, but for long, continuous streaming it eventually fails as the sampling and source clocks drift out of phase.

 

I would suggest over-sampling the DIO line, using a debounce filter if necessary, and using the measured time between edge detections to continuously adjust your sampling period and phase, keeping your sampling aligned with the incoming data.

 

The LabVIEW code I imagine would be a state machine inside a single-cycle timed loop.  Essentially, the state machine would detect edges that occur near the baud rate you expect to receive, and then adjust the sampling period to ensure you are sampling the data in between transitions, while the incoming waveform is stable.

 

With this method running at 40 MHz, you would have roughly 43 clock ticks (potential sample points) per bit period at 921.6 kbps, so you should be able to pull out the right samples at the right time in the waveform.
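The oversample-and-resync idea can be sketched in Python (host-side pseudocode only; the real thing would be a state machine in a single-cycle timed loop, as described above). This minimal version re-aligns on the start-bit edge of each byte and samples mid-bit, which already bounds drift to within one frame; Spex's suggestion goes further by continuously adjusting the period from measured edge spacing:

```python
def receive_byte(samples, oversample):
    """samples: line levels captured `oversample` times per bit period.
    Finds the falling edge of the start bit, then samples each of the
    8 data bits at its midpoint. Returns the 8 data bits in arrival order."""
    # locate the idle-to-start falling edge (1 -> 0)
    i = 0
    while i + 1 < len(samples) and not (samples[i] == 1 and samples[i + 1] == 0):
        i += 1
    edge = i + 1                                   # first sample of the start bit
    bits = []
    for n in range(1, 9):                          # skip start bit, read 8 data bits
        mid = edge + n * oversample + oversample // 2   # mid-bit sample point
        bits.append(samples[mid])
    return bits
```

Because the sample point is recomputed from the most recent edge, a small clock mismatch can no longer accumulate across the whole packet, only across one 10-bit frame.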

 

Hope this helps, and if I find a good example of this, I'll send it your way.

 

Cheers,

Spex
National Instruments

To the pessimist, the glass is half empty; to the optimist, the glass is half full; to the engineer, the glass has a 2x safety factor...
Message 6 of 7

Thanks, Spex. That sounds a bit complicated to implement, but it does make a good deal of sense.

Message 7 of 7