I've got a motor with an RS-422 serial interface. It can work at various baud rates from 9600 to 115200. I've used this motor successfully at 9600 and at 115200 when connected to a PC in regular LabVIEW. In cRIO, I've gotten it to work at 9600. Then I sent the command that switches it to 115200 and saved that setting. I recompiled my FPGA code with the 115200 baud rate and ran it. The odd thing is, it can read from the motor, but the motor won't recognize any commands written to it. I can tell because when I send it a series of commands, it clicks the way it does when it resets, then sends its usual string of boot-up info. My interpretation is that the transmitted data is going out too slowly, so the motor sees serial breaks between characters, causing the resets.
I tested this by making a simple program that sends 15 predetermined characters to the motor in an FPGA loop, with no data from the RT. The character array is predefined in the FPGA code; the loop just stuffs the characters into the 422 port. If I add a delay to the loop greater than about 100 us, I get the clicking/resetting. Makes sense: at 115200 baud, a break should be about 100 us. But with lower delays or no delay, the motor still doesn't respond. I can tell because the 15-character string is the command to set it back to 9600 baud, yet after I reboot everything, my old program can still read the startup info at 115200 baud.
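As a sanity check on that timing, here's the character-time math I'm assuming (a quick Python sketch; the 10-bit frame assumes 8N1 framing, which may not match the motor's actual settings):

```python
# UART character timing sanity check (assumes 8N1: 1 start + 8 data + 1 stop = 10 bits).
def char_time_us(baud, bits_per_frame=10):
    """Duration of one serial character on the wire, in microseconds."""
    return bits_per_frame / baud * 1e6

print(char_time_us(115200))  # ~86.8 us per character at 115200 baud
print(char_time_us(9600))    # ~1041.7 us per character at 9600 baud
```

So a 100 us loop delay is indeed right around one character time at 115200, which is why the break theory seemed plausible at first.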
Has anyone used the 9871 at 115200 baud? Are there any tricks I should know about?
Oh, the RT/FPGA interface I'm using is the interrupt-driven one from the examples, not the DMA. I could believe that the interrupts could be slow enough that the motor would see breaks, but in my small test program where the FPGA has a preprogrammed string (er, array of bytes), the interrupt timing is irrelevant. But it still doesn't work...
Sorry for the long post, but I just realized that my whole serial break theory is worthless. A break is only a break if the line is held in the break state longer than a character frame; between characters the line just idles. The motor can't care how long the gap between characters is, right? The 9871 should send out a complete character whenever I give it one U8 byte, so the timing between bytes shouldn't matter. But somehow it isn't getting valid data, and it appears to be randomly resetting...
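To spell out the framing assumption behind that (a minimal model in Python, not anything the 9871 actually runs; assumes 8N1, LSB-first like a standard UART):

```python
def frame_8n1(byte):
    """Model an 8N1 UART frame: start bit (0), 8 data bits LSB-first, stop bit (1).
    The line idles high (mark) between frames, so a gap between characters just
    looks like an extended stop bit -- a break requires the line held low
    for longer than a whole frame."""
    data_bits = [(byte >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]

print(frame_8n1(0x41))  # 'A' -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

Each byte carries its own start and stop bits, which is exactly why inter-byte delays shouldn't be able to cause a break.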
Any ideas would be greatly appreciated.
I don't have an immediate answer as to why communication isn't working in both directions at 115200. For now, I have a couple of questions. Are you running the FPGA in Scan Mode? Also, have you tried using the example in the example finder for the module to do a pure loopback test at this baud rate?
No, I'm not running in scan mode. Direct FPGA programming.
I was looking at the example in the example finder. I am using LabVIEW 8.5, which I don't have access to right now. In 2010, the example is a loop-back program. In 8.5, I've only been looking at the FPGA code. I never opened up the RT code to notice if it was implemented as a loop-back program. That would be a good thing to try! I'll have to make a loop-back adapter. Also, I'll be unavailable for about a week, so it will be a while before I get back to this. If you have any more ideas in the meantime, though...
I am experiencing a similar and possibly related issue with an NI-9871. For the past several weeks I have been troubleshooting what I thought was a problem with a piece of vendor equipment. We have been using the exact same cRIO system with older variants of this vendor unit, but at 19.2 kbps; this new unit can run at 115.2 kbps and send data at a higher rate. The problem manifests after the unit has been running for several minutes: messages start dropping out, and inspection of the raw data reveals missing characters and runt messages. The dropouts are cyclic and come and go as the unit runs. I had been operating under the assumption that the vendor unit was at fault until I noticed that the system always works at startup. I added a call to periodically reset the baud rate of the 9871 channel I am using, and lo and behold, the problem goes away. The only thing I can figure is that the clock in the UART of the NI-9871 must be derived from an FPGA line and must be drifting; it is likely the clock is calibrated when the call to set the baud rate is made. I am using NI-RIO 3.6 with LabVIEW 2010 SP1.
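As a back-of-the-envelope check on that theory (a Python sketch; the 40 MHz timebase and integer-divider scheme are my assumptions about how the baud clock might be derived, not anything from NI documentation):

```python
def baud_error_pct(base_clock_hz, target_baud):
    """Baud rate error from rounding an integer clock divider, as a percent."""
    divider = round(base_clock_hz / target_baud)
    actual_baud = base_clock_hz / divider
    return abs(actual_baud - target_baud) / target_baud * 100

# Assuming a 40 MHz FPGA-derived timebase (an assumption on my part):
for baud in (9600, 19200, 115200):
    print(baud, round(baud_error_pct(40e6, baud), 3))
# -> roughly 0.008%, 0.016%, 0.064% -- all far below the ~2% total
#    mismatch a UART receiver can typically tolerate.
```

Since simple divider rounding error is tiny even at 115.2 kbps, a static mis-set clock shouldn't cause this; something actually drifting over time, recalibrated on each baud-rate set, fits what I'm seeing much better.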
On Monday I am going to put a scope on the data lines; I suspect I will see that the ones and zeros have the incorrect period for 115.2 kbps and that they change or jitter as the unit runs.
Has anybody seen this problem before?
To be clear, I have also seen the problem in the other direction, as in the original post: I send commands and they are not recognized by the receiving system. We are also not using Scan Mode; we are using a DMA stream. During troubleshooting I added a capability to clear the RS-422 buffer (a method on the NI-9871), and that by itself has no effect. Only periodically setting (or resetting, as the case may be) the UART baud settings seems to fix the problem. It also does not have to be done frequently: in my case I have 32-byte messages at ~25 Hz, and sending a reset every 4 seconds keeps things happy. This is actually a pretty serious bug, IMHO!
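For scale, the bus load here is low, so the dropouts aren't a throughput problem (a quick Python check; the 10-bits-per-byte figure assumes 8N1 framing, and the 4 s reset interval is just what happened to work for me):

```python
def bus_load_pct(msg_bytes, msg_rate_hz, baud, bits_per_byte=10):
    """Fraction of serial line capacity used, assuming 8N1 (10 bits on the wire per byte)."""
    return msg_bytes * bits_per_byte * msg_rate_hz / baud * 100

print(round(bus_load_pct(32, 25, 115200), 1))  # 32-byte messages at 25 Hz -> ~6.9% of 115.2 kbps

# Messages successfully carried between baud-rate resets at a 4 s interval:
print(4 * 25)  # 100 messages per reset
```

So the link is under 7% utilized and still drops characters without the periodic reset, which again points at the clock rather than the data rate.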