I have an existing application where I use some analogue signals acquired using an MXI-RIO chassis in a control loop on a PXI RT system. Recently I bought an NI-9149 Ethernet RIO chassis to run a variation of this application. I went with the Ethernet RIO option as the MXI-RIO chassis are very old and marked as 'Mature', with no newer models available (this is annoying, but there we are).
Anyway, I have come to realise that I am unable to get the control loop running at 1 kHz, seemingly no matter how I transfer the data from the FPGA. I have tried reading scalar indicators, array indicators, and DMA FIFOs (which is what I use in the MXI-RIO setup). Whilst the data bandwidth is fine, it doesn't reliably run at 1 kHz - sometimes there are over 6 ms between samples. With the FIFOs I can "catch up", but that doesn't work for the control loop. The control loop is far too complicated for me to run on the FPGA, so what am I supposed to do? Would you expect this behaviour?
This seems like it would be a fairly trivial application. Should I have gone for the EtherCAT RIO chassis? The reason I didn't is that I've used EtherCAT with LabVIEW before, and it's not a great implementation - the EtherCAT Master only implements the basics (there's no way to configure PDOs other than editing the XML ESI files, for instance) and adds a big overhead in having to run the Scan Engine.
Any feedback would be appreciated...
Sounds like you are pretty experienced - so I expect general comments about trying to do real-time control over a non-deterministic Ethernet network down to millisecond sample times will be well understood by you.
I've no real idea if 6 ms is what would be expected over a well-conditioned Ethernet link, and maybe you are at the nominal limit. I am sure you have thought about minimising or stopping any other traffic on the network - whether that is other devices, or cutting the data passed from the chassis down to the bare minimum needed for the control.
What about some fundamental things - have you checked the cabling and any connected switches to make sure the link is negotiating at 100M or maybe 1000M (depending on what the hardware supports), rather than something limiting it to 10M? Perhaps even try a point-to-point (crossover) cable, bypassing any switches. I'd expect there are some low-level settings in the Ethernet adapters (at both ends) that might allow the performance to be maximised - I vaguely remember looking at things like packet sizes.
I also vaguely remember something from a project we were doing involving comms over Ethernet: doing something in LabVIEW to force an Ethernet packet to be sent immediately rather than waiting until the packet was full - I can't remember much about it beyond reading about it (and I suspect it isn't applicable to DMA FIFOs). Might be worth searching on that sort of thing.
You say that the control is too complex to put on the FPGA, but can it be split into a fast inner loop and a slower outer loop? Or can some of the complexity be pre-computed offline (e.g. using a LUT instead of complex maths functions - depends what it is)? Is 1 kHz really needed? Maybe the performance hit wouldn't be big if you ran at 250 Hz.
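To illustrate the LUT idea (this isn't your actual maths, obviously - just a sketch in Python assuming a sine term; table size and input range are placeholders to tune for your accuracy needs):

```python
import math

# Precompute the expensive function once, outside the control loop.
N = 1024                 # assumed table size
X_MAX = 2 * math.pi      # assumed input range [0, 2*pi)
TABLE = [math.sin(i * X_MAX / N) for i in range(N + 1)]

def sin_lut(x):
    """Approximate sin(x) via table lookup with linear interpolation."""
    x = x % X_MAX
    pos = x / X_MAX * N
    i = int(pos)
    frac = pos - i
    return TABLE[i] * (1 - frac) + TABLE[i + 1] * frac
```

On the FPGA the equivalent would be a block-RAM lookup, which replaces a costly maths primitive with one memory read and a multiply-add per sample.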
Hope this helps, but as I say, you may have already thought of these things.
Thanks for taking the time to reply.
I've set up my system with just a direct link between the Ethernet RIO chassis and the RT PXI system. My hope was that with no other traffic on the network, although it wouldn't be deterministic, it would be so fast as to appear so. I guess that was naive.
I guess you're talking about the Nagle algorithm, which can be disabled on TCP connections to stop them waiting for larger packets. I would guess this has already been turned off for the FPGA-to-host comms, but unfortunately it's all hidden, so I can't really see how it works. There must be an abstraction layer that allows the same methods to work over Ethernet, MXI, and whatever bus the cRIO systems use, but it's all hidden.
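For what it's worth, on any TCP connection I open myself I can turn Nagle off with TCP_NODELAY, e.g. in Python (the address below is hypothetical) - but as far as I can tell it doesn't reach into the hidden FPGA transport:

```python
import socket

# Disable the Nagle algorithm on a socket we manage ourselves, so small
# packets are sent immediately instead of being coalesced.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
# sock.connect(("10.0.0.2", 5000))   # hypothetical target address/port
sock.close()
```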
I think the only solution I can make work with this hardware is to slow the control loop down as you say.
Thanks again for your insights,
Another thought - but it depends on what type of control algorithm you are running:
We built an online, real-time simulation that had dynamics - the application that called the point-by-point simulation was supposed to run at a regular rate but didn't, so we formulated the equations in the simulation to cope with variable step sizes. If you have a dynamic controller (e.g. PID) then it should be fairly easy to formulate the equations to cope with a sample time that varies.
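As a sketch of what I mean, here's a PID whose integral and derivative terms use the measured step size rather than an assumed fixed one (Python, placeholder gains, no anti-windup - the same structure carries over to LabVIEW):

```python
import time

class VariableStepPID:
    """Minimal PID whose integral/derivative terms scale with the
    measured step size, so sample-period jitter (e.g. 1-6 ms) does
    not distort the dynamics. Gains are placeholders."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None
        self.prev_time = None

    def update(self, error, now=None):
        if now is None:
            now = time.monotonic()       # measure the actual step size
        dt = 0.0 if self.prev_time is None else now - self.prev_time
        deriv = 0.0
        if dt > 0.0 and self.prev_error is not None:
            self.integral += error * dt              # scaled by real dt
            deriv = (error - self.prev_error) / dt   # scaled by real dt
        self.prev_error = error
        self.prev_time = now
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

The point is simply that dt comes from a clock each iteration, so a late sample contributes a proportionally larger integral step instead of corrupting the accumulated state.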