I'm trying to program an acquisition routine on a cRIO-9047. This cRIO lets you program both in RT and in LabVIEW. The idea is to do the acquisition and send the data to the host PC. I did it in LabVIEW and it seems to work well, but I wonder if this is a less reliable solution than programming it in RT.
You program that cRIO with LabVIEW, both the RT target and the FPGA target.
Why do you want to distinguish between "LabVIEW" and "RT"?
Thanks for your message.
I wonder, given that this cRIO lets you program both ways, what am I losing by doing it this way compared to the other.
given that this cRIO lets you program both ways, what am I losing by doing it this way compared to the other.
Which "two" ways are you referring to?
You program the cRIO using LabVIEW…
I know what I did: I used LabVIEW, I didn't use RT. That's why I'm asking.
Ok, now we are on the same side when we say "use LabVIEW".
But what are you referring to when you write "I want to use RT"?
I call it "RT" when you typically use Timed Loops, RT FIFOs, etc. to do the acquisition instead of using the DAQmx palette. It's probably not the correct terminology... I realise that now. 🙂 But the question is still relevant.
It's probably not the correct terminology... I realise that now.
Yes, that is not the correct terminology...
You either use the Scan Engine or the FPGA when you don't use DAQmx on the RT target!
I call it "RT" when you typically use Timed Loops, RT FIFOs, etc. to do the acquisition instead of using the DAQmx palette 🙂
Your cRIO allows you to use DAQmx: when it's sufficient for all your measurements, then you should stick with it!
When you need something more specialized, you can still use the FPGA-related functions...
Here's the issue -- determinism, that is, being certain that actions take place at known times without the Computer/Operating System "interrupting" and introducing "uncertainties" in the timing. I learned about this with older (pre-PC) computers. One of them had almost a hundred different "interrupts" (whereas the original PC had, if memory serves, 8), so you could say "If so-and-so happens, the computer will start running code to handle that in two instruction cycles".
In the PC, what prevents "determinism" is the Operating System. Windows is doing many things simultaneously, and one never knows when an e-mail is received, a virus scan starts, memory is allocated/deallocated, page-swapping happens, etc. A Real-Time OS (like Linux-RT on RIOs) isn't "listening" to the Internet. A Real-Time OS does things like saying "How would you like to use one Core of this multi-Core processor for this critical Real-Time task?". In a Real-Time OS, it's not that the OS is fast, but that the timing is quite "constant" and "reproducible".
You still need the PC (a.k.a. the Host machine) to handle communication with the (very slow, in computer terms) Human Being, including Displays (quite slow), Keyboards, File Systems, and running "loops in parallel". On the Real-Time Target, running its Real-Time OS (where timing determinism is important), you avoid Displays (no Front Panels, no Charts, no "examining data"), File I/O, and large Arrays (you export those to the Host, which has the Memory Space and time-luxury to handle them).
The really nice thing is that the programming paradigm is largely the same -- LabVIEW on the PC looks much like LabVIEW on the RT OS, and a "recognizable" version runs on the FPGA.