This is the start of what I hope can be the best resource for using the Actor Framework on CompactRIO.
First of all, Actor Framework can be deployed on cRIO as of LabVIEW 2012.
I believe I was the first customer to ask to deploy AF on cRIO. Early on I worked directly with NI and niACS to deploy AF on cRIO, so I have been doing this for a while. I have learned a lot along the way, as I am sure have others.
I would like to keep the discussion focused on AF on cRIO, not AF in general. So if you have questions not specific to cRIO, let's try to keep those in another AF discussion.
I expect to revise this as the topic evolves, so this should be "evergreen". I expect to expand the topics and link to more in-depth material as it is created.
Deploying an AF project from the Integrated Development Environment (IDE)
Deploying AF in a Real Time Executable (RTEXE)
Communication from a cRIO to a PC in AF using the Linked Network Actor (LNA)
Creating a messaging Interface
Interacting with data on an FPGA using AF on RT
Building Executables
Using Source Distributions
Determinism
Time Delayed Send Message wrecks determinism in other loops
Error When Deploying from IDE stating that a particular VI is broken
Phoenix, LLC
CLA, LabVIEW Champion
Check Out the Software Engineering Processes, Architecture, and Design track at NIWeek 2018. I guarantee you will learn things you can use daily! I will be presenting!
I have a cRIO project that uses three loops: signal acquisition (back and forth from the FPGA), internal timing, and Ethernet communications. I have not written the Ethernet part yet, as it's the simplest and we don't have the hardware yet anyway. It's in LV2011, but I have downloaded LV2014 and done the AF tutorial.
It seems that in principle AF would be very good for this application. However, after reading what you wrote above, I'm leery of wading into this and getting into performance issues. Is AF that much worse than normal LV code on the RTOS, or did I misunderstand what you wrote?
First off, I don't want to scare people away.
My understanding is that this is an OOP in LabVIEW problem and not an AF problem. So, if you are going to use classes and dynamic dispatch you "may" run into problems.
I am deploying thousands of VIs in hundreds of classes. I use object composition heavily.
More later.
Casey
Time Delayed Send Message wrecks determinism in other loops
- This method appears to be a "blocking" method and other methods do not run until this method returns. Upon return there is a huge spike in CPU utilization
This is hopefully being fixed (Ref).
Has anyone run into issues using multiple network streams on the cRIO, or do you typically keep one actor whose main responsibility is pushing data between the cRIO and Host? I had a project where I tried to give each subactor a network stream so that different host subpanels could read data separately and connect/disconnect at will. My cRIO did not like opening/closing so many streams, and CPU usage shot way up. I eventually had to switch back to Network Shared Variables due to time constraints on the project.
drjdpowell wrote:
Time Delayed Send Message wrecks determinism in other loops
- This method appears to be a "blocking" method and other methods do not run until this method returns. Upon return there is a huge spike in CPU utilization
This is hopefully being fixed (Ref).
This was already fixed in LV2014.
Jed394 wrote:
Has anyone run into issues using multiple network streams on the cRIO, or do you typically keep one actor whose main responsibility is pushing data between the cRIO and Host? I had a project where I tried to give each subactor a network stream so that different host subpanels could read data separately and connect/disconnect at will. My cRIO did not like opening/closing so many streams, and CPU usage shot way up. I eventually had to switch back to Network Shared Variables due to time constraints on the project.
The general recommendation is "One actor whose main responsibility is pushing data between cRIO and Host." If you do anything else, you're going to be breaking the actor tree with all the issues that introduces. Obviously you *can* do it, but we generally would advise against it.
Now, your particular use case has many separate subpanels that are totally independent from each other, which makes it less like breaking the tree and more like running several trees at the same time.
But I can't comment on the cRIO efficiency of network streams.
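The single-gateway pattern recommended above can be sketched in text. Since LabVIEW is graphical, this is a rough Python analogue, not Actor Framework code; the names (GatewayActor, send_to_host) are illustrative. The point is that component actors never own a network connection themselves: they hand messages to one actor that owns the sole cRIO-to-Host link.

```python
# Conceptual sketch (Python, not LabVIEW) of the "one gateway actor" pattern:
# component actors enqueue messages to a single gateway actor, which is the
# only owner of the cRIO<->Host connection. All names are illustrative.
import queue
import threading

class GatewayActor:
    """Single owner of the host link; serializes all outbound traffic."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.sent = []  # stand-in for the one real network stream

    def send_to_host(self, topic, payload):
        # Any component actor may call this; it only enqueues.
        self.inbox.put((topic, payload))

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:            # poison pill = stop message
                break
            # In a real system this would write to the single network stream.
            self.sent.append(msg)

gateway = GatewayActor()
worker = threading.Thread(target=gateway.run)
worker.start()

# Component actors push through the single connection:
gateway.send_to_host("temperature", 22.5)
gateway.send_to_host("pressure", 101.3)
gateway.inbox.put(None)  # shut down the gateway
worker.join()
print(gateway.sent)  # [('temperature', 22.5), ('pressure', 101.3)]
```

Because only one actor ever opens or closes the connection, the repeated stream open/close churn that drove up CPU usage in Jed394's case never happens.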
I make multiple connections.
3 System Level (UI and 2 watchdogs)
4 Subsystem Level
I also have instances where I make a connection between a main subsystem on one chassis and a remote portion of that same subsystem on another chassis.
I also can't comment on performance other than to say it works.
AristosQueue wrote:
The general recommendation is "One actor whose main responsibility is pushing data between cRIO and Host." If you do anything else, you're going to be breaking the actor tree with all the issues that introduces. Obviously you *can* do it, but we generally would advise against it.
This would have been unreasonably unwieldy in Casey's system, or one of comparable complexity. The component actors in his application export a *lot* of controls and indicators, and pushing all of those messages through the higher level systems would have bloated them with methods whose sole task was to forward data to the next level. And, I suspect, that bloat would interfere with Casey's ability to mix and match his component actors in future systems. (I invite Casey to comment further on that, since it's his design and his company's IP, and I don't wish to speak out of turn.)
I don't think attaching multiple UI actors to a system breaks the actor tree, and I certainly don't think it's contraindicated. Treat the UI actors as nested actors of the components on the RT side. At that point, you've given them the same logical standing as the actors that manage hardware I/O. Those actors are, after all, managing the flow of data into and out of the tree.
On the host side, the individual UI actors form a hierarchy as we expect, but again, the nested actors are just managing their I/O.
niACS and I are agreeing here... I maybe didn't state it right... multiple UI actors is the equivalent of that "totally independent" actor trees thing that I alluded to above. Yes, I think it is the right choice for an application that has such independence (which Casey's does and it sounds like Jed's does as well).
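To illustrate the point about treating UI actors as nested actors that simply manage I/O, here is a rough Python analogue (LabVIEW itself is graphical, and all names here are illustrative, not AF API). Each RT-side component owns its own UI link and tolerates that link attaching and detaching at will, so the components behave as independent trees rather than forcing every message through one central forwarding layer.

```python
# Conceptual sketch (Python, not LabVIEW) of "UI actors as nested actors of
# the RT components": each component treats its UI link like any other I/O
# channel, so a subpanel can connect or disconnect without touching the
# rest of the system. All names are illustrative.
import queue

class ComponentActor:
    """An RT-side component whose UI link is just another I/O channel."""
    def __init__(self, name):
        self.name = name
        self.ui_link = None  # nested UI actor's inbox, if one is attached

    def attach_ui(self):
        self.ui_link = queue.Queue()
        return self.ui_link

    def detach_ui(self):
        self.ui_link = None

    def publish(self, value):
        # Data flows out only while a UI is attached; otherwise it is dropped.
        if self.ui_link is not None:
            self.ui_link.put((self.name, value))

# Two components behave as two independent "trees"; each UI connects at will.
acq = ComponentActor("acquisition")
timing = ComponentActor("timing")

ui = acq.attach_ui()   # only the acquisition subpanel is open
acq.publish(1.0)
timing.publish(99)     # dropped: no UI attached to the timing component

print(ui.get())        # prints ('acquisition', 1.0)
```

The design choice being sketched: no component needs a method whose sole job is forwarding another component's data, which is the bloat niACS describes for the single-gateway alternative.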