I inherited a project last year that uses a cRIO; it's my first time using one of these. Since then, almost every use has been an uphill battle. If I want to deploy working code, it takes about four hours of brute-forcing different combinations of Connect, Deploy, Launch, and Reboot, and staring at this sort of screen:
Here are the frequent problems I have:
All in all, it's a frustrating waste of time - how has your experience with cRIOs been, and do you have any workarounds to this unreliable deployment platform?
For larger projects, it sometimes seems easier to just build and deploy the built rtexe rather than trying to deploy interactively.
Thank you, I see how that would help with deploying my own code (my last bullet), but the rest of the platform should still be more stable. For example, connecting to the cRIO should be simple and rarely fail, and asking NI MAX to refresh the cRIO status shouldn't lock up either of those things.
I haven't used a cRIO much, but I'm currently doing some things with an sbRIO and a myRIO. I find LabVIEW 2021 with LabVIEW Real-Time a pretty stable platform for developing and running code. I stopped using Shared Variables about a decade ago, replacing them with Network Streams (multiple: two for Host-to-Target and Target-to-Host messages, and others to transfer data from the RIO to the Host). It works quite well; I haven't had the kinds of problems you've described.
We have tried to make both the Host and Target code as simple and "State"-driven as possible, so our Host and Target routines appear to be fairly simple (they all fit on a standard Laptop screen). We did have some compiling and loading slow-downs and occasional hangs with LabVIEW 2019, but things seem to be working smoother with 2021.
If the IP settings for the cRIO, sbRIO, or myRIO match the settings for the port on which they connect to my computer, I have rarely had problems. Sometimes the hardware can seem to become unresponsive after too much hacking in the code, but that has never been something that couldn't be fixed with a power cycle.
I do prefer to use a dedicated network connection with its own private subnet, however, and use static IP addresses. DHCP has proven problematic, as the DHCP server can decide to assign a new IP address to the hardware at any time, and connecting from the LabVIEW project by DNS name has not seemed very reliable in the past. Other things on the network can also potentially interfere with a reliable connection, which is why I prefer a private, dedicated network to communicate with that hardware.
This is with LabVIEW 2018 and 2019 currently as the two projects I'm mainly working on have had a quite long development history and we normally don't change LabVIEW versions during a project, unless there is a VERY VERY good reason to do so.
You made me smile so hard! I did cRIO development for 5 years and it was always the same, but it slowly improved up to LabVIEW 2021, which I use now.
I'll add three failures that annoyed me with the cRIO:
I must say these network issues are worse when connected via Ethernet-over-USB; network stability probably plays a role. Also make sure to "Disable RT App on Startup" when developing. Use something else to get CPU and memory info: I used a bash logger script, or you can use NI Distributed System Manager, which worked for me in about 40% of cases. A good practice when developing a feature over a longer time is to clone your top VI and remove the unnecessary parts for testing - it speeds everything up. And don't keep multiple targets in a project, even though you can; opening the project takes a lot less time.
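In case a sketch helps anyone: the logger-script idea can be done in a few lines of Python as well (this assumes a Python runtime on the NI Linux RT target; the log path, field names, and one-second sampling interval are my own choices, not from the original post). It samples `/proc/stat` and `/proc/meminfo` and appends one line per call:

```python
import time

def parse_meminfo(text):
    """Parse /proc/meminfo text into a dict of kB values."""
    stats = {}
    for line in text.splitlines():
        if ":" in line:
            key, rest = line.split(":", 1)
            stats[key.strip()] = int(rest.strip().split()[0])
    return stats

def cpu_busy_fraction(stat_then, stat_now):
    """Busy CPU fraction between two aggregate 'cpu' lines of /proc/stat.
    Field 4 of the line (index 3 after the 'cpu' label) is idle time."""
    then = [int(x) for x in stat_then.split()[1:]]
    now = [int(x) for x in stat_now.split()[1:]]
    total = sum(now) - sum(then)
    idle = now[3] - then[3]
    return 1.0 - idle / total if total else 0.0

def log_once(logfile="/var/log/rt_usage.log"):
    """Sample CPU over one second plus free memory, append to logfile."""
    with open("/proc/stat") as f:
        s1 = f.readline()
    time.sleep(1.0)
    with open("/proc/stat") as f:
        s2 = f.readline()
    with open("/proc/meminfo") as f:
        mem = parse_meminfo(f.read())
    with open(logfile, "a") as out:
        out.write("%s cpu=%.0f%% free=%dkB\n" % (
            time.strftime("%Y-%m-%d %H:%M:%S"),
            100 * cpu_busy_fraction(s1, s2),
            mem.get("MemFree", -1)))
```

Run `log_once` from cron or a small loop; tailing the file gives a rough CPU/memory history without needing Distributed System Manager to connect.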
Oh, I almost forgot. This happened to me, I think, three times: something went wrong with a VI file, and while there were no problems running through the LabVIEW project, the app failed to run as startup. No error. Of course, it always came up about a day before departure to the customer. I desperately looked everywhere in Linux and found only empty or nondescriptive logs... No git back then. So I deleted half of all the loops and tried. Nothing. Then the other half. The app started. I added back a quarter of the deleted code. It didn't work. You see where this was going. By performing bisection on the code (with each iteration taking 5-20 minutes), I would find a VI - actually one single element (once it was a Diagram Disable Structure) - which, when deleted and placed anew, fixed the problem. The first time it happened, I fixed the problem at 3 a.m. on the second day. Oh boy.
PS: The real IT guys frowned when I told them to restart a Linux computer with "reboot;reboot" command.
Thanks for the great advice everyone!
And especially Vit for letting me know I'm not alone. I can't believe I forgot to include "target was disconnected" happening immediately after a successful deployment. Infuriating!
One common theme is that LV 2021 seems more stable than previous versions - I'm on LV 2020 right now, so upgrading might move to my short list.
DHCP has proven problematic, as the DHCP server can decide to assign a new IP address to the hardware at any time, and connecting from the LabVIEW project by DNS name has not seemed very reliable in the past.
I've worked with our IT department to configure this so that the DHCP server always gives the same IP address to my hardware's MACs. This is the best of both worlds, because they can still manage the equipment and I can use DNS, but the IP doesn't change (which could otherwise confuse software that used the old DNS/IP mapping).
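For anyone who wants to ask their IT department for the same thing: on an ISC dhcpd server, the reservation is a short host declaration (the hostname, MAC, and address below are placeholders, not my actual values; other DHCP servers have an equivalent setting):

```
host crio01 {
  hardware ethernet 00:80:2f:aa:bb:cc;  # the cRIO's MAC address
  fixed-address 192.168.10.50;          # always lease this IP to it
}
```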
My PC and cRIO are on the same 1G switch and I see almost no network traffic, so I assume the network isn't causing these issues in my case.
I stopped using Shared Variables about a decade ago, replacing it with Network Streams (multiple, two for Host to Target and Target to Host Messages, others to transfer data from the RIO to the Host. Works quite well, haven't had the kind of problems you've described.
I tried this (technically, using Actor Framework messages over Network Streams) but had issues with it. For my application, I want the PC to know the cRIO's latest state; I don't care if samples drop. Since Network Streams are lossless, I found that any client hiccup (Adobe/Windows checking for updates, etc.) could create a backlog that was hard to recover from. Over the course of a week it would sometimes build up until the PC was showing values from 4-5 minutes ago, which is dangerous. Maybe you could share a bit more about how you do this successfully?
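For what it's worth, the "latest value wins" behaviour I want maps to a single-element queue that the writer overwrites (in LabVIEW terms, a size-1 queue with Lossy Enqueue Element, or a Notifier), rather than a lossless stream. A minimal Python sketch of the idea, assuming a single publisher (the class name is mine, not from any NI API):

```python
import queue

class LatestValue:
    """Lossy, notifier-style channel: a reader always sees the newest
    published value, and a slow reader can never build up a backlog.
    Sketch only; assumes a single publisher (the drop-then-put below
    is not atomic across multiple publishers)."""

    def __init__(self):
        self._q = queue.Queue(maxsize=1)

    def publish(self, value):
        # Discard the stale value, if any, so publish never blocks.
        try:
            self._q.get_nowait()
        except queue.Empty:
            pass
        self._q.put_nowait(value)

    def latest(self, timeout=None):
        # Blocks until a value is available (or timeout expires).
        return self._q.get(timeout=timeout)
```

If the consumer stalls for a minute, it simply misses the intermediate values and resumes with the current one, instead of replaying a five-minute backlog.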
We have tried to make both the Host and Target code as simple and "State"-driven as possible, so our Host and Target routines appear to be fairly simple (they all fit on a standard Laptop screen).
This is interesting. Could you elaborate on what you mean by "state-driven"? I've written PC-based LabVIEW code for years, and about once a month I wonder if I'm approaching cRIO-based development all wrong as a result of that original experience.
Ok, it's been a few months and the stars aligned: I had time, the tester had a maintenance window, and improving deployment got some priority... I found a few things that seem to have helped my specific deployment:
There will probably still be quirks, but it's noticeably faster and less frustrating now. I guess the moral of the story is "update your damn system" 😄
Oops, I'm sorry that I missed your previous reply, where you asked what I meant by "States" and perhaps how I kept Host and Target in sync. [Note that "Host" and "Target" are from the perspective of the LabVIEW Project, so "Host" is the code on the PC and "Target" is the code on the cRIO.]
Both Host and Target run what I call a "State Machine" (though I've been told this is not technically correct), where I use a Queue (and now a Messenger Channel Wire) to pass "State Messages" to a (simplified) Message Handler that I call my "State Machine". The Message is a cluster of the "Next State" and any optional "Args", coded as a Variant. The last thing the Host does in any State is (sometimes) to enqueue the next State. Just outside the Case Structure, and before the surrounding While Loop, is the loop's Error Handler, which manages errors (but let's not go there now ...).
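To make the pattern concrete for readers who haven't seen it, here is a minimal text-language sketch of that queued message handler (Python stands in for the LabVIEW diagram; the state names, the `Acquire` argument, and the log are illustrative, not from the actual code). Each message is a (next_state, args) pair, and a state's last act may be to enqueue the state that should follow it:

```python
from queue import Queue

def run_states(msgs: Queue):
    """Simplified 'State Machine' message handler: dequeue a State
    Message, act on it, optionally enqueue the next State."""
    log = []
    while True:
        state, args = msgs.get()          # block for the next State Message
        if state == "Init":
            log.append("Init")
            msgs.put(("Acquire", 3))      # Init decides what comes next
        elif state == "Acquire":
            log.append("Acquire:%d" % args)
            msgs.put(("Shutdown", None))
        elif state == "Shutdown":
            log.append("Shutdown")
            return log                    # exit the handler loop

q = Queue()
q.put(("Init", None))                     # kick things off
print(run_states(q))                      # prints ['Init', 'Acquire:3', 'Shutdown']
```

In the real design, of course, the messages don't only come from the loop itself: another loop (e.g. the Network Stream reader) enqueues States concurrently, which is what keeps Host and Target in step.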
There are a minimum of four Network Streams in my design. One is for Host->Target Messages, when the Host wants to tell the Target to "do this State, with these Arguments, now". The second is for Target->Host Messages. The remaining Streams are all Target-to-Host: one that I call "Events" informs the Host when the Target does something significant, including when it changes State, and the rest carry Data that the Target generates and the Host streams to disk (possibly doing some processing for display purposes as the data fly by).
Works quite well.
There will probably still be quirks, but it's noticeably faster and less frustrating now. I guess the moral of the story is "update your damn system"
It's not so much "update your damn system" as it is making sure you use exactly the same version on every system. Even bug-fix releases (f1, f2, etc.) will often trigger a recompilation of the VIs.
And it is absolutely important, too, to make sure the Real-Time system software installed on your cRIO/sbRIO exactly matches the LabVIEW version you use to develop and deploy your applications.