11-12-2015 03:05 AM
I think what you want is an automatic tool that "splits" an application running on RT via the dev environment
into two parts at build time, where one part goes on the RT target and the other on the UI host.
I have no idea what is involved in something like this, but I think it is non-trivial to implement
for general usage.
11-12-2015 03:08 AM
Maybe you can have a look at network shared variables
and only use those from a UI app
(if that is possible for your use case).
11-12-2015 03:08 AM
Quite simply, I'm not using streams because of:
1) Time pressure (in my experience it's not as easy or quick as it may seem, and bugs, race conditions, etc. between two asynchronous systems are much more time-consuming to find)
2) Added complexity in an already complicated system which is deployed in several plants around the world (5 Windows PCs, 3 different LabVIEW applications, 4 GigE cameras, 1 CVS1458 RT system). These already use a custom TCP/IP comms protocol, but tinkering with that requires very careful consideration in case I break a working system which took some time and effort to get bulletproof.
3) Why re-invent the wheel when NI have a wheel which will almost definitely work better than mine?
Again, maybe I'm missing something?
The specific question that follows from this is: is there a reason I can't use the same method used by the development environment?
Can someone at least agree that this communication mechanism does exist within the development environment, and that it very effectively creates a remote front panel for the RT system? Or explain to me that there isn't one and I am wrong in that assumption? Please??
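For illustration, here is a minimal sketch of the kind of length-prefixed framing a custom TCP comms protocol like this typically relies on. It is Python rather than LabVIEW, and the 4-byte length prefix and helper names are assumptions for the example, not the poster's actual protocol.

# Length-prefixed framing over a raw TCP socket: every message is sent as
# a 4-byte big-endian length followed by the payload bytes.
import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    """Prefix the payload with its length, then send everything."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, or fail if the peer closes the connection."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    """Read one length-prefixed message and return its payload."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

On the LabVIEW side the same framing maps onto a TCP Write of the length followed by the payload, and a TCP Read of 4 bytes followed by a TCP Read of the payload.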
11-12-2015 03:14 AM
@jwscs wrote:
any specific reason, why you didn't use the network streams?
For my application there might be several PCs connected to a single RT target at any time, and some of those PCs don't use LabVIEW.
I have used network streams before and didn't have any problems with them either. Also, if you need bidirectional comms, you would need 2 streams rather than 1 TCP connection.
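To illustrate that last point, here is a minimal host-side sketch of one bidirectional TCP link, the kind of connection a non-LabVIEW PC could open just as easily. The address, port and newline-delimited framing are made up for the example and are not the actual protocol.

# One TCP connection carries commands out and replies back in, so a single
# link does the job of a pair of one-way network streams.
import socket
import threading
import time

RT_TARGET = "192.168.1.10"   # hypothetical RT target address
PORT = 55555                 # hypothetical command port

def reader(replies) -> None:
    """Print whatever the RT target pushes back on the same connection."""
    for line in replies:                         # newline-delimited replies
        print("from RT:", line.rstrip().decode())

with socket.create_connection((RT_TARGET, PORT)) as sock:
    replies = sock.makefile("rb")
    threading.Thread(target=reader, args=(replies,), daemon=True).start()
    sock.sendall(b"GET_STATUS\n")                # commands go out...
    sock.sendall(b"START_ACQ\n")                 # ...on the same socket
    time.sleep(1.0)                              # give replies time to arrive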
11-12-2015 03:17 AM - edited 11-12-2015 03:27 AM
The problem here is really that you do not have the same code executing when you do a remote debug session as when you run the RT application standalone. In the first case the code gets compiled with debug settings and all, deployed to the RAM of the realtime device, and then started from there, from within the development environment on your host. The executable code is hooked up and connected to a copy of the panel on your host. As you mention this works remarkably well, but it has several implications.
For one, the code as executed on the RT target is anything but realtime. It's good enough to work fine in most cases, but the whole remote hooking does have an impact on the code execution. And that impact can be significant, although you often don't notice, because the whole idea of such an execution is to debug your code with breakpoints, single stepping, execution highlighting and whatnot, which disturbs the realtime behaviour way beyond what the remote hooking could ever cause on its own. So the realtime behaviour of such an application is certainly less than it could be in a real compiled and installed RT app. The simple fact of having to include the front panels already has some impact; hooking into them to get all the data transferred to your host has an even (much) bigger one. Such an application is definitely less realtime than a fully deployed app, no matter what you do or don't do.
When you deploy the compiled rtexe to the target, it gets compiled as a standalone executable with most of these provisions removed. It is then copied physically to the storage media on the target and launched from there. Hooking into it after the fact is a lot more complicated and failure-prone, and for the most part not really possible, because most of the debugging code is removed and the link to the source code on your host is completely lost. While it's theoretically possible to let a project hook into a remote executable compiled with enough debug information, including the diagrams and all front panels kept intact, it's also a pretty complicated challenge to present the user with a simple experience if anything in this process goes wrong, such as an (even minor) mismatch between your local code and the deployed code. So it is mostly a usability issue: when you debug your code you work in the LabVIEW development environment and expect to be bothered with such issues, but when you develop such an application and let others run it, they do not expect such problems and will throw their hands up in the air if they happen.
In fact I think it is possible to hook into an rtexe that was compiled with the diagram code intact and debugging enabled, but not from a simple LabVIEW executable, only from a project within the development environment. But that is hardly the experience you would want to give an end user, aside from the fact that you would need the development environment installed on their computer.
Basically it comes down to the fact that the code to hook into remote apps over the network is part of the LabVIEW development environment and has specifically been exposed through the project framework. There might be a possibility to use it through VI Server in your own application, but it is almost certainly only executable in the development environment and would therefore require a full LabVIEW installation anyhow. And even if the interface is exposed through public VI Server methods, it's going to be messy to use in your own application; most likely a lot messier than implementing your own inter-application messaging.
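To make that last suggestion concrete, here is a rough sketch of a home-grown inter-application message method: a small TCP command server that the deployed application could run, answering any client, LabVIEW or not. It is Python for brevity, and the port, command names and status fields are invented for illustration.

# Minimal threaded command server: one newline-delimited command per line,
# one reply per command.
import json
import socketserver

STATUS = {"state": "idle", "frames_acquired": 0}   # stand-in for real app data

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:
            cmd = raw.strip().decode()
            if cmd == "GET_STATUS":
                reply = json.dumps(STATUS)
            elif cmd == "START_ACQ":
                STATUS["state"] = "acquiring"
                reply = "OK"
            else:
                reply = "ERROR: unknown command"
            self.wfile.write(reply.encode() + b"\n")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 55555), CommandHandler) as srv:
        srv.serve_forever()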
11-12-2015 03:31 AM
@rolfk nice writeup thx
@deceased that makes sense, thought there was something else to learn 😉
11-12-2015 04:00 AM
YES! Thank you!! That was the answer I was looking for.
Now I have no problem implementing my own method....
11-12-2015 04:53 AM
I can sympathize with you.. RT development is not easy.. I have 4 VIs for the Network Stream Endpoints to maintain,
and the message system with variants that have to be packed/unpacked in several steps is a pain for me too.
And since it is my first big dev project with RT, I have no best practices to go on.
So.. keep your head up!!
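For what it's worth, that pack/unpack pain can often be reduced to a single step per side. A sketch using JSON in place of LabVIEW variants, with invented message names and fields:

# Bundle a message name with an arbitrary payload into one byte string,
# and recover both on the receiving side.
import json
from typing import Any, Tuple

def pack_message(name: str, payload: Any) -> bytes:
    """One packing step: name plus payload into a single byte string."""
    return json.dumps({"name": name, "data": payload}).encode()

def unpack_message(raw: bytes) -> Tuple[str, Any]:
    """One unpacking step: recover the name and payload."""
    obj = json.loads(raw.decode())
    return obj["name"], obj["data"]

wire = pack_message("Set Exposure", {"camera": 2, "ms": 12.5})
name, data = unpack_message(wire)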
11-12-2015 07:43 AM
I've used several "communications" methods for LabVIEW RT. Initially we used VI Server "remote calls" for single variables and FTP for large arrays of data (LabVIEW 7). When Shared Variables were introduced, I tried using them, but eventually gave up as I found them, shall we say, "flakey" (and not totally reliable). I now use Network Streams, and find them totally reliable and easy to use. A current project has 4 Network Streams running in parallel -- two are bilateral QMH "injectors" (to allow the Host to send a "Message" that will appear in the Remote's Queued Message Handler, and vice versa), one is a buffered "stream multi-channel sampled data to Host disk (1KHz)" data stream, and the last is a "stream asynchronous time-stamped Events to Host disk" data stream. Works like a charm.
Bob Schor
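Purely as an illustration of that layout, here is a loose analogue in Python with in-process queues standing in for the four Network Stream endpoints; the queue names, message shapes and handler are invented, not Bob's actual code.

# Four parallel channels: two bilateral QMH "injectors" plus two one-way
# data streams (sampled data and time-stamped events).
import queue
import threading
import time

host_to_rt_msgs = queue.Queue()   # injector: host -> remote QMH
rt_to_host_msgs = queue.Queue()   # injector: remote -> host QMH
sampled_data    = queue.Queue()   # buffered multi-channel samples to disk
event_log       = queue.Queue()   # asynchronous time-stamped events to disk

def remote_qmh() -> None:
    """Remote-side message handler: consume injected messages, emit events."""
    while True:
        msg = host_to_rt_msgs.get()
        event_log.put((time.time(), f"handled {msg}"))
        rt_to_host_msgs.put(f"ACK {msg}")

threading.Thread(target=remote_qmh, daemon=True).start()
host_to_rt_msgs.put("Start Acquisition")
print(rt_to_host_msgs.get())      # -> ACK Start Acquisition
print(event_log.get())            # -> (timestamp, 'handled Start Acquisition')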