LabVIEW Real-Time Idea Exchange


The real-time controllers have a "Time server" IP input in their setup. That is great, but it is not so great that the time server has to be a piece of NI software (Logos). If it were possible to specify an NTP server, for example (and/or other standard protocols), this feature would be much more useful.

 

Most of the time we need the PACs to be synced with a third-party system.

Allow MAX to set the time and date on RT targets

Hello LabVIEW Enthusiast,

 

NI introduced malleable VIs (.vim) in LabVIEW 2017. They are a great feature to have in advanced LabVIEW application development.

But "Stall Data Flow.vim" function is missing in Real-Time Target OS from timing function palette since 2017.

It is not that you cannot use malleable VIs on the RT side; I have copied the same VI from a desktop VI to an RT VI, and it works fine without any error or broken arrow.

 

NI should include these types of VIs in the RT package by default.

RT has the Type Specialization Structure too, which makes me question why this VI is missing.

 

If you have any reason(s) to share for why it is not included on the RT side, please share your thoughts.

[Attachments: RT Side.JPG, Host Side.JPG]

Thanks,
Vinal Gandhi


Many networks have NTP (Network Time Protocol) time servers locked to a GPS signal with claimed accuracy of better than 1 millisecond. NTP seems to be the de facto standard for distributing accurate time to systems across a network. I would like to see a VI that can use this widely available protocol to set the system time with as much accuracy as possible. The RT Set Date and Time VI only permits specifying the time down to 1 second.

 

A forum contributor has provided a VI that reads the NTP protocol, but it uses the RT Set Date and Time VI to set the RT target's system time, losing any resolution beyond the whole second. This discards the greater precision available from the NTP protocol. The contributed VI also does not implement any of the advanced algorithms for improving the accuracy of NTP over a network, which are described at NTP.org and at links that may be found there.
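For reference, the sub-second information is already in the NTP packet itself: the transmit timestamp is a 64-bit fixed-point value (32-bit seconds since 1900 plus a 32-bit fraction). Below is a minimal Python sketch of a raw SNTP query that recovers the fractional part; the server name and timeout are placeholders, and none of the NTP filtering/discipline algorithms are implemented.

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"    # placeholder: any reachable NTP server
NTP_PORT = 123
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def query_sntp(server=NTP_SERVER, timeout=2.0):
    """Send a minimal SNTP request and return the server's transmit time
    as a Unix timestamp (float) with sub-second resolution."""
    # First byte: LI = 0, VN = 4, Mode = 3 (client) -> 0x23
    packet = b"\x23" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, NTP_PORT))
        data, _ = sock.recvfrom(512)
    # Transmit timestamp occupies bytes 40-47: 32-bit seconds + 32-bit fraction
    seconds, fraction = struct.unpack("!II", data[40:48])
    return (seconds - NTP_EPOCH_OFFSET) + fraction / 2**32

if __name__ == "__main__":
    t = query_sntp()
    print("NTP time (UTC):", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(t)),
          "+ %.6f s" % (t % 1))
```

The point of the sketch is simply that the fraction field carries precision well below one second; what is missing on the RT side is a way to apply it when setting the system clock.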

 

Very tight synchronization (a millisecond or better) of many RT targets is possible when the NTP protocols are fully implemented. I need accurate millisecond resolution on many RT targets over a wide (several thousand square miles) area. These RT targets receive their data streams via radio telemetry, so signal propagation time over the area is essentially zero. Processing of the data relies on the relative accuracy of time-tags generated at multiple sites.

 

 

I'd like to create a timing source from an RT FIFO reference to drive a timed loop. The timing source would tick whenever an element is placed into the RT FIFO, allowing a timed loop to be configured to read elements from the RT FIFO as they become available. This would only be available for blocking-mode reads. Once the loop wakes, the user can then read the FIFO with a 0 timeout to get the data.

 

This would provide all the nice features of timed loops to loops that need to be timed by data showing up in the FIFO. Right now you have to place a while loop that blocks on the RT FIFO and wrap it in a timed sequence for CPU assignment and priority.
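Outside LabVIEW, the pattern being requested is essentially an event-driven consumer loop: block until data arrives, then drain with a zero timeout. A rough Python sketch of that pattern (purely illustrative, not LabVIEW API; the producer is a stand-in for whatever fills the FIFO):

```python
import queue
import threading
import time

rt_fifo = queue.Queue()   # stand-in for an RT FIFO

def producer():
    """Stand-in for the code that fills the RT FIFO."""
    for i in range(10):
        rt_fifo.put(i)
        time.sleep(0.05)

def consumer():
    """Equivalent of a timed loop driven by a FIFO timing source:
    block until at least one element arrives, then drain with 0 timeout."""
    while True:
        first = rt_fifo.get()                   # blocking read = the "tick"
        batch = [first]
        while True:
            try:
                batch.append(rt_fifo.get_nowait())  # 0-timeout reads
            except queue.Empty:
                break
        print("woke with", batch)

threading.Thread(target=producer, daemon=True).start()
threading.Thread(target=consumer, daemon=True).start()
time.sleep(1)
```

A native FIFO timing source would give this same data-driven wake-up, but with the timed loop's priority, CPU assignment, and late-iteration reporting for free.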

 

Also, combined with the idea posted for multiple timing sources for a single timed loop, this could be very powerful.

I propose having the timed loop accept one or more timing sources. The loop would wake up and execute if any of the chosen timing sources ticked. If multiple sources ticked, the first timing source would be serviced first.

 

This would ease the creation of more complex timing schemes on the host. For example, an application needs to wake up every 50 ms and whenever a timing source ticks. Both wake-up events need to operate on the same data, and neither should happen at the same time. Typically this means writing two timed loops and a lot of handshaking overhead... or a single while loop with a lot of timing logic built into it, probably enclosed in a timed sequence structure so you get priority and CPU assignment.

 

All of this would be much easier if a single timed loop could handle multiple timing sources. All of your data would be in the same shift register(s), and you wouldn't have to write any handshaking code to prevent simultaneous execution.

 

Obviously the GUI for this would be somewhat complex, but for the programmatic inputs you can simply turn most of them into arrays.
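As a rough text-language analogy (not LabVIEW), the 50 ms + data-driven example above can be modeled as a single loop woken by whichever of two sources fires first, with all state living in one place and no handshaking code. The Python sketch below uses a shared wake-up queue purely to illustrate the idea; the names and timings are made up.

```python
import queue
import threading
import time

wake = queue.Queue()   # single "wake-up" channel fed by both timing sources

def periodic_source(period_s=0.05):
    """Source 1: ticks every 50 ms."""
    while True:
        time.sleep(period_s)
        wake.put("timer")

def data_source():
    """Source 2: ticks whenever new data shows up (simulated here)."""
    for i in range(5):
        time.sleep(0.12)
        wake.put(("data", i))

threading.Thread(target=periodic_source, daemon=True).start()
threading.Thread(target=data_source, daemon=True).start()

state = []   # shared state lives in one loop, like a single shift register
t_end = time.time() + 1.0
while time.time() < t_end:
    event = wake.get()               # wake on whichever source ticks first
    if event == "timer":
        pass                         # periodic processing of `state` goes here
    else:
        state.append(event[1])       # data-driven processing
print("collected:", state)
```

Because only one loop ever touches the data, the two wake-up causes can never execute simultaneously, which is exactly the property a multi-source timed loop would give for free.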