It would allow an RT executable to deploy shared variables automatically (and not from the Project Explorer or the host computer, as mentioned here).
If you have a deployed RT system running the Scan Engine, you may have I/O variable (IOV) channels with scaling that was configured and deployed from the project.
What happens when you need to change a scaling factor after the system has been deployed? You would need the development tools to do this.
Since the smarts for this are already built into the LabVIEW IDE, why not put the same code into MAX for remote viewing and modification of IOV properties?
It would be nice if LabVIEW RT could give more output during the startup process of a LabVIEW Real-Time executable: for example, whether the rtexe started at all, whether dependencies went missing during the loading process, or other useful status information.
In my case I had problems with the German code page. The build process completed without failure, but the rtexe didn't start at all. I had used special German characters like "öäü" in some VI names, and the rtexe could not be loaded for that reason. Yet I did not get any message.
So please improve the debug output for LabVIEW RT.
Having used the LabVIEW 2011 Statechart Module with great success on a CompactDAQ system, I now have a new project using a CompactRIO 9075 which will be controlling a hydraulic test unit. I would like to use the Statechart Module again so the functionality of the machine is understandable to all stakeholders.
The best NI example project I could find is the Chemical Mixing Example with Statechart. It is exactly what I need, except that it uses a "headless" architecture. I need 1:1 networked communication to my host PC for recipe entry and data logging, much like the Bioreactor Example in the CompactRIO Developer's Guide. See the attached architecture JPG. I need to know how, specifically, to add a host UI and incorporate network streams for the host command sender -> RT command parser.
In other words, what software architecture would combine the Chemical Mixing Example (with Statechart) and the Bioreactor Example?
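The host-to-RT command path in that kind of architecture boils down to serializing a command on the host and parsing it on the target. Since network streams are LabVIEW-specific, here is a language-neutral sketch of the pattern in Python; the message format and all names are hypothetical:

```python
def encode_command(name, **params):
    """Host side: serialize a command and its parameters into a flat
    string, e.g. 'START_RECIPE;cycles=10;rate=2.5' (hypothetical format)."""
    parts = [name] + ["%s=%s" % (k, v) for k, v in sorted(params.items())]
    return ";".join(parts)

def parse_command(message):
    """RT side: split the message back into a command name and a
    parameter dictionary for the statechart's trigger queue."""
    parts = message.split(";")
    name, params = parts[0], {}
    for item in parts[1:]:
        key, _, value = item.partition("=")
        params[key] = value
    return name, params
```

In LabVIEW terms, the host would write the encoded string into a network stream, and the RT command-parser loop would read each element, parse it, and feed the resulting trigger into the statechart.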
It would be nice to have a low-level CAN toolkit that could build or parse the 64 data bits of a CAN frame without having to use the existing heavyweight tools.
What I would like to see is a list of low-level VIs such as readBoolean, readInteger, readFloat, readSingle ... writeBoolean, writeInteger ... with direct handling of the data coding (Intel/Motorola).
These VIs could read/write channels directly in the 64 data bits of the frame.
These VIs are not very difficult to build, and everyone who has already used the Frame API without a DBC or NCD file has done this work before.
My need is an official, validated set of low-level channel read/write VIs, without having to use the Channel API or the CAN engine, which are not very powerful.
For example:
ReadInteger ( in data, in startBit, in integerLength, in coding(Intel/Motorola), out integerValue )
ReadUInteger ( in data, in startBit, in integerLength, in coding(Intel/Motorola), out UIntegerValue )
ReadFloatCodedInteger ( ........ offset, gain )
When you speak with NI-CAN users, they all say NI-CAN is too slow. I think this toolkit could help many CAN users come back to NI-CAN.
The icing on the cake would be a wizard, built on this low-level API and a little bit of scripting, that could automatically generate the read and write VIs for selected frames of a DBC/NCD file (per frame: one read VI, one write VI, and one cluster containing the channels).
The generated result could be something like a polymorphic VI.
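To illustrate what such low-level read VIs would do, here is a minimal sketch in Python (the bit-level logic would be the same in LabVIEW). Note that real DBC files use a sawtooth bit-numbering scheme for Motorola signals; the big-endian handling below is a simplified convention, and all function names are hypothetical:

```python
def read_uint(data, start_bit, length, coding="intel"):
    """Extract an unsigned channel from the 8 data bytes of a CAN frame.
    Intel: start_bit is the LSB position in the little-endian 64-bit word.
    Motorola (simplified): start_bit is the MSB position counted from the
    most significant bit of the big-endian 64-bit word."""
    mask = (1 << length) - 1
    if coding == "intel":
        raw = int.from_bytes(data, "little")
        return (raw >> start_bit) & mask
    raw = int.from_bytes(data, "big")
    return (raw >> (64 - start_bit - length)) & mask

def read_int(data, start_bit, length, coding="intel"):
    """Signed variant: sign-extend the extracted field."""
    value = read_uint(data, start_bit, length, coding)
    if value & (1 << (length - 1)):
        value -= 1 << length
    return value

def read_float_coded_integer(data, start_bit, length, coding, gain, offset):
    """Apply the linear scaling (gain/offset) of a coded channel."""
    return read_int(data, start_bit, length, coding) * gain + offset
```

The write direction is the mirror image: mask the value, shift it into place, and OR it into the 64-bit word before serializing the frame.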
Every fixed-point number allocates 64 bits (8 bytes), or 72 bits (9 bytes) if you include an overflow status. I am working on a cRIO project in which I need to acquire and save a large amount of data, to be processed later when the RT CPU has free time (i.e., when it is neither acquiring nor transmitting data). Since the data is acquired on 12 channels at 51.2 kS/s with 24 bits per point, the fixed-point type allocates 64 bits but uses only 24, wasting 5 bytes (62.5%) for every single data point acquired.
As a workaround, I developed two VIs: one to compact the data by removing the 5 unused bytes of every fixed-point number and keeping the 3 bytes with the data, and a second that does the reverse job. With this I reduced the information to 37.5% of its original size, saving space and time when writing it to non-volatile memory. Maybe there is a way to do this directly, but I could not figure it out.
My idea is to add, in some way, the possibility of using the minimum number of bytes needed by the data, at least for storage purposes. It would be nice if NI added an option to have fixed point in two memory allocation modes:
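The compaction described above can be sketched as follows (Python pseudocode for the two VIs; it assumes the 24 significant bits sit in the low 3 bytes of each word, which depends on the configured fixed-point word length):

```python
def pack24(samples):
    """Compact signed 24-bit samples to 3 bytes each (37.5% of the
    8 bytes a 64-bit fixed-point representation occupies)."""
    out = bytearray()
    for s in samples:
        out += (s & 0xFFFFFF).to_bytes(3, "little")
    return bytes(out)

def unpack24(blob):
    """Reverse job: restore the signed 24-bit values."""
    values = []
    for i in range(0, len(blob), 3):
        v = int.from_bytes(blob[i:i + 3], "little")
        if v & 0x800000:          # sign bit set -> negative value
            v -= 1 << 24
        values.append(v)
    return values
```

At 12 channels x 51.2 kS/s, this drops the storage rate from about 4.9 MB/s to about 1.8 MB/s, which is exactly the saving the two workaround VIs provide.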
A number of times, I have found the real-time clock configuration screen in Measurement & Automation Explorer to be very limited. With only the options to set the time zone, daylight saving time, and UTC, I feel there could and should be more!
Adding feedback to show what the current RTC is set to, similar to the Windows clock configuration, would be GREAT! Additionally, add the functionality of the RT Set Date/Time VI to Measurement & Automation Explorer.
We frequently run into issues where the daylight saving time rules differ between the US and the rest of the world (frequently = twice a year).
I believe all of this could be resolved by adding one layer of abstraction that translates a base RT system clock according to user-specified settings, to be used for timestamping data, scheduling tasks, and so forth.
The console mode of RT controllers has very limited functionality. It would be great if all, or most, of the controller's settings were editable from the console.
File browsing and transfer, plus software loading, would be nice too. We have had cases where a controller was operational and held valuable data, but because its network interface had been damaged we had no way to extract that data...
It would be very nice if I could give users more fine-grained filesystem rights in the web-based configuration.
At the moment I can only give a user the right to write to the whole filesystem or to nothing at all.
In my case, I would like to send our customers updates of the startup executable, which they could install themselves via FTP, without them having access to everything on the filesystem.
Please vote if you would like to have this option.
I think the configuration property pages for SoftMotion axes are missing parameters for the maximum velocity, acceleration, and deceleration of individual axes.
When performing single-axis moves it is fairly easy to control these parameters, because they are wired to the appropriate property nodes when configuring the move itself.
But when performing moves with axes in a coordinate space, the move parameters given apply to the resulting vector, leaving the velocities of the individual axes unknown.
When configuring the maximum velocity and acceleration/deceleration for an axis, there should be the option to
a) generate an error when the requested move cannot be solved within the limits, or
b) coerce the limiting value to its maximum, solve the move using the new values, and generate a warning.
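The two options could behave like this sketch (Python, hypothetical names): for a straight-line coordinate move, each axis's velocity is just the vector velocity scaled by that axis's share of the total distance, so the planner can either reject the move or coerce the vector velocity down until every axis fits its limit.

```python
import math

def plan_vector_move(deltas, vector_vel, axis_vel_limits, coerce=False):
    """Split a straight-line coordinate-space move into per-axis velocities
    and enforce per-axis maximum velocities.
    Option a) coerce=False: raise an error if any axis exceeds its limit.
    Option b) coerce=True:  scale the vector velocity down so all axes fit."""
    dist = math.sqrt(sum(d * d for d in deltas))
    if dist == 0.0:
        return vector_vel, [0.0] * len(deltas)
    axis_vels = [abs(d) / dist * vector_vel for d in deltas]
    scale = 1.0
    for axis, (v, limit) in enumerate(zip(axis_vels, axis_vel_limits)):
        if v > limit:
            if not coerce:
                raise ValueError("axis %d needs %.3f but is limited to %.3f"
                                 % (axis, v, limit))
            scale = min(scale, limit / v)
    return vector_vel * scale, [v * scale for v in axis_vels]
```

For example, a (3, 4) move at vector velocity 10 needs axis velocities (6, 8); with a limit of 5 on the first axis, option b) coerces the vector velocity to 10 * 5/6.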
Different versions of NI-RIO support only certain versions of LabVIEW; see the link below. You cannot have LabVIEW 8.5.1 Real-Time and 2011 Real-Time coexist on the same PC, because NI-RIO 4.0 only supports up to 8.6.1. Wouldn't it be nice if you could install multiple versions of NI-RIO, just like you can have multiple versions of LabVIEW?
Currently, when you add a new (non-existing) cRIO controller and chassis to a LabVIEW project, there is no check as to whether this is a valid configuration. For example, you can successfully add a cRIO 9072 controller with a 9112 chassis to a project, even though the 9072 is a controller with an integrated chassis. I believe the LabVIEW project interface should notify the user (via a dialog box) that this is not a valid configuration before they can add modules and start developing code against an invalid configuration.
I think this would be a good option, since I have run into the problem during debugging where a previously used network stream did not get destroyed; when the code then tries to reopen the stream, you get an error that the stream is still in use. Since the refnum is lost, there is no way around this dilemma.
I did not see any options for this in LabVIEW. I only saw that you can play around with writing and reading sound files and perform some manipulation on them.
Why not compare two sound files extensively and indicate their similarities? For example, a system is running with a constant sound (a motor running in good condition), and after some days or years the same system runs badly (the sound is different and louder, with noise).
Second, if we could compare one sound with another, that would be interesting and possibly useful in applications. I know there are many sensors out there for this task, but I thought it would be of interest to see it done digitally in LabVIEW.
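One simple way to put a number on "the sound has changed" is to compare magnitude spectra. Here is a cosine-similarity sketch (plain Python DFT; this is just one of many possible metrics, and all names are hypothetical):

```python
import math

def magnitude_spectrum(x):
    """Naive DFT magnitude spectrum (first half of the bins).
    Fine for a sketch; a real implementation would use an FFT."""
    n = len(x)
    mags = []
    for k in range(n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def spectral_similarity(a, b):
    """Cosine similarity of the two magnitude spectra:
    1.0 = identical spectral shape, 0.0 = nothing in common."""
    ma, mb = magnitude_spectrum(a), magnitude_spectrum(b)
    dot = sum(x * y for x, y in zip(ma, mb))
    na = math.sqrt(sum(x * x for x in ma))
    nb = math.sqrt(sum(x * x for x in mb))
    return dot / (na * nb) if na and nb else 0.0
```

A healthy-motor reference recording would be compared against the live signal; a similarity drifting away from 1.0 flags a change in the sound.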
If anything is wrong with my idea, please let me know.
Many networks have NTP (Network Time Protocol) time servers locked to a GPS signal, with claimed accuracy better than 1 millisecond. NTP seems to be the de facto standard for distributing accurate time to systems across a network. I would like to see a VI that can use this widely available protocol to set the system time as accurately as possible. The RT Set Date and Time VI only permits specifying the time down to 1 second.
A forum contributor has provided a VI that reads the NTP protocol, but it uses the RT Set Date and Time VI to set the RT target's system time, losing any resolution beyond the second. This discards the greater precision available from NTP. The contributed VI also does not implement any of the advanced algorithms for improving the accuracy of NTP over a network, which are described at NTP.org and at links that may be found there.
Very tight synchronization (a millisecond or better) of many RT targets is possible when the NTP protocols are fully implemented. I need accurate millisecond resolution on many RT targets over a wide area (several thousand square miles). These RT targets receive their data streams via radio telemetry, so signal propagation time over the area is essentially zero. Processing of the data relies on the relative accuracy of time tags generated at multiple sites.
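For reference, the raw protocol is small enough that a minimal SNTP query fits in a few lines. This Python sketch shows where the sub-second resolution lives: the 32-bit fraction field that a seconds-only set-time call throws away. It is a bare query without any of NTP's filtering and clock-discipline algorithms; the packet layout follows RFC 4330.

```python
import socket
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds from 1900-01-01 (NTP) to 1970-01-01 (Unix)

def ntp_to_unix(seconds, fraction):
    """An NTP timestamp is 32 bits of whole seconds plus a 32-bit binary
    fraction of a second -- roughly 233 ps of nominal resolution."""
    return seconds - NTP_EPOCH_OFFSET + fraction / 2 ** 32

def query_sntp(server, port=123, timeout=2.0):
    """Send a minimal SNTP v3 client request and return the server's
    transmit timestamp as a Unix time (float, sub-second resolution)."""
    packet = b"\x1b" + 47 * b"\x00"       # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(48)
    secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp
    return ntp_to_unix(secs, frac)
```

A full implementation would also use the round-trip measurements (originate/receive/transmit timestamps) to estimate and remove the network delay, which is where the millisecond-level accuracy comes from.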
If I have a variable library with virtual folders and I choose "Export to...", only the top-level variables are exported to the file. I think variables inside virtual folders should also be exported. Virtual folders make the structure clear, and export/import is useful if you are working with several projects on different computers. I would like to see them work together.
If you try to select "Run As Startup" before having built the application, LabVIEW stops you with the message "The Real-Time application has not been built. You must build the application before deploying."
This is bad behaviour. Instead, LabVIEW should silently do the requested task, including any prerequisite ones (the build), or at least offer to do so. The Run As Startup choice already includes the deployment step (it does not complain if I have not deployed yet), so there is no reason why it could not do the build as well.