Most modern hardware oscilloscopes offer a limit-test feature that defines a limit band by applying plus and minus offsets in the X and Y directions to a reference signal.
With the “Configure Mask and Limit Testing” Express VI it is possible to use constants for the upper and lower limits, or to define tables that act as the upper and lower limits.
With a new function, “Def. from Ref.”, you could create the table content from a reference signal.
In most cases it is not enough to add a constant value to the reference signal in the Y direction.
An extreme example is a square waveform.
By simply adding a constant value in the Y direction, you get no tolerance on the rising and falling edges for time-base jitter.
Such a function would be nice to see in a future version of the “Configure Mask and Limit Testing” Express VI.
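For illustration, here is a minimal Python sketch (names and window logic are my own, not an NI API) of how such a band could be derived from a reference signal: the X tolerance is applied by taking the max/min over a window of ±dx samples before adding the ±dy offset, which opens the band around fast edges instead of hugging them.

```python
def limit_band(ref, dx, dy):
    """Build upper/lower limit curves from a reference signal.

    For each sample i, look at a window of +/- dx samples around i
    (the X tolerance) and take the max/min over that window, then
    add/subtract dy (the Y tolerance).  On a square wave this widens
    the band around the edges, tolerating time-base jitter.
    """
    n = len(ref)
    upper, lower = [], []
    for i in range(n):
        window = ref[max(0, i - dx):min(n, i + dx + 1)]
        upper.append(max(window) + dy)
        lower.append(min(window) - dy)
    return upper, lower

# On a square-wave edge the band opens to the full amplitude range:
up, lo = limit_band([0, 0, 0, 1, 1, 1], dx=1, dy=0.5)
```

With a pure Y offset the band at the edge would be only ±dy wide; with the window it spans from below the low level to above the high level around the transition, exactly the jitter tolerance described above.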
I was thinking it would be nice to call the DETT (Desktop Execution Trace Toolkit) programmatically, in particular to:
1) Start and stop a capture
2) Save and load a configuration
3) Save the log file to a user-defined location
4) Perhaps even compare log files, although this can also be done with existing technology
As we get used to more multithreaded programming ideas, such as the Actor Framework, the DETT will become more important, because debugging with independently executing clones is not straightforward. One really nice such feature for the AF has been created by niACS (https://decibel.ni.com/content/docs/DOC-44158).
The next step, to me, is to allow it to be called during, for example, a unit test, so that I can measure performance. There is clearly a chicken-and-egg problem, because the code in which I call the event may preload the code under test, but I can easily imagine being able to set up and tear down the test using different VIs.
How will this help?
1) We can do something correctly, record the trace, save the file, and then diff a unit test against the known-good file, checking that it still produces the same output (if we fire user events as niACS did with the AF). This might not be the best way of unit testing (ideally we develop the test before the code, à la Test-Driven Development), but we should perhaps also be allowed to look inside the process (white-box testing) to see, say, whether more calls are made than the last time the code was run.
2) We can create performance benchmarks for code with tests that are easily re-run. Sure, we ideally should use the same machine with no more software than the last time the test was run, but we can also:
a) See variation across different machines, platforms, etc.
b) Assess code smells at a quantitative level
c) Assess upgrades across versions of LabVIEW
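The diffing step in 1) could be scripted today against exported trace files. A minimal Python sketch, assuming the trace is exported as tab-separated text with the timestamp in the first column (adjust the parsing to whatever the actual export format is):

```python
def normalize_trace(lines):
    """Drop the timestamp column (assumed to be the first, tab-separated
    field) so two runs of the same code compare equal even though they
    executed at different times."""
    events = []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        events.append("\t".join(fields[1:]))  # keep everything but the timestamp
    return events

def traces_match(golden_lines, test_lines):
    """True if the event sequences are identical once timestamps are ignored."""
    return normalize_trace(golden_lines) == normalize_trace(test_lines)
```

A unit test would then read the known-good export and the fresh export and assert `traces_match(...)`; white-box checks such as "no more calls than last time" fall out by comparing the lengths of the normalized event lists.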
I didn't find this idea with a quick search; perhaps something similar has been posted already?
I would like to propose an extension to the Index Array function, to accept an array of numbers or booleans on the index input. The picture below illustrates the idea. The new Index Array function would replace the for loops in both cases, making for a neater and more readable diagram.
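As a textual sketch of the proposed behavior (NumPy's fancy indexing does essentially this for numeric indices), here is a hypothetical `index_array` in Python covering both the numeric and the boolean case:

```python
def index_array(arr, indices):
    """Proposed behaviour: an array wired to the index input selects
    one element per index; a boolean array acts as a keep/drop mask.
    Either mode replaces an explicit For Loop on the diagram."""
    if indices and isinstance(indices[0], bool):
        # Boolean mode: keep elements whose mask entry is True.
        return [x for x, keep in zip(arr, indices) if keep]
    # Numeric mode: one output element per supplied index.
    return [arr[i] for i in indices]
```

For example, `index_array([10, 20, 30, 40], [3, 0, 2])` picks elements in the given order, and `index_array([10, 20, 30, 40], [True, False, True, False])` filters by the mask.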
Say a 1% discount on the annual DSRL fee for each unresolved CAR per company/organisation.
If you think 1% is too much, please suggest a percentage that you think would be acceptable.
If NI did this, it would really show how much they care about the quality of their products, and that they value the feedback of their users.
It would also be an incentive for users to report issues and for R&D to fix them.
Nowhere in the Help for the Initialize Array function is it indicated that this is perfectly fine:
Unfortunately, this situation could result from a Ctrl-B operation deleting a broken wire to the "dimension" input, and it could remain undetected for a while (and be difficult to debug).
Since an empty array can be generated in other, simpler ways (for instance, by creating a constant), I would argue that this "feature" of the function is more harmful than useful and should be replaced by a mandatory dimension-size input.
I am using the ActiveX Server property of built LabVIEW applications to implement inter-process communication.
Unfortunately, the CLSID (a unique identifier that allows each application to register with the ActiveX service at the operating-system level) changes every time the project settings change (e.g. when a new file is added to the application).
The official workaround from NI support is to copy the CLSID properties out of the project file once (see attachment; the .lvproj has to be opened with an editor, e.g. Notepad++) and to restore these initial CLSID properties after every change to the project. This means you have to, e.g., add a source file, save and close the .lvproj, open it in an editor, replace all properties with the initial ones, close it, reopen it in LabVIEW, and build your application.
Of course this workaround works, but it is no fun and a bit of a shame...
So my idea is to add an option in the "Advanced" section, next to the "Enable ActiveX server" setting, to define a fixed CLSID.
For anyone interested, a bit more background on why this causes trouble:
Imagine you have to write automated test software with LabVIEW and TestStand for a test bench. Say you have 10 VIs that TestStand uses to communicate with an application (you use an application because you want to protect your code from operator changes, or because it makes it much easier to distribute your code to several test benches). Every time you change your application, rebuild it, and distribute it, those TestStand VIs fail to find the registered service, because the rebuilt application now uses a different ID that is not automatically recognized by your VIs. So you have to open all 10 VIs, relink them to the new service, and then distribute those VIs as well.
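Until such an option exists, the manual restore step could at least be scripted. A sketch in Python, assuming the .lvproj keeps these values in standard `<Property Name="...">value</Property>` elements; the property names in the list are placeholders to be replaced with the actual CLSID entries found in your own project file:

```python
import re

def restore_properties(original_text, rebuilt_text, names):
    """Copy the full <Property Name="..."> element for each named
    property from the saved original .lvproj text into the rebuilt
    one, undoing the CLSID churn after a project change."""
    result = rebuilt_text
    for name in names:
        pattern = r'<Property Name="%s"[^>]*>[^<]*</Property>' % re.escape(name)
        match = re.search(pattern, original_text)
        if match is None:
            continue  # property not present in the saved copy
        saved = match.group(0)
        # Lambda avoids backslash interpretation in the replacement text.
        result = re.sub(pattern, lambda m: saved, result)
    return result

# Hypothetical property names - use the ones from your own .lvproj:
PRESERVED = ["MyApp_serverGUID", "MyApp_typeLibGUID"]
```

Running this over the rebuilt project file before building would replace steps "open in editor, replace all properties" with one script call.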
The DAQmx Create Channel VI does not follow LabVIEW coding conventions: the CJC Channel and CJC Source inputs are both controls, yet they are wired to the right side of the VI's connector pane. There are enough spare terminals on the connector pane to recode this VI correctly.
The Drag and Drop event Drop has an element named "Available Data Names". This element name is needlessly long and wastes a lot of block-diagram space (the whole vertical area can't be used for a case structure, for example).
Something simpler, like "Data", would be enough.
Changing a graph's axis mapping (linear to/from logarithmic) is super easy using the context menu in edit mode. But as far as I can tell, this feature is not available at run time:
It's inconvenient to have to stop a VI to change the axis mapping (and sometimes impossible, e.g. in stand-alone applications), and I believe that most users would expect to find lin/log controls in the context menu, right next to autoscale. Why not enable this feature at run time? If, for whatever reason, a developer doesn't want these options displayed, it would be easy to delete them from the run-time context menu.
It's certainly possible to change the axis mapping programmatically, but I would prefer it to be built into the graph control and enabled by default. It's frustrating to manually create axis-scale control elements, over and over again, every time an application doesn't permit stopping the VI to change the mapping from the edit-mode context menu.
The "%s" format specifier currently reads characters until a space, CR, LF or tab is encountered. This can be quite cumbersome in some cases:
It would be nice to have an additional text format specifier to read text until a CR or LF is encountered:
An even better solution might be to read text until a character from a user-defined character set is encountered (in this case 0D=CR; 0A=LF; 09=TAB):
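As a minimal, text-based sketch of that "scan until a character from a set" behaviour (the function name and return convention are mine, not an existing LabVIEW specifier):

```python
def scan_until(text, stop_chars="\r\n\t"):
    """Read characters until one from stop_chars is hit; return the
    token and the remaining (unconsumed) text.  With the default set,
    this is the proposed 'read until CR, LF, or TAB' specifier; any
    other set (e.g. just "\r\n") gives the 'read until end of line'
    variant."""
    for i, c in enumerate(text):
        if c in stop_chars:
            return text[:i], text[i:]
    return text, ""  # no stop character found: consume everything
```

Unlike "%s", spaces are kept in the token unless they are explicitly included in the stop set.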
Left shift registers (For Loop, While Loop) can be either initialized or not.
In general, the programmer knows what is needed for the correct behavior of the code, but a code modification (say, a broken wire followed by a Ctrl-B) can change this status:
If this last step is performed without remembering that the shift register needs to be initialized for the rest of the code to function properly, an insidious bug can result.
My suggestion: let the user specify whether a shift register requires initialization (just as a VI connector-pane terminal can be marked as required).
Currently, you can drop a control on a queue control to change the inner type of that queue
There's no such shortcut for network streams
I would like this shortcut to be enabled. Also, maybe have the network streams optionally show the type in the control:
Waveform charts are really useful but have an annoying bug. Every now and then (maybe once every 10 seconds, depending on the update rate), the digital displays blink to zero even though a zero value was never written to the chart. I use these charts frequently in HMI-type displays, and explaining this behavior is always part of the training for new operators ("Don't worry if this critical sensor goes to zero for a second unless the line on the chart also goes to zero, then you should freak out and hit E-stop"). This has been brought up a couple of times over the last 8 years (http://forums.ni.com/t5/LabVIEW/Waveform-chart-dig
I'd like sometimes to have more control over what to delete from the compiled cache.
If I am working on a certain project and I'm having trouble with RT deploys and inlined-VI code-change propagation (it's a real thing), I have found that deleting the compiled code cache can help a lot. The thing is, I don't really want to delete all of the cache, only the entries related to a subset (the current project / a directory on disk).
It's not a huge deal-breaker but if the technical barriers are not great, it would be a nice choice to have.
At a recent NI Days there was discussion indicating NI is looking at alternative, non-binary formats for .vi files, particularly as this could simplify version control in terms of comparing/merging file modifications.
This is not really an idea I'm looking for popular support for- just wanted to say to the NI chaps, please seriously consider alternatives to XML as I'm not alone in thinking it's a bit bloated and more effort to parse (CPU and implementation) than competing formats. JSON maybe? I just mention this off the top of my head without having put any serious thought into it, which I have no doubt R&D will do.
A type def with an additional Super Strict mode, in which even the actual values are copied across instances. Changing the value in the super-strict type def (for example, an integer type def being changed from 0 to 1) causes all instance values to be changed. It would act as a linked constant that auto-updates all instances. Maybe in this mode you could only create it as a constant or a greyed-out control.
If there were an option to overlap structures (While Loop, For Loop, Case Structure), it would be easy to keep the wiring intact when deleting structures, as shown below.
In the following picture, the error cluster is overlapped so that it sits inside the structure without disturbing the wiring.