It's got to be a duplicate, but I could not find it...
A significant number of vi.lib VIs still output error codes (I32) instead of an error cluster.
For instance, the famous Ramp.vi:
returns error -20006 if you ask for zero samples. Type this value into the "Explain Error..." window of the Help menu and you get:
So it's not that the error code is mysterious and cannot be interpreted (I must say I was a bit puzzled by this discussion on error codes).
NI has to fight this problem themselves. For instance, here is the code you find in NI_AALPro.lvlib:AAL Resample Filter Prototype Design.vi:
What is that "?!+Magnifier" VI, you'll ask (AAL Error Information.vi, in the same library mentioned above)? I am probably not supposed to post it, but I will nonetheless, considering what it REALLY does:
Yep, it simply returns the same numeric error code value (again) and the call chain for the VI generating the error (but it won't tell you that the real source is the DLL called in the "Kaiser" VI above). I assume (but I can't prove) that the codes returned by the analysis library are among those recognized by the Explain Error VI.
Not only is it an annoyance not to be able to simply connect VIs with an error cluster wire; it also does not make error handling particularly easy (basically, the way I read the answers of Aristos Queue and Norbert_B in the thread I quoted above is: "reverse engineer our VIs if you really want to 1) get the complete list of error codes they can output, 2) understand their cause").
My suggestion: hire a couple of interns to sift through NI's VI libraries and change error code outputs into error cluster outputs with proper messages.
Obviously, for compatibility reasons, open previous code with an added unbundle primitive that returns the old error code (with a list of warnings after the first compilation). You've done that before and we have survived.
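Sketching the idea in Python terms (since G is graphical): the compatibility shim would essentially be a wrapper that bundles the bare I32 code into the three fields of an error cluster. All names and the message text below are illustrative, not NI's actual implementation.

```python
# Illustrative only: maps a bare I32 error code onto the three fields
# of a LabVIEW-style error cluster (status, code, source).
ERROR_MESSAGES = {
    -20006: "The number of samples must be greater than zero.",  # paraphrased
}

def to_error_cluster(code, source):
    """Wrap a legacy numeric error code into an error-cluster record."""
    return {
        "status": code != 0,                      # True on any nonzero code
        "code": code,
        "source": f"{source}: {ERROR_MESSAGES.get(code, 'Unknown error code')}",
    }
```

The proposed unbundle-based compatibility path would then simply read the `code` field back out for old callers.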
The "Ignore All" feature implemented for loading VIs should be extended to mass compiling as well. When we do a mass compile on a larger code base, it's a pain to ignore the items individually.
TDMS can be a really useful format for saving large amounts of data. The problem I have is that the defrag function can take a long time to execute with no feedback to the user. This means there's no way of reporting back to the user an estimate of how long the defrag will take, or even whether it is still alive. I understand that estimating the defrag time remaining may be a tall order, but having a status flag reporting that it is still active (maybe with a time stamp so you can double-check it's still going) would be a great help.
I think it would be nice if LabVIEW were smart enough to know that when I drop a For Loop around scalar inputs, it shouldn't auto-index output tunnels but rather use Shift Registers for matching inputs and outputs.
The common use case for this is the error input/output; it annoys me how it becomes an array output.
As it is already wired, inline and not broken, dropping a For Loop around it should not break my code!
Reference or Class inputs are another use case: I want to pass the same thing around, not create an array.
Shift registers are better than non-auto-indexed tunnels (the other option) as they protect the inputs on zero iterations.
This would remove one step required for most use cases, speeding up my development experience.
This is a follow-through for Zekasa's idea posted here. It was suggested that the function be a separate one, with which I agree. I would like to see this in LabVIEW's basic package, without adding things onto the install or coding it myself.
I don't know how many times I've added a Case structure after the fact, but I do know that there isn't an easy way to make a tunnel the case selector. Usually I delete the tunnel, drag the case selector down, and then rewire; there should be an easier way. For Loops and While Loops have an easy way to index/unindex or replace with a shift register, so why can't a Case structure be the same?
It is time to put a dent in the floating point "problems" encountered by many in LV. Due to the (not so?) well-known limitations of floating point representations, comparisons can often lead to surprising results. I propose a new configuration for the comparison functions when floats are involved, call it "Compare Floats" or otherwise. When selected, I suggest that Equals? becomes "Almost Equal?" and the icon changes to the approximately equal sign. EqualToZero could be AlmostEqualToZero, again with appropriate icon changes. GreaterThanorAlmostEqual, etc.
I do not think these need to be new functions on the palette, just a configuration option (Comparison Mode). They should expose a couple of terminals for options so we can control what "close" means (# of sig figs, # of digits, absolute difference, etc.), with reasonable defaults so that in most cases we do not have to worry about it. We get all of the ease and polymorphism that comes with the built-in functions.
There are many ways to do this, I won't be so bold as to specify which way to go. I am confident that any reasonable method would be a vast improvement over the current method which is hope that you are never bitten by Equals?.
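For readers outside LabVIEW, the proposed "Almost Equal?" mode corresponds to what Python exposes as `math.isclose`; a rough sketch of the intended semantics:

```python
import math

# Strict equality is exactly the trap this idea wants to defuse:
# 0.1 + 0.2 is 0.30000000000000004 in binary floating point.
print(0.1 + 0.2 == 0.3)  # prints False

# An "Almost Equal?" comparison mode with configurable tolerances
# (relative and/or absolute) gives the expected answer:
print(math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9, abs_tol=0.0))  # prints True
```

The `rel_tol`/`abs_tol` pair maps directly onto the optional terminals proposed above.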
Many of the Mathematics and Signal Processing VIs retain state, rendering them unusable inside reentrant VIs: http://digital.ni.com/public.nsf/allkb/543589DF37B
Many of the VIs in this list (all those in my current application, unfortunately) can only work with single-channel data. When manipulating multi-channel data, you can work around that fact by running the channels serially through the VI you need, but that (1) takes much longer for large data sets or several channels, and (2) is not an option when performing live manipulation of streaming data block-by-block.
I ran into this problem while developing code in the Actor Framework, where Do.vi and Actor Core.vi (the two main framework methods) are both Shared Reentrant. Now that AF is a native feature in LV, I expect that more people will run into problems with these VIs.
We need stateless versions of these VIs so we can use multiple copies on a multichannel data set. You could probably keep backward compatibility by pushing the core logic into a new stateless subVI and keeping the shift register or feedback node on the main VI's diagram.
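The refactor can be sketched in Python terms (hypothetical names; the real VIs are graphical): move the state out of the subVI and into the caller, so each channel owns its own copy.

```python
# Sketch of the proposed stateless core. The legacy VI keeps its filter
# state in an uninitialized shift register; here the caller passes the
# state in and gets the updated state back, so N channels can safely
# share one implementation, reentrant or not.

def filter_step_stateless(x, state, alpha=0.5):
    """One step of a first-order lowpass; returns (output, new_state)."""
    y = alpha * x + (1 - alpha) * state
    return y, y

# Per-channel state lives with the caller, not inside the filter:
channel_state = {"ch0": 0.0, "ch1": 0.0}
for name in channel_state:
    y, channel_state[name] = filter_step_stateless(1.0, channel_state[name])
```

The backward-compatible wrapper VI would just hold `state` in its own shift register and call the stateless core.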
The number of parallel instances is currently capped at 64, independent of hardware. This limit should be raised.
First reason: since even 64-bit Windows 7 supports up to 256 cores, it would be reasonable to raise that limit to 256.
(Even the next version of Windows Mobile (8) will support 64 cores. Mobile! On a phone! Obviously the upcoming hardware is fast moving in that direction.)
Second reason: sometimes it is useful to generate many instances even if we have fewer cores available, for example to maintain individual data in a large number of identical reentrant subVIs. (Such a usage example, where we want many instances even on a single-core machine, can be found here.)
Idea: raise the maximum number of parallel instances of a parallel FOR loop to 256.
I recently had an application in which I had to capture the image of VI front panels in order to insert them into a report.
(These front panels contained XY graphs.)
One more piece of information: the VIs from which I get the images are called in a loop. I am not capturing the image of an already-displayed front panel!
(Getting an image of a displayed VI on a button-click event works fine, but that is not my need. My need is to go through many VIs and capture their front panels.)
The problem is that sometimes (more often in an executable) the front panel image was not completely updated,
or I got the image from the preceding VI call! The graph updates asynchronously!
(Even though I checked the "synchronous update" property of the graph.)
I think the problem is that I am calling the VIs too rapidly, without any wait. My need is to generate the report as rapidly as possible.
I tried adding a wait in the VI, but the required wait delay depends on the amount of data inserted in the graph.
I also tried an event loop, deferring panel updates, and many other features, but with the same bad behaviour.
It would be nice to have an event that could tell us that the graph has been updated, so I could wait on this event!
Or it would be nice to have a method node on front panels that could "force update"!
After the method call returns, we would be sure that all graphical objects of the front panel are up to date, synchronously.
Thanks for your help.
Sorry for my bad English.
A simple Idea, though I'm not sure how simple it will be to execute.
Make run-time environments (RTEs) backwards compatible.
Allow a LV2010 executable to run on a computer that only has the LV2012 RTE installed.
It saves hard drive space and install time. The LV RTEs are 600 MB or so. Yes, hard drive space is far from expensive, but it's annoying having to sit through five installs just so you can run programs from 8.5, 2009, 2010, 2011, and 2012.
In the Context Help for the Prompt User for Input Express VI (Functions palette » Programming » Dialog & User Interface), it says that you can use the VI to prompt the user for a password. However, there is not an option for a "password" data type when configuring the VI, and thus any curious onlooker would be able to read your password if this VI was used! Why not add a "password" type to the configuration options (see picture)? Sure, you can build your own VI to do this already, no problem, but it still kind of makes sense to have the password data type as an option.
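As a point of comparison, text languages treat non-echoing password entry as a standard-library feature; here is a minimal Python analogue of the requested option (the function name is mine):

```python
import getpass

def prompt_password(prompt="Enter password: "):
    # getpass suppresses the echo of typed characters, which is the
    # behavior the proposed "password" configuration option would add
    # to the Prompt User for Input Express VI.
    return getpass.getpass(prompt)
```

The Express VI already masks nothing today, so even this small addition would close the onlooker gap described above.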
Auto-indexing of arrays in For and While Loops is a nice luxury in LabVIEW. One option that could save much time would be a menu option to turn on conditional indexing; this would expose a boolean terminal under the auto-index icon to select whether the current iteration's value should be added to the array or skipped. From an execution standpoint there would only be a minor performance hit (the compiler could still preallocate the maximum array size on For Loops and automatically return the used subset). This could also work for auto-indexed inputs, but that would have less use than the auto-indexed output case. I know I have built many conditional arrays inside a For Loop, and it requires a Case structure and a Build Array, making the code less readable and requiring time and thought. It can also be less efficient than what a compiler could do.
See the example below, which would run a For Loop and only build an array of values < 0.1.
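In text-language terms, the proposed conditional tunnel is a filtered comprehension; this Python sketch mirrors the example:

```python
import random

# 100 loop iterations; only values passing the test reach the output
# array, with no explicit Case structure + Build Array pattern needed.
samples = [random.random() for _ in range(100)]
small = [x for x in samples if x < 0.1]
```

The compiler-side optimization mentioned above corresponds to preallocating `len(samples)` slots and trimming to `len(small)` afterwards.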
As soon as we have more complicated data structures (e.g. clusters of arrays), a large portion of the FP real estate is wasted on borders, frames, trims, etc.
We need a palette full of "Amish" controls, indicators, and containers that eliminate all that extra baggage. We have a few controls already in the classic palette, but this needs to be expanded to include all types of controls, including graphs, containers, etc.
A flat control consists of a plain square and some text (numerical value, string, ring, boolean text, etc). A flat container is a simple borderless container. A flat graph is a simple line drawing that would look great on a b&w printer. A flat picture ring looks like the image alone.
They have a single area color and a single pixel outline, if both have the same color, the outline does not show. They can also be made transparent, of course. If we look at them in the control editor, there are only very few parts.
Now, why would that be useful?
Let's have a look at the data structure in the image. There is way too much fluff distracting from the actual data. If we had flat objects, the same data could look like the "table" below. Note that this is now the actual array of clusters, no formatting involved! It is fully operational: e.g. I can pick another enum value, uncheck the boolean, or enter data as in the cluster above.
Many years ago in LabVIEW 4, I actually made a borderless cluster container in the control editor and it looked fine, but it was difficult to use because it was nearly impossible to grab the right thing with the mouse at edit time.
The main problem of course is that the object edges completely overlap, making targeted selection with the mouse impossible. (For example, the upper right corner pixel is the corner of an array, a cluster, another array, and an element all at the same time.)
So what we need is a layer selection tool that allows us to pick what we want (similar to tools in graphics editing software). It could look similar to the context help shown in the picture, with selection boxes for each line. Picking an object would show the relevant handles so we can interact with the desired object. Another possibility would be to hover over the corner and hit a certain key to rotate through all nearby elements until the right element is selected, showing its resize handles. I am sure there are other solutions.
As a welcome side effect, redrawing such a FP is relatively cheap.
For a complete validation of a piece of software, it is mandatory to perform tests. When testing, we distinguish between two major tasks:
- static code analysis
- dynamic code analysis
For static code analysis, we have the VI Analyzer Toolkit. It automates the task of looking into the sources (front panel and block diagram) and comparing the current layout against the recommended layout (style guide). Additionally, it detects known sources of issues such as poor performance, or even crashes and misbehavior at runtime.
So it makes perfect sense to define a requirement that static code analysis be performed before proceeding to dynamic (runtime) code analysis.
Yet VI Analyzer tasks have no option for creating such a link, as a "comment" or "description" field is missing from the configuration!
I suggest adding such a field at the cfg-file level (exported VI Analyzer configuration), where we can add strings like [Covers: <req_ID>] as we are used to in LV code.
I’ve already put up ideas, about 7 weeks ago, for four development boards that could be LabVIEW targets:
1) LabVIEW for Raspberry Pi (current kudos 139)
2) LabVIEW for Arduino Due (current kudos 74)
3) LabVIEW for BeagleBoard (current kudos 49)
4) LabVIEW for LM3S9D96 Development Kit (current kudos 15)
I wanted to leave it at that to gauge LabVIEW community/user interest, however an exciting new board has just been introduced, which is too good to leave out. It’s the Texas Instruments Stellaris Launchpad.
It’s very attractive for three main reasons:
1) It is very easy to get LabVIEW Embedded for ARM to target this board (a Tier 1 port)
2) The microcontroller is powerful with many useful on-chip peripherals
3) The price is extraordinarily low.
The Stellaris Launchpad features are:
The most interesting feature is that it costs $4.99 including postage. Yep, just under five dollars! Including postage! I’ve already ordered two!
The Texas Instruments Stellaris Launchpad can be programmed using the free Code Composer Studio in C/C++ or the free Arduino IDE using Energia from github. Both great ways to program. It just needs LabVIEW as the third exciting programming option.
Wouldn’t it be great to program the Stellaris Launchpad in LabVIEW?
The Timing palette is looking bad with all these gaps. A simple fix would be to fill these holes with useful functions. I'm proposing three and attaching two from my reuse code. (I may re-create the third later.)
Time to XL.vi (attached) and its inverse, XL to Time.vi:
12:00:00.0 AM Jan 0, 1900 is a pretty common epoch (base date) for external programs, and converting from the LabVIEW epoch comes up several times a year on the forums; a Time-to-Excel conversion has a few solved threads under its belt. Moreover, for analysis against external data from other environments, you are often using Access, Excel, Lotus... All share the same epoch (and leap-year bug) in their date/time formats. These VIs have been pretty useful to me, although the names may change to avoid (tm) infringements.
Time to Time of Day.vi (attached) has also been in my arsenal; it proves valuable and comes up in a few threads per year on the forum.
The gaps in the palette make it a perfect fit.
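The conversion itself is simple arithmetic; this Python sketch (function names are mine) shows what such VIs would compute, using the well-known 1462-day offset between the 1900-system serial dates and the LabVIEW 1904 epoch, and ignoring time zones and the pre-March-1900 leap-year quirk:

```python
SECONDS_PER_DAY = 86400
# Offset between Excel's 1900-system day 0 ("Jan 0, 1900") and the
# LabVIEW epoch (1904-01-01): 1461 real days plus Excel's phantom
# Feb 29, 1900 leap day.
EPOCH_OFFSET_DAYS = 1462

def labview_to_excel(seconds_since_1904):
    """LabVIEW timestamp (s, UTC) -> Excel 1900-system serial date."""
    return seconds_since_1904 / SECONDS_PER_DAY + EPOCH_OFFSET_DAYS

def excel_to_labview(serial_days):
    """Excel 1900-system serial date -> LabVIEW timestamp (s, UTC)."""
    return (serial_days - EPOCH_OFFSET_DAYS) * SECONDS_PER_DAY
```

The fractional part of the serial date carries the time of day, so the same pair of functions covers timestamps as well as dates.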
At present, when an external call is made via System Exec.vi, you have two options: wait on completion true or false. If set to true, LabVIEW suspends until the external app finishes; if set to false, LabVIEW carries on but cannot wait on output from the external application. There should be a way of getting the output from System Exec.vi when wait on completion is set to false.
To give a real-world example: I have a large test program that basically monitors a comms bus (UDP in this case) and reacts to requests received on it. It also monitors the health of the bus and goes into a fault state if nothing is received for over 100 ms. When I call my external application (which connects to the WLAN, gets some data, and returns it to LabVIEW), LabVIEW pauses while this happens, and this in turn forces a timeout error. If I set wait until completion to false, then I don't get the data from the external app. I have implemented a workaround (using a temp file which is monitored from LabVIEW, with wait on completion set to false), but it would be much simpler if a method could be implemented in LabVIEW itself.
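What the idea asks for already exists in text languages as a launch-then-poll pattern; here is a Python sketch of the requested semantics (the `echo` command is just a stand-in for the WLAN helper app):

```python
import subprocess

# Launch the external app without blocking the caller.
proc = subprocess.Popen(
    ["echo", "data from external app"],
    stdout=subprocess.PIPE,
    text=True,
)

# The main loop keeps servicing the comms bus; it merely polls the
# child instead of suspending, so the 100 ms watchdog keeps running.
while proc.poll() is None:
    pass  # keep doing other work here

# Once the child has exited, its output is still available: exactly
# the "wait on completion = false, collect output later" mode the
# idea requests for System Exec.vi.
output = proc.stdout.read().strip()
```

A new output refnum on System Exec.vi, readable after the fact, would give LabVIEW the same middle ground without the temp-file workaround.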