I propose making the Max & Min primitive (currently fixed at two elements) resizable, like some array functions, so it can compare 3, 4, 5, ... elements when you know how many you have to compare.
For example, I currently have 5 elements coming from a bundle, so I have to do this:
It would be very useful if RT FIFOs could be of type lvclass, as long as the class's private data members are of static types (performing the same check that is done for clusters when you try to use them as the type for RT FIFOs).
Get contents of all XML Node Types:
As a beginner at XML parsing, it would be great if LabVIEW had a VI (like Get Node Text Content) that worked for every node type, returning the contents, whatever they may be.
All node types stated here:
are possible, and each may require a different set of property/invoke nodes (some have children, some don't; some have values, some don't; and so on).
Inputs: Node handle
Outputs: Raw node contents (whatever XML is contained within the tags of the Node handle), Value (where applicable), Name (where applicable)
Write general string to XML node type:
It would also be great if there was a VI (or set of VIs) that could take a string input and convert it to W3C-conformant XML. It's important to use the W3C definition, rather than LabVIEW's own XML schema, for external compatibility.
Inputs: Node handle, Node Type
Outputs: XML string of Node Type
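Purely as a text-language analogy (a LabVIEW VI obviously would not look like this), here is a sketch of what "get contents of any node type" could return, using Python's xml.dom.minidom. The function name and the output tuple are hypothetical, mirroring the Name/Value/Raw-contents outputs proposed above:

```python
# Text-language analogy of the proposed "get contents of any node type" VI,
# sketched with Python's xml.dom.minidom. Names are illustrative only.
from xml.dom import minidom

def get_node_contents(node):
    """Return (name, value, raw XML) for any node type, where applicable."""
    name = node.nodeName    # e.g. tag name, or '#text' / '#comment'
    value = node.nodeValue  # None for elements; text for text/comment nodes
    raw = node.toxml()      # serialized XML of the node and its children
    return name, value, raw

doc = minidom.parseString('<root a="1"><child>hi</child><!-- note --></root>')
for n in doc.documentElement.childNodes:
    print(get_node_contents(n))
```

The point is the uniform interface: elements, text nodes, and comments all go through the same call, with Value and Name populated only where applicable.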
I often have code in my apps where some error-out nodes are not wired, simply because the errors are generally not of interest to me or the error wiring would clutter up my block diagram. Typically this happens a lot in UI-handling code where a lot of property nodes are used. For these parts I rely on automatic error handling for debugging purposes. One of the drawbacks of this method is that program execution is suspended when the automatic error handler kicks in, and it is even worse if this happens for code that is in a loop. Your only option then is to abort the app, which is no good for your reference-based objects, for example.
I would love to have the ability to just specify my own 'Automatic Error Handler', enabling me to decide what to do with the unhandled errors. Just logging them is what first comes to mind, but maybe also do some special stuff depending on the type of error, just like a 'normal' error handler. I want to be in control!
An added benefit is that your application then has a catch-all error handler, enabling you to at least log every error that occurs, even those not wired through. (Everyone forgets to wire some error-out that they actually did want to wire at one time or another, don't they? ;-))
Of course, the proposed setting in the image would ideally also be available programmatically via application property nodes.
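For comparison, this is essentially the mechanism Python offers with sys.excepthook: a user-replaceable catch-all that receives every error nobody handled explicitly. A minimal sketch, where the handler name and the choice of logging instead of a dialog are my own illustration:

```python
# Sketch of a user-defined "automatic error handler": a catch-all hook that
# logs unhandled errors instead of suspending execution with a dialog.
import sys
import logging

logging.basicConfig(level=logging.ERROR)

def my_automatic_handler(exc_type, exc_value, exc_tb):
    # Decide yourself what to do with unhandled errors: here, just log them.
    logging.error("Unhandled error: %s: %s", exc_type.__name__, exc_value)

# Install the custom catch-all; anything not caught elsewhere lands here.
sys.excepthook = my_automatic_handler
```

The proposed LabVIEW setting would play the same role as the `sys.excepthook` assignment: one place where the developer, not the IDE, decides what happens to unhandled errors.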
I use disable structures and conditional disable structures more and more as my coding starts to spread over multiple targets (Host, RT and FPGA).
I like to include some debugging indicators for my code so that I can (with the proper conditional disable symbols set) debug my code more easily but still remove the bloat for actual release code.
What I have noticed is that controls and indicators which are disabled in this way are NOT accurately represented on the FP. As such, I am currently unable to determine by looking at the FP of a VI whether half, or even all, of the visible indicators are actually being used in the code.
Even when the code is running, the controls and indicators which are actually disabled are still visible (and presumably still available over VI Server, for example). I think these controls should actually be removed, or at least have a visual indication that they are disabled on the BD (distinct from the appearance caused by writing to the "Disabled" property of the control).
The LabVIEW help states: "When compiling, LabVIEW does not include any code in the inactive subdiagrams of the Conditional Disable structure" but I question how true this statement really is.
Although these controls are DISABLED (Not present in the source code)........
Here they are.....
This raises issues on the FPGA level more urgently than on the PC side, but I feel the sentiment behind the idea is the same.
Of course things get more complicated when the controls are connected to the connector pane, but perhaps simply prohibiting the presence of a connector-pane terminal in a conditional disable structure would solve that problem.
We're witnessing more and more requests to stop LV hiding important information from us. In one direction, we want to be able to know (and some want the code to break) if structures are hiding code.
Others want LV primitives to give visual feedback as to how they are configured, especially if that configuration can have an effect on what's being executed or how it's executed.
Examples include (Please please feel free to add more in the comments below)
Array to cluster (Cluster size hidden)
Boolean array to number (Sign mode hidden)
FXP simple Math (Rounding, saturation and output type hidden)
SubVI node setup (when right-clicking the subVI on the BD and changing its properties: show FP when run, suspend, and so on)
Sub VI settings in general (Subroutine, debugging)
I know there are already ideas out there for most of these (I simply chose examples to link to here; I don't mean to leave anyone's ideas out on purpose). But instead of targeting the individual neuralgic points where we have problems, I would like to acknowledge for NI R&D the idea behind most of these problems: hiding information from us regarding important differences in code execution is a bad thing. (Some of the linked ideas go much further than simply not hiding the information, and I have given most of them kudos.) I don't mean to steal anyone's thunder. I only decided to post this because of the apparently large number of ideas which have this basic idea at heart. While many of those go further and want additional action taken (most of which is good and should be implemented), I feel the underlying idea should not be ignored, even if all of the otherwise proposed changes are deemed unsuitable.
My idea can be boiled down to the fact that ALL execution relevant information which is directly applicable to the BD on view should be also VISIBLE on the BD.
As a disclaimer, I deem factors such as FIFO size and queue size to be extraneous factors which can be externally influenced and thus do not belong under this idea.
Example: I have some oscilloscope code running on FPGA and had the weirdest of problems where communications worked fine up to (but not including) 524288 (2^19) data points. As it turns out, a single "Boolean Array to Number" node was set to convert the sign of the input number, which was completely wrong. I don't know where that came from; maybe I copied the primitive when writing the code and forgot to set it correctly. My point is that it took me upwards of half a day to track down this problem, due to the sheer number of possible error sources in my code (it's really complicated stuff in total) and having NO VISUAL CLUE as to what was wrong. Had there been SOME kind of visual clue as to the configuration of this node, I would have found the problem much earlier and would be a more productive programmer. Should I have set the properties correctly when writing the code initially? Sure, but as LV projects grow in complexity these kinds of things become very burdensome.
After much frustration searching the LabVIEW help and NI website, I finally came across the reason why my project kept coming up with VI file conflicts and/or using the incorrect version of a VI. Apparently, when searching for a VI, if there is a Windows shortcut (.lnk) in the search path, LabVIEW follows it! Now this is a very powerful feature, but a dangerous one too. Apparently this has been a feature of LabVIEW all the way back to version 1.0. This fact is not mentioned anywhere in the LabVIEW help, but I did finally find this article: http://digital.ni.com/public.nsf/allkb/B43C655BA37
In my case I have lots of example code on my PC. I often put shortcuts to similar code in the same folder with VIs and project as a quick way to reference alternate methods of accomplishing similar tasks. No problem, I'll just turn off this feature in the VI Search Path page of the Options dialog, right? Much to my surprise there is no way to turn this off.
Suggestion: Please add an option to disable this feature on the VI Search Path page of the Options dialog. Even if this idea is dismissed and not implemented, please at least add this information to the LabVIEW help, perhaps on the Paths Page (Options Dialog Box), if not in several other places in the help. It would certainly have saved me hours of frustration and lost productivity.
First of all, this idea only makes real sense, when using SINGLE ELEMENT QUEUES (SEQ)!
The idea is that you dequeue an element from an SEQ and guarantee that the element is returned (enqueued) to the SEQ by using an In-Place structure (see picture).
This would make it impossible to "lose" the data because of a programming error.
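As a text-language sketch of the same guarantee (all names are my own), Python's context managers give the analogue of the proposed In-Place structure: the element is checked out of the single-element queue and is re-enqueued no matter what happens inside the block, even on error:

```python
# Single-element queue whose element is borrowed and guaranteed to be
# returned, modeled with a context manager (analogue of an In-Place
# structure). Illustrative sketch only.
import queue
from contextlib import contextmanager

seq = queue.Queue(maxsize=1)   # the single-element queue (SEQ)
seq.put({"count": 0})

@contextmanager
def borrow(q):
    item = q.get()             # dequeue the single element
    try:
        yield item             # caller works on it "in place"
    finally:
        q.put(item)            # guaranteed re-enqueue, even on error

with borrow(seq) as data:
    data["count"] += 1         # data cannot be "lost": the finally re-enqueues
```

The `finally` clause is what the In-Place structure's closing border would provide: there is no code path that exits the block with the element still dequeued.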
The recently introduced Raspberry Pi is a 32-bit ARM-based single-board computer that is very popular. It would be great if we could programme it in LabVIEW. This product could leverage the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK (or other methods of getting LabVIEW to run on it).
The Raspberry Pi is a $35 (with Ethernet) credit-card-sized computer that is open hardware. The SoC is a Broadcom BCM2835 with an ARM11 core running at 700 MHz, resulting in 875 MIPS of performance. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power. So, about 15 times the processing power!
Wouldn’t it be great to programme the Raspberry Pi in LabVIEW?
Well, this idea has haunted me for a couple of years, and now I think it's time to air it. I feel the For-loop, the While-loop, and the Timed loop are so similar that they are begging for a merger. A single configurable Loop Structure would simplify, and with a little thought strengthen, the API. What's the difference between a While-loop and a For-loop with a conditional terminal anyway? Have you ever wished for iteration timing information being available inside your For-loop (I know I have)? "Oh, but those structures have been around forever, we can't touch those"... Well, what happened with the stacked sequence structure? Please read on for a minute or two and tell me if I'm losing my marbles here. And please chip in with your own modifiers, since LabVIEW is growing in (sometimes unnecessary) complexity. Thus:
Instead I propose the Loop Structure which when initially drawn looks like this:
The above is basically a loop running forever (don't worry, you can stop it), but it can be modified to do many, many other things, just be patient. One feature of the loop structure is the box in the upper-left corner, which is quite similar to what we have in a For-loop today. This will, no matter the configuration of the loop structure, always show the current iteration setting of the structure. By default that is never-ending, but if you drag in a conditional terminal you change the loop behavior to a While-loop (note that I suggest a simpler way to get to the terminal than via the right-click context menu):
Arrays can be wired to the structure border as usual to give For-loop-like behavior. The count terminal changes from "Inf" to an "N" to indicate that it's a finite, albeit at edit-time unknown, number of iterations:
You can of course wire out of the count terminal inside the loop structure as usual to get the count at run-time. If the iteration count can be deduced at edit-time, a number will appear instead of the "N":
This number is blue to indicate that it is automatically calculated. You can just type in a new number if you wish to run a different number of iterations, in which case all the usual ideas on this Idea Exchange about what should happen to auto-indexed tunnels apply. If you override the count manually the number will be in black text:
You can of course combine different exit conditions, in this case a fixed number of iterations with a conditional terminal wired as well for possible early exit:
The automatically calculated count terminal aids in determining if the loop actually runs the desired number of times:
All the usual stuff about tunnels, shift registers and so on applies to this structure as well, but on top of that it can also be configured the way you can currently only configure a Timed loop. Consider how valuable some of these parameters and settings could be for ordinary loops, for error handling and for timing, for instance. But the main feat is that this is still the same loop structure; it will simplify the palette a lot:
And now an additional feature that ties some of the parameters from the timed structure together with ordinary loops: this loop structure is event-enabled! I propose stuff like this (we're only scratching the surface with this image):
It's late where I am now, so I'll stop here, but all of the above makes it extremely easy to do things you simply can't do today. What about a Priority Structure?:
So, is it time to consolidate the ever-evolving loop code of LabVIEW into one structure to rule them all?
I suggest an option added to the Open VI Reference primitive to open a VI reference without loading its dependencies. I suggest option bit 10, i.e. option 0x200:
The demand for this arises when you want to access methods and properties of VIs that may be broken, while on the other hand you don't have any need to run those VIs - for instance in one of my current tools (BatchEditor) where I'm tasked with investigating hundreds of VIs simultaneously of which some could be broken or missing dependencies. Other situations would be tools for traversing project trees for instance. Opening a large number of healthy VI references takes a while, and when something is broken in a VI, opening even a single reference could take 30 seconds to minutes.
Currently you can suppress the "loading" and "searching" dialogs by setting option 0x20 on the Open VI Reference primitive, but that only loads the refnum silently, as far as that gets you. Opening the refnum takes the same amount of time as if you could see those dialogs, and you are rewarded with an explorer window if a dependency search fails. I just want a way to open a VI refnum without even starting to look for dependencies, thus very quickly and guaranteed silently.
The relevant people would know that this request isn't that hard to implement, as it is kind of already possible using some ninja tricks. I'd like such an option to be public.
A longstanding issue with the "Active Plot" property is that it throws an error if fewer plots currently exist. Conversely, if we wire data containing more plots, the graph automatically adapts.
The main problem with this is that the order of operations matters. We need to write the value first, followed by the Active Plot properties. Often we already know what kind of plots we want (color, name, etc.), even if one of the plots is only added in a later step or the terminal is written a nanosecond later due to code scheduling. Workarounds mean excessive sequentialization, because we need to enforce a strict order: (1) write data, (2) update plot properties.
The current behavior is annoying, and there are many forum examples where that was the cause of the problem (example).
Two suggestions can address this:
(1) If an Active Plot property is written and that plot does not yet exist data-wise, it should be created automatically and the node should return without error. (Of course, all other missing plots up to that number need to be created too; they can have default properties.)
(2) Maybe there should also be a property for "number of plots" that can be written to define the number of plots.
There is a construct I am quite fond of in pointer-friendly languages: using iterator math to implement circular buffers of arbitrary data types. They are a little slower to use than straight arrays, but they provide a nice syntax for fixed-size buffers and are helpful in cases where you will be prepending and appending elements.
I am pretty certain that queues are implemented as circular buffers under the hood, so much of the infrastructure is already in place, this is mostly adding a new API. Added bonus: the explicit circular buffer can be synchronous, unlike the queue, so for example you can put them in subroutine VIs.
It should be easy to convert 1D arrays to/from circular buffers. Array->CB is basically free; the elements are in order in memory. CB->Array requires two block copies (most of the time). This can be strategically managed, much like the Reverse or Transpose operations.
You can implement most of the following two ideas naturally:
Circular buffers would auto-index and cycle the elements and not participate in setting 'N'.
You can do 95+% of what I wanted to do with negative indexing:
A lot of the classic divide and conquer algorithms become tractable in LV. You can already use queues to implement your own stack and outperform native recursion. A CB implementation of the stack would be amenable to subroutine priority and give a nice performance kick. I have done it by hand for a few datatypes and the beauty and simplicity of the recursive solution gets buried in the implementation of the stack. A drop-in node or two would give you a cleaner look and high-octane performance.
Finally, perhaps the most practical reason yet: simple XY Charts.
As for appearance I'd suggest a modified wire like the matrix data type. Most if not all Array primitives should probably accept the CB. A few new nodes are needed to get/set buffer size and number of elements and to do the conversions to/from 1D arrays. The control/indicator could have some superpowers: set the first element, wraparound scrolling (the first element should be highlighted).
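As a sketch of the semantics (not NI's implementation; class and method names are my own), a minimal fixed-capacity circular buffer with negative indexing and the two-block-copy CB->Array conversion might look like this:

```python
# Minimal fixed-capacity circular buffer: O(1) append with overwrite of the
# oldest element, negative indexing, and CB->Array in at most two block copies.
class CircularBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.start = 0                  # index of the oldest element
        self.size = 0

    def append(self, x):
        """Append; overwrites the oldest element when full."""
        end = (self.start + self.size) % self.capacity
        self.buf[end] = x
        if self.size < self.capacity:
            self.size += 1
        else:
            self.start = (self.start + 1) % self.capacity

    def __getitem__(self, i):
        """Index relative to the oldest element; negative indices work too."""
        if not -self.size <= i < self.size:
            raise IndexError(i)
        return self.buf[(self.start + i) % self.capacity]

    def to_array(self):
        """CB -> Array: one slice, or two block copies when wrapped."""
        end = self.start + self.size
        if end <= self.capacity:
            return self.buf[self.start:end]
        return self.buf[self.start:] + self.buf[:end % self.capacity]

cb = CircularBuffer(3)
for v in [1, 2, 3, 4, 5]:
    cb.append(v)                        # 1 and 2 get overwritten
```

This is the behavior the idea asks NI to expose natively: the buffer silently cycles on append (handy for XY charts), and `cb[-1]` gives the newest element without any index bookkeeping by the caller.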
How about having a timeout occurrence as an input for functions which support timeouts?
I am illustrating a single use case with queues (and a notifier) but I would see this as being beneficial to nearly ALL functions with timeout inputs.
Sometimes we'd like to wait for one of a few different functions (an example of mine which springs to mind is the Dequeue primitive). At the moment we must essentially poll each primitive with a relatively short timeout, see if one responded and then handle accordingly. This is (for me at least) ugly to look at and introduces polling which I generally don't like. I'm an events man.
What I propose is that instead of simply defining a timeout in milliseconds, we can define both a timeout in milliseconds AND an occurrence for the timeout. If we wire this data to several primitives (all sharing the same occurrence), the first primitive to receive data triggers the occurrence so that the others waiting in parallel return also.
In the case where no data arrives, each function waits the defined amount of time, but upon timing out it DOES NOT fire the occurrence. This covers corner cases where you may want different parallel processes to have different timeouts (yes, there are such cases, although they may be rare). It is possible to change the "priorities" of the incoming queues in this way.
Background info: One example where I could use this is RT communication, where we multiplex many different commands over TCP or UDP. On the API side it would be beneficial to work with several strictly-typed queues to inject data into this communication pipe while still maintaining maximum throughput. I don't like using variants or flattened strings to achieve this multiplexing.
Being forced to poll means the code must decide between efficiency (low CPU usage) OR response time (setting a very low timeout). Although the CPU load for polling may be low, it's not something I personally like implementing and I think it's generally to be avoided.
There IS a LVOOP solution to this particular problem, but not everyone can use LVOOP in their projects (for various reasons). I can envisage other use cases where interrupting a timeout would be desirable (the Event structure, Wait (ms), VISA Read, and so on and so forth).
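To make the proposal concrete, here is a Python model of the behavior (all names hypothetical): several queues share one "occurrence"; the first enqueue fires it so that parallel dequeues return early instead of running out their full timeouts, while a plain timeout does NOT fire it:

```python
# Model of a shared "timeout occurrence": the first dequeue to receive data
# wakes every parallel dequeue sharing the occurrence; a plain timeout
# returns quietly without firing it. One-shot sketch, names illustrative.
import threading
import collections
import time

class SharedOccurrence:
    """One-shot occurrence shared by several queues."""
    def __init__(self):
        self.cond = threading.Condition()
        self.fired = False

class OccurrenceQueue:
    def __init__(self, occurrence):
        self.items = collections.deque()
        self.occ = occurrence

    def enqueue(self, x):
        with self.occ.cond:
            self.items.append(x)
            self.occ.fired = True        # first data fires the occurrence...
            self.occ.cond.notify_all()   # ...so parallel dequeues return too

    def dequeue(self, timeout):
        deadline = time.monotonic() + timeout
        with self.occ.cond:
            while not self.items and not self.occ.fired:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    return None          # plain timeout: does NOT fire
                self.occ.cond.wait(remaining)
            return self.items.popleft() if self.items else None

occ = SharedOccurrence()
q1, q2 = OccurrenceQueue(occ), OccurrenceQueue(occ)
q1.enqueue(42)                           # fires the shared occurrence
```

No polling anywhere: each waiter blocks on the shared condition, and a dequeue on `q2` after data arrived on `q1` returns immediately rather than after its full timeout.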
Classes? OOP? ... Huh?
Even if you don't (yet) work with LV classes, you may have noticed that they are becoming increasingly widespread in the LV world. In fact, the excellent new Actor Framework that ships with LV 2012 relies heavily on classes. LV classes are great, but they can impact your productivity as a developer as your application becomes larger. I'd encourage everyone to click the magic KUDOS button for this idea, since classes will likely affect us all sooner or later!
Most class-based architectures contain some degree of linking. One form of linking is inheritance where parent-child relationships are implicitly defined, and another form of linking arises from nesting libraries where classes (e.g.) are placed inside other libraries.
Unfortunately as the linking increases in a project, the IDE starts to become very sluggish! Those who have worked on mid-sized class-based applications know the symptoms:
For many projects these symptoms are a minor annoyance, but as your project grows they can become a serious impediment to productivity. Why should it take over 30 seconds to modify a class's inheritance?!
Obviously careful design can reduce linking to some extent, but that just postpones the pain. The reality is that all class-based projects start to suffer from these symptoms once they reach a "reasonable" size.
Improve the responsiveness of the LV editor when working with classes.
Others have written about this topic well before me. Here are a few relevant discussions:
Feel free to link more!
When programming with large applications, often times you'll have clusters carrying a lot of information. If you hover over the cluster wire and observe Context Help, you might see something like this:
This Context Help window above is rather large and doesn't necessarily make it any easier to see the structure or contents of the cluster. My proposed idea calls for the ability to expand or collapse the cluster contents within the Context Help window, such as this:
What do you think?
I use the Array to Spreadsheet String primitive extensively, and most of the time I never use the Format String input (I used to wire an empty string constant) and still get the result I expect. So I think it would be better if this required terminal were changed to an optional one.
It is known that Array to Spreadsheet String is polymorphic, but whether we wire an array of I32 or of DBL, the output string is in DBL format only. It would be good if the output string adapted to the data type that is wired, unless otherwise specified by the Format String.
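In text form, the requested behavior is simply a formatter whose format string is optional and which falls back to each element's own type when omitted. A hypothetical sketch (the function name and defaults are my own, not NI's):

```python
# Sketch of "Array to Spreadsheet String" with an optional format string:
# when fmt is omitted, each cell is rendered in its own type's default
# representation (integers stay integers, floats stay floats).
def to_spreadsheet_string(rows, fmt=None):
    """Tab-delimited rows, newline-separated; fmt is optional."""
    def cell(x):
        return (fmt % x) if fmt else str(x)  # adapt to the element's type
    return "\n".join("\t".join(cell(c) for c in row) for row in rows)

print(to_spreadsheet_string([[1, 2.5], [3, 4.5]]))
```

With `fmt=None` an I32 value comes out as `1` and a DBL as `2.5`; supplying `fmt="%.2f"` overrides that, which is exactly the optional-terminal behavior proposed above.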
Currently, when you use implicit property nodes and create a "VI Snippet", the image automatically switches them to explicit property nodes. It would be good if this tool kept the property nodes implicit.