Maybe this has already been requested elsewhere and I'm missing it...
but it would be useful to have a Wait (ms) with connectors for error in and error out.
This would help keep the block diagram (BD) clean...
Good idea; it should be applied to all timing functions.
Here's my XNode to solve this.
For applications such as this, it would be handy to have a pair of pass-through terminals that would accept ANY type of data (numeric, cluster, array, etc.) on the input and pass it through to the output.
Seems like a good idea - I end up framing it in sequence structures, which looks cluttered. I like the idea of passing out the ms timer value and (usually!) skipping the wait on errors too.
I could not Kudo this fast enough.
As long as it SKIPS THE WAIT on an input error, I'm all for it.
This is well and truly the highest scoring user request with currently 825 kudos. It's also very easy for NI to implement. Why isn't it just done?
Yes, this is the most popular idea. I have been given reasons for it not being done, but I don't exactly buy them. Anybody who makes the arguments they gave obviously hasn't done any instrument communications, where you need to wait immediately after sending a command to give the instrument time to respond.
Just keep up the pressure and I'm sure they will buckle soon enough.
I really don't see why we can't have both versions. The addition of error clusters causes problems in some scenarios, but I would wager that 95% of the uses of this primitive would support error in and out quite well.
National Instruments is declining this idea.
At 826 Kudos as of this post, it is the highest ranked idea on the Idea Exchange. Rejecting it is not something we are doing lightly. It has received attention from the R&D team during two versions of LabVIEW as we have evaluated the impact of this change. The final decision is to leave it unchanged.
There were ardent supporters of the idea within R&D. All of us are now in agreement to not change this node. We know we are going to receive criticism for this decision. We expect that many of our users will reject the reasons given in this post because they will weigh the priorities differently than we did. The goal of this post is not necessarily to talk you around to our position. Instead, we only hope to show that we have given this due consideration and that we are trying to do right by our customers, even if we ultimately decide not to implement the request.
The idea is actually two separate ideas, although this may not be immediately obvious. The first is for a way to control the sequencing of the Wait node. The second is for a way to perform a conditional Wait.
The conditional Wait was the easier of the two problems to solve. Based on the comments here and conversations with customers, it was clear that we needed a node that could be configured as either "Always Wait" or "Skip Wait On Error". We created several versions of this node, looking at different ways to present the configuration. The easiest option was to simply replace the primitive in the palettes with a PolyVI that selected between two inlined VIs, one with a case structure and one without. We also investigated creating a new configurable primitive. One way or another, this issue seemed solvable.
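Since LabVIEW is graphical, the two proposed flavors are easiest to show here as a hedged textual analogue. This Python sketch is purely illustrative (the function names and the modeling of an error cluster as an optional exception are my own, not LabVIEW API): it shows how "Always Wait" and "Skip Wait On Error" would differ when an error arrives on the input.

```python
import time

# Illustrative analogue of the two proposed Wait (ms) behaviors.
# An "error cluster" is modeled as an optional Exception instance;
# both variants pass the incoming error through unchanged.

def wait_always(ms, error_in=None):
    """'Always Wait': delay unconditionally, then pass the error through."""
    time.sleep(ms / 1000.0)
    return error_in

def wait_skip_on_error(ms, error_in=None):
    """'Skip Wait On Error': delay only when no error is incoming."""
    if error_in is None:
        time.sleep(ms / 1000.0)
    return error_in

# With an incoming error, the skip variant returns immediately,
# passing the error through without the delay.
err = wait_skip_on_error(50, error_in=RuntimeError("upstream failure"))
```

The case-structure-wrapped VI mentioned above corresponds to `wait_skip_on_error`; the plain primitive with pass-through error wires corresponds to `wait_always`.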
We ran into issues with the sequential Wait behavior.
The current recommendation from National Instruments is "add a sequence structure around your Wait primitive to sequence it." This extra work is exactly what many of you do not like doing and why this idea has so many kudos. But it turns out that this extra work has one benefit -- you don't do it unless you have to.
There is a pattern of usage within LabVIEW that we do not wish to disrupt: the use of Wait Milliseconds to throttle the iterations of a While Loop. When you place a Wait Milliseconds node inside a While Loop, you get very different behavior depending upon whether you run it in parallel or in sequence with the rest of the loop code. The generally desired behavior is for it to run in parallel. Many users, however, wire their error inputs. It is reflexive, emphatic even. R&D observes this time and again. If we put an "error in" terminal on the Wait Milliseconds primitive, we believe (based on evidence) that users will almost always wire it up, resulting in generally undesirable loop behavior. Telling them to leave it unwired is like telling water to flow uphill. It is unnatural. Even worse, if it has an error out, they need to wire that up. "What if it causes an error?" We observe that they add a lot of unneeded code to deal with an error that will never come. This is part of a larger conversation, ongoing both within R&D and with our customers, about not putting error terminals on purely functional nodes that cannot generate errors.
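The parallel-versus-serialized distinction above has a concrete timing consequence that a small sketch can make precise. This Python analogue (my own illustration, not LabVIEW code) models the loop body's work and the Wait as sleeps: run in parallel, each iteration takes roughly max(work, wait); wired in sequence via an error wire, it takes work + wait, slowing the loop.

```python
import threading
import time

# Illustrative sketch: how serializing the Wait against the loop body
# (e.g. by wiring a hypothetical error terminal) changes the loop period.

def loop_with_parallel_wait(iterations, work_s, wait_s):
    """Wait runs concurrently with the work, as an unwired Wait does in
    a While Loop: each iteration lasts about max(work_s, wait_s)."""
    start = time.monotonic()
    for _ in range(iterations):
        waiter = threading.Thread(target=time.sleep, args=(wait_s,))
        waiter.start()
        time.sleep(work_s)   # the loop body's "real" work
        waiter.join()        # the iteration ends when both finish
    return time.monotonic() - start

def loop_with_serialized_wait(iterations, work_s, wait_s):
    """Wait forced to run after the work: each iteration lasts about
    work_s + wait_s, halving the loop rate when the two are equal."""
    start = time.monotonic()
    for _ in range(iterations):
        time.sleep(work_s)
        time.sleep(wait_s)
    return time.monotonic() - start
```

With 50 ms of work and a 50 ms wait, the parallel version runs near 20 Hz while the serialized version drops to roughly 10 Hz, which is the "generally undesirable loop behavior" described above.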
When a node takes a refnum as input, it generally can generate at least one error: "refnum is invalid". It also generally needs serialization against other nodes that use refnums (programs that use refnums rarely use just one). Neither of those necessarily applies to the Wait Ms node. It is frequently used alongside pure dataflow code to control the rate of execution. It cannot generate errors of its own. In our opinion, that makes it a node that should not needlessly encourage serialization. Requiring the sequence structure around the Wait primitive discourages its serialization to a degree that seems, in our observation, to be appropriate.
So, after much observation and exploration, we felt that we could not improve usage for one set of users without hurting usage for another set of users, and we felt that the status quo was the lower pain point. For that reason, we are rejecting this idea.