I am struggling with my Event Structure event list and the corresponding list of cases in the parallel consumer loop Case Structure.
Both currently have over 100 cases each, and finding a case or scrolling down to reach the latest one has become painful due to the lack of a scrollbar in these lists.
For instance, here is the Event Structure list:
Same goes for the list of controls in a Local Variable (and other objects, I am sure).
There is no reason why such lists should lack a vertical scrollbar when the corresponding list for an Enum does have one:
Or is there?
Suggestion: All long pulldown lists should have a vertical scrollbar
99% of the time when I use a Diagram Disable Structure, I am disabling code with an error cluster wired through. I don't want to lose the errors coming in, just the single operation, so I manually wire the error cluster through the empty case each time.
I've talked to others in the past about this, and it would be nice for LabVIEW to be all-knowing and wire through all matching datatypes, but that would definitely lead to conflicts and mistakes. Error clusters, on the other hand, are simple and are nearly always a single wire in and out.
Simply auto-wiring the error cluster input to the output would make the debugging process much easier.
Code with disabled operation:
When you have no run-time engine installed, a popup window appears and offers to download the latest RTE from the NI website.
My question is: can this popup window be redirected to some other location, such as a network folder, where I can put the downloaded RTE files?
I believe that would definitely be more suitable for company users, who may need to install hundreds of RTEs; it is really counterproductive to download them from the Internet every time.
It would be useful to have something like a referenced comment. You can place this comment in the block diagrams of several VIs of a project, and by editing one instance of this comment, it will change all instances at one time.
The comment describes the channel list of an application:
AI_00: Torque [Nm]
AI_01: Pressure [bar]
AI_02: ValvePosition [%]
You place this comment in the DAQ-VI. But it would be helpful to have the identical comment in the MeasFile-VI and/or in the VI that combines the channel and scaling information into a 2D string array, so you can present all this information together (e.g. in a multi-column listbox or something else).
When you later add new channels, it is annoying to edit all these comments one by one; something like a 'comment type def' would be a practical solution.
LV currently allows you to swap wire positions (on BD) or terminals on the connector pane just by clicking on connection point while holding Ctrl key.
So why not implement this for the Case Selector Terminal as well?
Then you would be able to wire Case Selector Terminal in two clicks:
Of course, the swapping cursor would only appear if you select a terminal that is considered a compatible/valid input for the Case Selector Terminal (based on automatic validation, which would exclude arrays, clusters, images, references, and so on).
Okay, poor title but I can't summarise any better than that at the moment.
So I'm sure many of us use the old trick of creating a re-entrant VI with a loop structure inside (one that usually executes only one cycle per call), which lets us store persistent (static) variables within the VI between calls. The problem is that all the headaches of a re-entrant VI then arise, such as not being able to run 'with the lightbulb on' to debug.
Most of the time, I only need a single instance of this VI, so how about an alternative to re-entrancy like 'allow statics' that gives me the stored info but only allows a single instance to be used, and gives me the ability to run 'with the lightbulb'?
Of course, I have made an assumption that limiting to a single instance would make this feasible without major architectural modifications. If not then this is one for the scrapheap.
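For readers more used to text languages: what the re-entrant-VI trick emulates is a function with persistent local state, like an uninitialized shift register. A minimal Python sketch of the idea (the function and attribute names are mine, purely illustrative):

```python
# Sketch of "static" (persistent) state inside a single, non-reentrant
# function -- analogous to an uninitialized shift register in a VI.
def counter():
    """Return how many times this function has been called."""
    if not hasattr(counter, "count"):   # first call: initialize the static
        counter.count = 0
    counter.count += 1                  # state survives between calls
    return counter.count

counter()  # -> 1
counter()  # -> 2
```

The single shared state is exactly what the 'allow statics' suggestion asks for: one instance, persistent data, and no loss of debuggability.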
While working on a project utilizing an NI cDAQ-9184 Chassis, I discovered the following problems.
1. There is an undocumented "heartbeat" protocol between the remote chassis and the host computer.
2. There is no documentation that explains how this "heartbeat" works.
3. There is no specification for how long the heartbeat is lost before the cDAQ Chassis goes into the "lost heartbeat" mode.
4. There is no way to poll the cDAQ Chassis to discover if it lost the "heartbeat" since the last time you accessed the Chassis. This is very important if you have a long time between data reads.
5. Once the "heartbeat" has been lost, the chassis essentially unlinks the 'module side' from the 'ethernet side'. There is no way to poll the Chassis to find out if it is in this state or not.
6. If you give the chassis a 'Self Test.vi' command, a Chassis will normally respond in only a couple of hundred milliseconds. If you give the same command to a Chassis that has lost the heartbeat, it will "re-link" the Module side of the chassis with the Ethernet side of the Chassis. However, this takes several seconds, which is undocumented, and the "Self Test.vi" returns no information telling you that a "Re-Link" occurred.
7. We have no way of knowing if the Chassis experienced a power loss.
8. I received no definitive answer if the modules in the cDAQ-9184 Chassis retain their configuration if power is lost to the Chassis.
Here is the main problem. If you set up the cDAQ-9184 for an acquisition, and experience a "Lost Heartbeat", the Chassis returns an ERROR when a read command is executed. That error gives you insufficient information to know what has happened with the Chassis. At that point, you are reduced to reconfiguring the cDAQ Chassis.
Suggested new software tools:
A. A "new PING" tool in Functions>Measurement I/O>DAQmx-Data Acquisition>Advanced>System Setup. This would allow a user to "ping" the IP address of a remote chassis to ensure the physical connection is established and that power is ON at the remote Chassis.
B. A new tool in Functions>Measurement I/O>DAQmx-Data Acquisition>Advanced>System Setup, most likely a modification to the Device Node. We need the ability to ask the Chassis whether it has lost the "heartbeat".
C(a). A new tool that would instruct the Chassis to "Re-Link" if a loss of heartbeat occurred.
C(b). A modification to the Measurement I/O>DAQmx-Data Acquisition>Device Configuration>"Self Test.vi" that would return a response if the Chassis "re-linked" the Module side with the Ethernet side.
D. A register that is set "High" upon power-up. The user would be able to set the register "Low". At any time, the user could poll the Chassis for a loss-of-power occurrence.
Example: I graph a temperature input, using auto-scale on Y. The end customer complains that the temperature suddenly started rising at the wrong time. I try (in vain) to explain that the rise was only 0.01 degrees, but the scale on the graph expands it to fill the screen. Then someone else comes in, and we repeat the conversation.
What I'd like is a feature to keep auto-scale, but set the minimum span. For example, set the minimum span to 10 degrees:
The story is similar for integer charts. Too often, a value sits at the bottom of the chart, or at the top, and all you can easily see are vertical lines.
With an auto-scale-min-span of 1.1, it would look better:
I use a lot of very generic programming, so I may not know in advance whether a minimum span of 1.1 makes sense, e.g. for Boolean values. For that case, it would be great to have a feature that makes the auto-scale go X% beyond the min and max values; here, 5% would give the results above.
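The two rules above (a minimum span, plus an X% margin beyond the data's min and max) are simple arithmetic. A hedged Python sketch of how such an autoscale might compute its bounds (the function name and parameters are mine, not an existing LabVIEW property):

```python
def autoscale_bounds(data, min_span=10.0, margin_pct=5.0):
    """Auto-scale to the data, but never let the span drop below
    min_span, and pad by margin_pct beyond the data's min and max."""
    lo, hi = min(data), max(data)
    margin = (hi - lo) * margin_pct / 100.0   # X% beyond min & max
    lo, hi = lo - margin, hi + margin
    if hi - lo < min_span:                    # enforce the minimum span,
        center = (lo + hi) / 2.0              # widening symmetrically
        lo, hi = center - min_span / 2.0, center + min_span / 2.0
    return lo, hi

# A 0.01-degree rise no longer fills the screen:
autoscale_bounds([25.00, 25.01], min_span=10.0)   # span stays 10 degrees
```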
One more feature (one I've written programmatically, and it's a MAJOR pain): set auto-scale to only scale up if the data is less than 50% of the current span, and scale down as needed (but over-shoot to anticipate more scaling). It's irritating to watch a graph constantly scaling up and down, especially an XY graph.
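That "only rescale when the data has clearly outgrown or clearly stopped filling the current span" behavior is a simple hysteresis rule. A Python sketch of one possible version (the 50% threshold comes from the description above; the overshoot factor is my assumption):

```python
def update_scale(cur_lo, cur_hi, data, shrink_below=0.5, overshoot=0.2):
    """Keep the current scale unless the data leaves it (then expand,
    with overshoot to anticipate growth) or fills less than
    shrink_below of the span (then shrink). Avoids constant flicker."""
    d_lo, d_hi = min(data), max(data)
    span = cur_hi - cur_lo
    pad = (d_hi - d_lo) * overshoot
    if d_lo < cur_lo or d_hi > cur_hi:        # data grew out of range
        return d_lo - pad, d_hi + pad
    if (d_hi - d_lo) < shrink_below * span:   # data fills < 50% of span
        return d_lo - pad, d_hi + pad
    return cur_lo, cur_hi                     # stable: no rescale
```

With this rule, data that stays between 50% and 100% of the current span never triggers a rescale, which is what stops the graph from constantly pumping up and down.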
And finally (I've also done this programmatically, and it would be tough to make automatic): lock several chart scales together. If an operator changes scale on one XY graph, the others change to match.
So building off of this (Very-simple-improvement-on-Block-diagram-clean-up-
I am making VI's like this:
They have many single frames floating around, some nested. Nearly none are sequential. Inside each frame is a complete "logical unit".
I find that when I make my whole code like this, candidates for being converted to Sub-VI are obvious. They have more wires going in, more stuff going on, nesting, and it is all in the same box.
A good example of what they look like when done is this (thanks X.):
Often I don't know what the "done" version looks like until I work through the bugs, requirements, refinements, and unhappy-paths of the "rough" version.
I want the graph to display a fixed scale unless values go outside the scale min or max, and then autoscale, but only in the direction in which the scale bounds were crossed.
Normally want graph to display X scale 0 to 10 to display to user:
If I set the same graph to autoscale, I get the following graph, which the user could interpret as the values swinging all over the place; but this could just be noise, and I do not want to display it in this format:
So I want a solution that combines manual scale and autoscale by autoscaling only after a scale limit is exceeded. Assume a data point of 13 arrives, above the max scale range of 10; the graph would do a single autoscale only in the upward direction, changing the max to 13.
This would be a Graph Scales property, with the option disabled if Autoscale is selected.
I know I can use property nodes to do this programmatically in my program, but it is much more involved: I have to constantly check whether values have gone outside the range and then issue a single autoscale.
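The requested behavior amounts to a one-sided, expand-only scale: start from the preferred bounds and move a bound only when data crosses it, only in that direction. A Python sketch of the rule (the function is mine, not an existing property):

```python
def one_sided_autoscale(pref_lo, pref_hi, data):
    """Show the preferred scale, expanding a bound only when the data
    crosses it, and only in the direction it was crossed."""
    lo = min(pref_lo, min(data))   # expand downward only if needed
    hi = max(pref_hi, max(data))   # expand upward only if needed
    return lo, hi

one_sided_autoscale(0, 10, [2, 5, 13])  # -> (0, 13): only the max expands
```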
An RT program can be run either from a host PC (what I call "interpreter mode") or as an exe in the startup directory on the RT controller. When running from the host PC (for debugging purposes), front panel property nodes execute properly, as you would expect. After building and transferring the RT app to the startup directory on the RT controller, the program errors out on the first occurrence of a front panel property node. The reason is obvious: a front panel is non-existent in an RT application, so front panel property nodes are rejected. Of note, no errors or warnings are generated during the RT app build operation.
Recommend that the build application simply ignore the front panel property nodes as it ignores the front panel in general. This would allow the programmer to retain the same version of the source code for either mode of operation.
When selecting block diagram items from the "search results" screen, the resulting highlighted area on the block diagram always seems to land at the extreme edges of the screen. This requires extra time scanning all four corners of the screen for the item. I recommend that the "found" item always be positioned in the exact center of the screen to eliminate this issue.
A pretty simple request that would help to save some time and diagram space.
I would like to see initialize array have a size input that also accepts arrays when defining the dimension.
I find that when working with images or 2D arrays and doing some manipulation, I often end up with Array Size wired into Index Array wired into Initialize Array.
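For comparison, here is what the requested behavior looks like in a text language: a single "sizes array" input producing an N-dimensional array in one step, instead of indexing out each dimension size separately. A hedged Python sketch (the function name is mine, chosen to mirror the LabVIEW primitive):

```python
def initialize_array(element, sizes):
    """Initialize Array whose dimension-size input accepts a whole
    array of sizes (the requested feature), rather than one scalar
    wired per dimension."""
    if not sizes:
        return element
    # Build the outer dimension, recursing for the inner ones.
    return [initialize_array(element, sizes[1:]) for _ in range(sizes[0])]

grid = initialize_array(0, [2, 3])   # -> [[0, 0, 0], [0, 0, 0]]
```

With Initialize Array accepting a size array directly, the Array Size output of one array could feed it with a single wire.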
This is my 2nd day on LabVIEW, and it's pretty frustrating. How are we supposed to see what we're doing? There are tiny little icons and connections, and I can't even hit Ctrl+scroll up to zoom. I literally get a headache after 1-3 hours. For other reasons, I can't change the resolution on my screen either.
How is this intended? If it's not intended and there's a legitimate reason, then a huge popup should come up at the beginning and explain the reason because clearly others feel it's a bad design and the proper way to solve this problem isn't intuitive. Ctrl+Shift+N doesn't help -- it moves already small icons around the screen and they remain the same size.
LabVIEW sounded cool but now I'm just dreading it for the smallest, yet dealbreaking, reasons.