I suggest adding an option to the Open VI Reference primitive that opens the VI reference without loading any of its dependencies. I suggest option bit 10, i.e. option 0x200:
The demand for this arises when you want to access methods and properties of VIs that may be broken, while having no need to run those VIs - for instance in one of my current tools (BatchEditor), where I'm tasked with investigating hundreds of VIs simultaneously, some of which could be broken or missing dependencies. Another situation would be tools for traversing project trees. Opening a large number of healthy VI references takes a while, and when something is broken in a VI, opening even a single reference can take 30 seconds to minutes.
Currently you can suppress the "loading" and "searching" dialogs by setting option 0x20 on the Open VI Reference primitive, but that only loads the refnum silently, as far as that gets you. Opening the refnum takes the same amount of time as if you could see those dialogs, and you are rewarded with an explorer window if a dependency search fails. I just want a way to open a VI refnum without even starting to look for dependencies - thus very quickly and guaranteed silently.
The relevant people will know that this request isn't that hard to implement, as it is kind of already possible using some ninja tricks. I'd like such an option to be public.
There is something wrong with this VI, although you wouldn't know it unless you ran it (and I should warn you that it will annoy you if you run it):
What's wrong with it is that auto grow has been disabled and there's some annoying code hidden beyond the loop boundary. This is one of my least favorite things about LV - it allows us to hide code completely and get away with it. I don't think it should.
LV already has auto grow enabled by default to handle some of the cases which cause this issue, but I know that many, many people don't like it when LV automatically plays with their BD layout (and rightly so), and auto grow only covers some of the cases. I think we need something more encompassing and less obtrusive, so I would suggest breaking the VI if it has hidden code.
I also know that LV has warnings and VI Analyzer has tests for this, but I think this needs to be something which doesn't let you get away with it.
I think LV should break any VI which has any of the following:
We're witnessing more and more requests to stop LV from hiding important information from us. In one direction, we want to be able to know (and some want to break the code) if structures are hiding code.
Others want LV primitives to give visual feedback as to how they are configured, especially if that configuration can have an effect on what's being executed or how it's executed.
Examples include (please feel free to add more in the comments below):
Array to cluster (Cluster size hidden)
Boolean array to number (Sign mode hidden)
FXP simple Math (Rounding, saturation and output type hidden)
SubVI node setup (when right-clicking the subVI on the BD and changing its properties - show FP when run, suspend, and so on)
SubVI settings in general (subroutine priority, debugging)
I know there are already ideas out there for most of these (I simply chose examples to link to here - I don't mean to leave anyone's ideas out on purpose), but instead of targeting the individual neuralgic points where we have problems, I would like NI R&D to acknowledge the idea behind most of these problems: hiding information from us regarding important differences in code execution is a bad thing. (Some of the ideas go much further than simply not hiding the information, and I have given most of them kudos.) I don't mean to steal anyone's thunder; I only decided to post this because of the apparently large number of ideas which have this basic idea at heart. While many of those go further and want additional action taken (most of which is good and should be implemented), I feel the underlying idea should not be ignored, even if all of the otherwise proposed changes are deemed unsuitable.
My idea boils down to this: ALL execution-relevant information which is directly applicable to the BD in view should also be VISIBLE on the BD.
As a disclaimer, I deem factors such as FIFO size and queue size to be extraneous factors which can be externally influenced, and thus they do not belong under this idea.
Example: I have some oscilloscope code running on FPGA and had the weirdest of problems, where communications worked fine up to (but not including) 524288 = 2^19 data points. As it turns out, a single "Boolean array to number" was set to convert the sign of the input number, which turned out to be completely wrong. I don't know where that came from; maybe I copied the primitive when writing the code and forgot to set it correctly. My point is that it took me upwards of half a day to track down this problem, due to the sheer number of possible error sources in my code (it's really complicated stuff in total) and having NO VISUAL CLUE as to what was wrong. Had there been SOME kind of visual clue as to the configuration of this node, I would have found the problem much earlier and would be a more productive programmer. Should I have set the properties correctly when writing the code initially? Sure, but as LV projects grow in complexity these kinds of things become very burdensome.
I am probably the only one using extended-precision numbers, considering the feedback on these requests:
but so be it.
One other area where LabVIEW ignores extended precision is wire values in debug mode. To illustrate what I am talking about, consider this snapshot of a debugging session:
The result of my modified Bessel calculation (that reminds me, I haven't suggested implementing special function calculation in extended mode...) returns a perfectly valid extended-precision number, such as 5.03E+418, but LabVIEW doesn't recognise this as a valid value and returns an "Inf" value (which would be the correct reaction only if the wire could display just double-precision floating-point values).
This wire is connected to the logarithm primitive, which happens to be polymorphic and hence accepts the extended type. The result is the correct logarithm of 5.03E+418, i.e. 964.15.
On the face of it, though, it appears that the output of my VI is +Inf, and that LV went wahoo and estimated an arbitrary value for log(Inf)...
My code actually stores such values in shift registers, so when I debug problems with the code, I have 3 or 4 wires carrying an "Inf" value - which, when I am trying to understand the reason for an overflow problem, is not exactly helpful.
Suggestion: display Extended Precision wire values correctly in debug mode.
After looking at the problem encountered here, it turns out that LabVIEW seems to make some insane choices when mixing a waveform with simple datatypes. Some behavior is good and intuitive. For example multiplying a waveform with a scalar applies the multiplication to the Y component only, because it would not make sense to e.g. also multiply the t0 or dt values.
It is less clear what should happen when multiplying a waveform with an array. Intuitively, one would expect something similar to the above, where the Y component is multiplied by the array. Unfortunately, LabVIEW chooses something else: it creates a huge array of waveforms, one for each element in the array (as if wrapping a FOR loop around it; see image). If the waveform and the array both have thousands of elements, we can easily blow the lid off all available memory, as in the quoted case. Pop! But the code looks so innocent!
I suggest that operations mixing waveform and simple datatypes (scalars, arrays) simply act on the Y component as shown.
(not sure how much existing code this would break, but it actually might fix some existing code!!! )
Thus, there would be no need to add an "EXIT" VI, and no need to check whether the VI is in run-time mode or development mode...
(for big applications, it could be when no VIs are running...)
For example, add an option in the Application Builder:
>> Close the run-time engine when the application has completed...
Format Into Text is very useful but can become hard to edit when it has a lot of inputs. I propose that, instead of one huge format string, the programmer be allowed to put the required format next to the corresponding input. Also, the user should be allowed to enter constant strings, e.g. \n, \t, or "Comment", and have the corresponding input field automatically grayed out.
Auto-indexing of arrays in For and While Loops is a nice luxury in LabVIEW. One option that could save much time would be a menu option to turn on conditional indexing; this would expose a boolean terminal under the auto-index icon to select whether the current iteration should add its value to the array or skip it. From an execution standpoint there would only be a minor performance hit (the compiler could still preallocate the maximum array size in For Loops and automatically return the used subset). This could also work for auto-indexed inputs, but would have less use than the auto-indexed output case. I have built many conditional arrays inside For Loops, and doing so requires a Case structure and a Build Array, making the code less readable and requiring time and thought. It can also be less efficient than what a compiler could do.
See the example below, which runs a For Loop and only builds an array of the values < 0.1.
Dear LabVIEW fans,
I'm a physics student who uses LabVIEW for measurement and also for evaluation of data. I've been a fan since version 6i (around 2005).
My typical experimental setup looks like this: a lot of different wires going to every corner of the lab, left to collect gigabytes of measurement data overnight. Sometimes I do physics simulation in LabVIEW, too. So I really depend on gigaflops.
I know that there is already an idea for adding CUDA support. But not all of us have an NVIDIA GPU. Typically, at least in our lab, we have Intel i5 CPUs, and some machines have a minimalist AMD graphics card (others just have integrated graphics).
So, as I was interested in getting more flops, I wrote an OpenCL DLL wrapper, and (doing a naive Mandelbrot-set calculation for testing) I realized a 10× speed-up on the CPU and a 100× speed-up on the gamer GPU of my home PC (compared to a simple, multi-threaded LabVIEW implementation using parallel For Loops). Now I'm using this for my projects.
What's my idea:
-Give an option to those who don't have a CUDA-capable device, and/or who want their app to run on any class of computing device.
-It has to be really easy to use (I have been struggling with C++ syntax and Khronos OpenCL specification for almost 2 years in my free time to get my dll working...)
-It has to be easy to debug (for example, it has to give human-readable, meaningful error messages instead of crashing LabVIEW or causing a BSOD)
Implemented so far, by me, for testing the idea:
-Get information on the DLL (e.g.: "compiled by AMD's APP SDK on 7th August, 2013, 64-bit", or the like)
1. Select the preferred OpenCL platform and device (Fall back to any platform & CL_DEVICE_TYPE_ALL if not found)
2. Get all properties of the device (clGetDeviceInfo)
3. Create a context & a command queue,
4. Compile and build OpenCL kernel source code
5. Give all details back to the user as a string (even if everything succeeded...)
-Read and write memory buffers (like GPU memory)
So far, only blocking read and blocking write are implemented; I had some bugs with the non-blocking calls.
(again, report details to the user as a string)
-Execute a kernel on the selected arrays of data
(again, report details to the user as a string)
-Release everything, free up memory, etc. (again, report details to the user as a string)
Approximate results, for your motivation (Mandelbrot-set testing, single precision only so far):
10 GFLOPS on a Core 2 Duo (my office PC)
16 GFLOPS on a 6-core AMD X6 1055T
typ. 50 GFLOPS on an Intel i5
180 GFLOPS on an NVIDIA GTS 450 graphics card
70 GFLOPS on an EVGA SR-2 with two Xeon L5638s (that's 24 cores)
520 GFLOPS on a Tesla C2050
(The numbers above are my results; the manufacturers' spec sheets may claim many more theoretical flops. When selecting your device, take memory bandwidth into account, as well as the kind of parallelism in your code. Some devices dislike conditional branches, and the Mandelbrot-set test has conditional branches.)
Sorry for my bad English, I'm Hungarian.
I'm planning to give my code away, but I still have to clean it up and remove the non-English comments...
It is time to put a dent in the floating point "problems" encountered by many in LV. Due to the (not so?) well-known limitations of floating point representations, comparisons can often lead to surprising results. I propose a new configuration for the comparison functions when floats are involved, call it "Compare Floats" or otherwise. When selected, I suggest that Equals? becomes "Almost Equal?" and the icon changes to the approximately equal sign. EqualToZero could be AlmostEqualToZero, again with appropriate icon changes. GreaterThanorAlmostEqual, etc.
I do not think these need to be new functions on the palette, just a configuration option (Comparison Mode). They should expose a couple of terminals for options so we can control what close means (# of sig figs, # digits, absolute difference, etc.) with reasonable defaults so most cases we do not have to worry about it. We get all of the ease and polymorphism that comes with the built-in functions.
There are many ways to do this, I won't be so bold as to specify which way to go. I am confident that any reasonable method would be a vast improvement over the current method which is hope that you are never bitten by Equals?.
When creating an installer for my built LabVIEW application, I really dislike having to choose between including the RTE installer (and having a 100+ MB installer for my application) or not including it (and requiring my users to download and install the RTE as a separate step). Typically, I'll build two installers at the same time (with roughly duplicate build settings): a full installer w/ RTE and a light installer w/out the RTE.
What would be much nicer would be if my app's installer were able to download and install the RTE, if necessary. This is actually common practice these days: users download a small installer that then downloads the larger installer files behind the scenes.
I propose that Case Selectors should accept any type of reference, and the two cases generated are "Valid Ref" and "Invalid Ref". (This would be very similar to the current behavior of the Case Selector accepting errors with the two cases of "Error" and "No Error".)
The current behavior using "Not a Number/Path/Refnum" is very unintuitive. It requires the programmer to use Not Logic (i.e., do something if the reference is "not not valid").
There are a lot of issues that arise because you can't register for events using control references when a VI's front panel is not open. Here is a practical example that illustrates a bunch of the limitations:
The problem, however, is that we can't register for such FP events when the FP is not open. So that means we have to have an "Open FP" state in our simple state machines (if only there was an FP.Open event! but that's a separate idea...), and register for these events on first call in that state.
Plus, since we want a single event case to handle all those indicator refnums, we would have to reregister all refnums each time a new window is opened (as mentioned above). But that means if you registered events for a window, then closed it, then opened a different window... when you open the second window, the re-registration of events from the first will throw an error because its FP is no longer open.
Confusing, yes. Hope that gets the point across.
Our workaround is that we have a separate daemon instance dedicated to each window. Not ideal, but works for now. But there are other similar problems that have required more complicated workarounds. Would be best if the root cause were addressed.
The size of an NI Packed Library is too large because it includes so much graphical programming information, such as front panels. In our view, this is not necessary when a VI is used as a function rather than as a UI. But the 'Remove front panel' option in the Source File Settings configuration dialog is disabled, so the user cannot select it. In our opinion, LabVIEW should allow users to decide for each VI whether to include its front panel when building a Packed Library.
Project -> Build -> Packed Library Properties:
I think the Array Element Gap should be sizable. This would facilitate lining up FP arrays with other items on the FP, or simply as a mechanism to add more apparent delineation between elements.
The size should be set in the Properties box, not by dragging the element gap with the mouse - that would add too much "cursor noise".
A new Property Node entry for this feature would complete the idea.
LabVIEW saves configuration data, including Recent Files and Recent Projects, in a single LabVIEW.ini file, by default located in National Instruments\LabVIEW <Version>. If more than one user logs onto (and uses) LabVIEW, this "sharing" of the configuration - particularly the path names to files - will probably point to the "wrong" place, as those paths default to a user's <My Documents>\LabVIEW Data. Note that the NI function "Default Data Directory" will correctly point to the current user's <My Documents>\LabVIEW Data, but there is no guarantee that the (single) LabVIEW.ini file will be correct for all users.
A simple fix is to save LabVIEW.ini in the user's profile. I notice that there is a National Instruments folder in AppData\Local - this is one place that could be used. Then if a second user logs on to a PC, he or she will have a unique set of saved files/folders in the configuration file, one that references files in the appropriate <My Documents> folder.
I have a base typedef (e.g., a lvclass) that is used to create many different user events (see Create User Event). If many registrations (see Register for Events) are done where the base typedef is the same, the only way to tell which event is which in an Event Structure is by the order of the inputs to Register for Events and/or the anonymous bundling of multiple Register for Events.
I would like to dynamically 'name' events when they are created with Create User Event and then to call them out specifically in the Event Structure.