LabVIEW Idea Exchange

It would be nice if we could copy the changes from one VI to another while comparing two VIs.
Add CUDA support for analysis, computation, and vision.

There is inconsistency in the supported image types across many IMAQ VIs:

 

For instance, IMAQ ROIProfile, IMAQ LineProfile, and IMAQ LinearAverages support U8, I16, and SGL images but do not support U16 images.

 

But a profile along a line drawn in a U16 image should make as much sense as in a U8 image, shouldn't it? So why is it not supported?

 

In addition, I would expect it to be easy to add a polymorphic U16 instance.

 

These are just a few of several examples of similar inconsistencies in the IMAQ VIs.

 

 


I would like to see a LabVIEW implementation of the MATLAB function that approximates a floating-point number as a ratio of two integers, within a given tolerance.

From Matlab help:

 RAT    Rational approximation.
    [N,D] = RAT(X,tol) returns two integer matrices so that N./D
    is close to X in the sense that abs(N./D - X) <= tol*abs(X).
    The rational approximations are generated by truncating continued
    fraction expansions.   tol = 1.e-6*norm(X(:),1) is the default.
 
    S = RAT(X) or RAT(X,tol) returns the continued fraction
    representation as a string.
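
For reference, here is a minimal text-language sketch of the truncated continued-fraction approach described above (Python; the function name, tolerance handling, and loop structure are illustrative assumptions, not MATLAB's exact implementation):

def rat(x, tol=1e-6):
    """Approximate x by n/d using a truncated continued fraction,
    stopping once abs(n/d - x) <= tol*abs(x)."""
    if x == 0:
        return 0, 1
    n_prev, d_prev = 1, 0            # previous convergent
    n, d = int(x // 1), 1            # first convergent: floor(x)/1
    frac = x - (x // 1)
    while frac != 0 and abs(n / d - x) > tol * abs(x):
        a = 1.0 / frac               # next continued-fraction term
        ai = int(a // 1)
        frac = a - ai
        n, n_prev = ai * n + n_prev, n
        d, d_prev = ai * d + d_prev, d
    return n, d

print(rat(3.14159265358979))         # (355, 113) within the default tolerance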

 

Some discussion and example code of how to implement in LabVIEW is here: http://forums.ni.com/ni/board/message?board.id=170&thread.id=18012&view=by_date_ascending&page=2

 

Is there any reason why backwards compatibility couldn't be added to the LabVIEW datalog access VIs? 

 

Why does a LabVIEW 8.x (or 7, or 6, or 5, or 4, or 3) datalog file NEED to be converted to the LabVIEW 2009 datalog format?

 Shouldn't it be optional?

 

 

I have around 8 terabytes of data in LabVIEW 7 and 8 formats.  Switching to version 2009 will be very painful (as were the switches to previous versions), in that it requires me to convert each datalog file.

 

Yes, I know about SilentDatalogConvert, and yes, I could write a simple program to churn through all my data files. But that much data would take weeks or months of continuous chugging, which seems silly.

 

I'd even settle for backwards compatibility with caveats - read-only for example.

 

The cluster type of my datalog files hasn't changed in 15 years. Maybe do a cluster type check first to determine whether the format really requires an update, but even then allow access without conversion.

If you have messy "future" or "obsolete but we are being risk-averse and keeping it around" code in the Disabled state of a Diagram Disable structure, VI Analyzer reports test failures for such code, in my opinion cluttering results.  It would be great to have an option, perhaps in the "Select Tests" page of the setup wizard, to ignore any such code.

We have a Round towards -Infinity  (3.8 becomes 3,  -3.8 becomes -4)

We have a Round towards +Infinity (3.2 becomes 4, -3.2 becomes 3)

We have a Round to Nearest (rounds up or down to the nearest integer; on a .5 tie, banker's rounding to the nearest even integer).

 

Why is there no Round towards Zero?  Basically a truncate.  (3.2 becomes 3, -3.2 becomes -3)

I have a use for that right now, but it takes several primitives to work. 

 

As a corollary, a Round Away from Zero.  (3.2 becomes 4, -3.2 becomes -4)
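
Until such primitives exist, these take several nodes on the diagram; for reference, a minimal text-language sketch of the desired semantics (Python, purely illustrative):

import math

def round_toward_zero(x):
    return math.trunc(x)                              # 3.2 -> 3, -3.2 -> -3

def round_away_from_zero(x):
    return math.ceil(x) if x >= 0 else math.floor(x)  # 3.2 -> 4, -3.2 -> -4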

 

 


To make your code reusable as subVIs and use less connector-pane space, a cluster is the way to go, but you need to define the types inside the cluster up front.

If you want to add another item to this cluster, you have to rewire all the subVIs this cluster is wired to... This is at least annoying (or dull, boring, dreadful, tedious, dreary, tiresome, aggravating, exasperating, irritating).

 

If you could add new variables of any type to this "Dynamic Cluster Array" on the fly, you could expand the clusters whenever needed!

 

Example of a "Dynamic Cluster Array":

 

Dynamic_Cluster_Array_example.png

Also look at the Search 1D Array node to see how you can select a cluster item from the array.

 

You can read and write these clusters like normal clusters, but if you read or write a variable that does not exist, it is added.

Reading or writing such a variable adds it to all clusters in the array; the new variable gets the default value until your software changes it.

 

This behavior is the same as structures in MATLAB.
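
For reference, a rough text-language analogue of the requested behavior, where an unknown field is created with a default value instead of raising an error (Python; the class and field names are hypothetical):

class DynamicCluster(dict):
    """Cluster-like record: reading a field that does not exist
    silently creates it with a default value."""
    def __missing__(self, key):
        self[key] = 0.0                 # default value for a newly added field
        return self[key]

channels = [DynamicCluster(name="ch%d" % i, gain=1.0) for i in range(3)]

channels[0]["offset"] = 0.5             # writing a new field simply adds it
print(channels[1]["offset"])            # reading it elsewhere yields the default 0.0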

 

John

Background:

LabVIEW has had a recent proliferation of datatypes that essentially store the same type of data: an ordered sequence of numbers.  We now have Array, Matrix, Waveform (+ Digital Data/Waveforms), Signal (!) and Image datatypes (and perhaps one could also include Scalar).

 

The Benefits:

Each datatype is optimised for its own particular application, making it simple to perform a particular set of operations on the data.  However, each appears to have been developed independently.

 

The Limitations:

Each datatype has different restrictions on representation, dimension (typically only 1D or 2D), and "metadata", and most built-in functions will work on only a small subset of types.  Accessing the data as a different type usually requires copying the underlying data, with consequent limitations on speed and especially memory.  I notice these limits especially in three situations:

1) I want to use a function written for one datatype on another datatype (for example, use the Wavelet Denoising function (or even simply Square Root) on an Image),

2) I want a way to work easily with 3D images (most of the existing 2D image processing functions would have simple extensions to 3D),

3) I want to write routines that will adapt to both 2D and 3D arrays (say deconvolution).

It is certainly possible currently to work around all of these things, but it seems there should be a more cohesive approach.

 

New LabVIEW developments:

Three new developments in recent versions of LabVIEW show potential for addressing this issue: Classes, the In Place Element Structure, and Data Value References (DVRs).  However, because they are all ideas "in development", and added on top of the existing LabVIEW rather than under the core, they only hint at possibilities rather than provide them.

 

A Possible Solution:

A low-level generic Array datatype from which all these other types are "inherited", therefore providing a single common way of addressing array-type data.  It seems like this datatype could resemble either a Class or a DVR.  It would have various attributes associated with it:

- Dimension and Size - essentially specifies the way in which indices address the array (probably always stored as a 1D array underneath)

- Metadata - e.g. t0 and dt of Waveforms, name etc of Signals, border size etc of Images, ...

The In Place Element Structure would be one way of choosing how to access the information in this variable, although it may be possible for the array information to be directly available.  Hopefully something like this would also enable the ability to "overload" existing routines, for example adding a 3D convolution to the 2D and 1D routines that already exist.
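
A very rough sketch of the inheritance idea in a text language (Python; every class name and attribute here is a hypothetical illustration of the structure, not a proposed API):

class GenericArray:
    """Common base: flat storage plus shape and free-form metadata."""
    def __init__(self, data, shape, metadata=None):
        self.data = list(data)           # always stored flat (1D) underneath
        self.shape = tuple(shape)        # dimension and size
        self.metadata = metadata or {}   # t0/dt, signal names, border size, ...

class Waveform(GenericArray):
    def __init__(self, samples, t0=0.0, dt=1.0):
        super().__init__(samples, (len(samples),), {"t0": t0, "dt": dt})

class Image(GenericArray):
    def __init__(self, pixels, width, height, border=0):
        super().__init__(pixels, (height, width), {"border": border})

def square_root(arr):
    """One routine that works on any subtype, operating on the data in place."""
    arr.data = [x ** 0.5 for x in arr.data]
    return arr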

 

Some of my earlier thoughts are here, here and here.

 

PS - hope no-one said Ideas here had to be practical or reasonable!

 

Hi 🙂

 

It is clear from the title itself what I want to say.

 

When are we going to see a function in the Functions palette through which we can directly read an Excel file as an array in LabVIEW? Building code every time using ActiveX and property/invoke nodes is not comfortable.

If the Excel file has more than one worksheet, the user must be able to select the desired worksheet. It should be able to return both 1D and 2D arrays.

Also, if the Excel file contains a graph or chart, it should scan for it and return that as well. Whether the function should search for graphs or charts would be set by the user. It should return the graphs/charts in the form of a cluster (in case the sheet has more than one graph/chart). 🙂

Attached is a picture of the proposed function with labels.
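
The array part of the behavior being asked for is roughly what the following text-language sketch does (Python, using the third-party openpyxl package; the function name, arguments, and file name are illustrative, and graph/chart extraction is not covered):

from openpyxl import load_workbook

def read_excel_as_array(path, sheet=None):
    """Return the chosen worksheet as a 2D list (rows x columns)."""
    wb = load_workbook(path, data_only=True)      # cell values rather than formulas
    ws = wb[sheet] if sheet else wb[wb.sheetnames[0]]
    return [list(row) for row in ws.iter_rows(values_only=True)]

data = read_excel_as_array("measurements.xlsx", sheet="Run1")  # hypothetical file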

This would replicate the ability of the standard Index Array function to extract a subarray from a multi-dimensional array. Currently both indices must be wired, which only selects scalar elements.

ArrayIndexInPlace.png

 

Another related enhancement that may be useful is to provide the equivalent of "Array Subset" on the In Place Element Structure.

 

There have been several ideas on the forums pertaining to making LabVIEW smarter about how it handles arrays, but none of them (that I've been able to find) quite gets to the root cause: LabVIEW doesn't directly support a 'sorted' status for arrays.

 

sorted-arrays.png

 

 

LabVIEW should add a 'sorted' field to the array data. When the array primitives detect, either at run time or compile time, that an array is or could be sorted, they generate optimized code. For the most part, primitives will function the same but be more efficient (e.g., Array Max & Min becomes a constant-time operation).

 

The special cases for this feature are the array-building and array-modifying primitives, because I believe they will need special behavior in some cases to make sorted arrays useful.

 

Build Array: when all input arrays are sorted, the output array will be sorted. This seemed strange to me at first, but the more I thought about it, the better this seemed as the default behavior.

 

Insert Into Array: when the input array is sorted and the index is unwired, the output will be sorted. This is true whether the insertion is a single element or an array of (possibly unsorted) elements.

 

I would assume you could turn any of these behaviors off by right-clicking the node and configuring it. I'm sure the LabVIEW team can figure out mutation issues as necessary.
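
A rough text-language sketch of the kind of fast paths the 'sorted' flag would enable (Python; the class and method names are only illustrative):

from bisect import bisect_left, insort

class SortedArray:
    """Array that is known to be sorted, so primitives can take faster code paths."""
    def __init__(self, items=()):
        self.items = sorted(items)

    def min(self):                        # constant time instead of a full scan
        return self.items[0]

    def max(self):
        return self.items[-1]

    def search(self, value):              # binary search instead of a linear Search 1D Array
        i = bisect_left(self.items, value)
        return i if i < len(self.items) and self.items[i] == value else -1

    def insert(self, value):              # Insert Into Array with the index unwired stays sorted
        insort(self.items, value)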

I really wish LabVIEW had a rainflow analysis algorithm in one of its toolkits. Rainflow algorithms are very useful for analyzing the large amounts of data taken during structural test monitoring. A real-time version would be nice too.

 

180px-Rainflow_fig3.PNG

 

 

taken from http://en.wikipedia.org/wiki/Rainflow-counting_algorithm
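
For anyone unfamiliar with the technique, a very simplified three-point rainflow counter looks roughly like this (Python; this is only a sketch of the idea, not the full ASTM E1049 procedure or the exact algorithm from the article above):

def turning_points(series):
    """Reduce the signal to its reversals (local maxima and minima)."""
    pts = [series[0]]
    for x in series[1:]:
        if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
            pts[-1] = x                   # same direction: extend the excursion
        elif x != pts[-1]:
            pts.append(x)
    return pts

def rainflow(series):
    """Return (range, count) pairs; 1.0 = full cycle, 0.5 = residual half cycle."""
    cycles, stack = [], []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if x < y:
                break
            cycles.append((y, 1.0))       # inner cycle closed
            del stack[-3:-1]              # drop the two points that formed it
    for a, b in zip(stack, stack[1:]):    # whatever remains counts as half cycles
        cycles.append((abs(a - b), 0.5))
    return cycles

print(rainflow([0, 5, 1, 4, 2, 6]))       # e.g. [(2, 1.0), (4, 1.0), (6, 0.5)]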

When producing a 3D plot from an array containing NaN values, the plot should not interpret these as zero. Rather, these points should be disregarded so that the black lines do not appear. Given the way LabVIEW handles arrays and the way the data is generated, there is no way to remove these elements, so it must be left up to the plotting algorithm.

 

3d plot NaN.PNG

We currently have an AND Array Elements and OR Array Elements.  It would be helpful to also have an XOR Array Elements.  I most often run into needing this when trying to calculate parity.

 

Current Boolean Pallette.PNG
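
Until such a primitive exists, the parity calculation amounts to folding XOR across the array; a minimal sketch of the intended semantics (Python, purely illustrative):

from functools import reduce
from operator import xor

bits = [True, False, True, True]

# XOR Array Elements: True when an odd number of elements are True
parity = reduce(xor, bits, False)
print(parity)                             # True (three of the four bits are set)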

Hi

 

I think the title itself is self-explanatory. 🙂 🙂

 

How about modifying the thermometer VI so that there are options in the VI itself for setting the colour (which fills the thermometer) for a given range of temperature?

For example, if the range is from 0 to 100, then:

GREEN for 0 to 30

ORANGE for 31 to 60

RED for 61 to 100

 

(not trying to make a traffic light system......hahaha)

 

The colour and the range can be set by the user.

As I see it, working with the IMAQ Image typedef always results in problems for me, so I've gotten to where I try to avoid using it.  Working with IMAQ should be much better than it is.

 

Here are some Ideas:

 

1.  Allow IMAQ to stream RAW format to disk, with a string input for a header for each image.  You could call this VI once to write a header for the image; then, for every call after that, everything you input into the string would be written between images. This VI should allow you to append the images.  Most of the time I DON'T want to write out to a specified image format.  I want to stream raw video to disk.

 

Also, we are entering an era where lots of image data is being dumped to disk, so the raw stream-to-disk function would need to take advantage of multithreading and DMA transfer to be as fast and as efficient as possible.  The VI should allow you to split files into 2 GB chunks if so desired.

 

See the block diagram below for how this would change the LabVIEW code required to grab images and save them to disk.

IMAQ Code Processing.JPG
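
A rough sketch of the kind of on-disk layout being asked for, one call per frame with an arbitrary string header written between images (Python; the file name, framing convention, and length prefixes are hypothetical assumptions):

import struct

def append_frame(f, header, pixels):
    """Append one frame: 4-byte header length, header text, 4-byte pixel length, raw pixels."""
    hdr = header.encode("utf-8")
    f.write(struct.pack("<I", len(hdr)))
    f.write(hdr)
    f.write(struct.pack("<I", len(pixels)))
    f.write(pixels)

with open("video.raw", "ab") as f:        # append mode; split files at 2 GB if desired
    frame = bytes(640 * 480 * 2)          # e.g. one 640 x 480 U16 image
    append_frame(f, "exposure=12ms;frame=42", frame)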

 

 

 

Also, it would be nice to be able to specify what sort of image you want to use in the frame-grabbing operation.  This could be set in the camera's .icd file or by IMAQ Create.vi.

Notice that in the above example I make IMAQ Create.vi create U16 images, but when the image is output, I have no choice but to take it in I16 form.  I would like to have the image in its native U16 form.  I convert the image from I16 to U16 with "To Unsigned Word Integer".  I don't think this should work, but it does, and the fact that it does helps me out.

 

In general it would be nice to have better support for images of all flavors: U16, U32, and I32 grayscale, and double-precision grayscale.

 

While you are at it, you might as well add a Boolean input ("I32 result image? (T/F)") to most image math functions, so the coercion is not forced to happen.

 

Really though... LET'S GET TO THE POINT OF THIS DISCUSSION...

 

The only reason I have a problem with all this is speed.  Arbitrarily converting from one data type to another wastes time and processing power, and when most of my work is images this is a serious problem.  Also, after checking out the image math functions, they are way slower than they need to be compared with their array counterparts.

 

Solution: spend more time developing the IMAQ package to be faster and more efficient.  For example, IMAQ Array to Image is quite a beast and essentially rules out quick processing.  Why is this? It doesn't need to be.  NI should deal with images internally with pointers and simply point to the array data in memory.  I don't see how or why converting an array to an image needs to take so much time.

 

Discussions on this subject:

 

http://forums.ni.com/ni/board/message?board.id=200&thread.id=23785&view=by_date_ascending&page=1

 

And

 

http://forums.ni.com/ni/board/message?board.id=170&thread.id=376733

 

And

 

http://forums.ni.com/ni/board/message?board.id=200&message.id=22423&query.id=1029985#M22423

 

 

Hello:

 

I am going to be testing the 64-bit version of LabVIEW soon.  But the major code that I want to port to it also uses the Vision and Advanced Signal Processing Toolkits.  Therefore, I am VERY, VERY interested in 64-bit versions of those toolkits.  I work at times with hundreds of high-resolution images, and effectively having no memory-addressing limitation, as 64-bit offers, will be a significant advance for me.  Right now, I post-process with the 64-bit version of ImageJ to do some of the work I need with huge image sets.

It would be nice if it were possible to add another 'Reentrant' setting.

This setting would make sure VI A always uses a specific instance, while VI B uses another instance. Sort of a single-parent subVI.

This would allow for look-up VIs that keep a separate set of data per calling VI.

 

So you can store variables that are only valid in a single VI; if another VI calls the same subVI, it will be calling a second instance and gets different variables back.
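
A rough text-language analogue of the intent, where the shared routine keeps one set of data per calling routine (Python; keying the state on the caller's name is purely an illustration of the semantics, not how LabVIEW would implement it):

import inspect

class LookupVI:
    """Shared 'subVI' whose stored data is keyed by the calling function,
    mimicking one clone per calling VI."""
    _state = {}

    @classmethod
    def store(cls, value):
        caller = inspect.stack()[1].function     # which "VI" called us?
        cls._state[caller] = value

    @classmethod
    def fetch(cls, default=None):
        caller = inspect.stack()[1].function
        return cls._state.get(caller, default)

def vi_a():
    LookupVI.store("data owned by VI A")
    return LookupVI.fetch()

def vi_b():
    return LookupVI.fetch("VI B never stored anything")

print(vi_a())   # "data owned by VI A"
print(vi_b())   # "VI B never stored anything"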

 

Ton

I'd love to see a handful of built-in false-color colortable choices for the Intensity Plots.

 

I know there's programmatic control over these things; I wrote my own 4-5 years ago. But black-blue-white should be just one of several common selectable color schemes.

 

I labeled my own as:

 Greyscale B->W

 Greyscale W->B

 X-Ray

 Yellow Hot

 Rainbow

 Rainbow #2

 Fluorescent (green)

 Blue Red Green 

 Planet Earth

 

Perhaps as a front panel option (and a Property Node, please) off the Z-Axis submenu, listed by title.

 

I also have the ability to invert top/bottom or choose the color-inverse (photographic negative) for even more colors. 

 

Attached is a sample of some common colortables I routinely use.

2nd attachment is a snap of the VI I use to create the colors based on colortable constants.

3rd attachment is my messy but functional code (block diagram of attachment 2).
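
For reference, generating one of these tables programmatically is straightforward; a minimal sketch of the greyscale and X-Ray (inverted) variants (Python; the packed 0xRRGGBB table format is an assumption):

def grayscale_table(inverted=False):
    """256-entry greyscale colortable as packed 0xRRGGBB integers.
    inverted=True gives the 'X-Ray' (photographic-negative) variant."""
    ramp = range(255, -1, -1) if inverted else range(256)
    return [(v << 16) | (v << 8) | v for v in ramp]

greyscale_bw = grayscale_table()          # Greyscale B->W
x_ray = grayscale_table(inverted=True)    # Greyscale W->B / X-Ray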