LabVIEW Idea Exchange


I may be about to say something stupid here...

but ... to dynamically get the Ref of the type specifier,

would it really be impossible to have a Property Node like this?

 

SR1.png

The Excel Get Excel Location function in the NI Report Generation Toolkit takes an Excel cell address (e.g., A1) and returns row/column (array) indices (e.g., 0, 0).  However, if a "non-cell" address, including a blank input, is supplied, the result is 0, 0, the same as if A1 had been entered.  I'd suggest that an index of -1, similar to the "illegal array index", be returned for entries that don't parse as a cell address.  Specifically, <blank> would return -1, -1 (which doesn't correspond to the legal cell address A1).

 

This illogical behavior of the current function has messed up some functions I've written to manipulate Excel worksheets.  My routines checked for indices "in range", but were fooled by the "wrong" 0, 0 returned for <blank>.  So either I have to build a validity check before using this function, or (better, in my opinion) the function should return legal indices only for legal inputs.
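For reference, the proposed behavior is easy to sketch. This is a hypothetical parser, not the toolkit's current implementation; the (-1, -1) sentinel is the suggestion above:

```python
import re

def cell_to_indices(address):
    """Parse an A1-style cell address into zero-based (row, column).

    Returns (-1, -1) for anything that does not parse as a cell address,
    as proposed above (hypothetical behavior, not the toolkit's current one).
    """
    m = re.fullmatch(r"([A-Za-z]+)([1-9][0-9]*)", address.strip())
    if not m:
        return (-1, -1)
    letters, digits = m.group(1).upper(), m.group(2)
    col = 0
    for ch in letters:  # bijective base-26 column encoding: A=1 ... Z=26, AA=27
        col = col * 26 + (ord(ch) - ord("A") + 1)
    return (int(digits) - 1, col - 1)
```

With this rule, a <blank> input yields (-1, -1) instead of masquerading as cell A1.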

OpenGL supplies a function for updating a portion of an existing texture (glTexSubImage2D). For incremental data display, it would be cool to have support for this in the 3D picture control.
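The semantics of a partial update are simple; here is a Python stand-in (a hypothetical analogue of the OpenGL call, operating on a list-of-lists "texture") just to illustrate what the 3D picture control would need to support:

```python
def tex_sub_image_2d(texture, xoffset, yoffset, patch):
    """Overwrite a rectangular region of `texture` (list of rows) in place,
    mirroring what OpenGL's glTexSubImage2D does for texture memory."""
    for dy, row in enumerate(patch):
        for dx, value in enumerate(row):
            texture[yoffset + dy][xoffset + dx] = value

# A 4x4 "texture"; update only the 2x2 block starting at (1, 1),
# leaving the rest of the texture untouched.
tex = [[0] * 4 for _ in range(4)]
tex_sub_image_2d(tex, 1, 1, [[7, 8], [9, 10]])
```

The point is that only the patch is re-uploaded, not the whole texture, which is what makes incremental display cheap.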

Hi,

 

I am looking for a function that returns the number of bits (or bytes) that a given data type uses on disk, and I could not find one so far. LabVIEW types use a constant number of bits for each specific data type, e.g. a string character is always 8 bits, a single-precision numeric is 32 bits and a double is 64 bits. It would be helpful to have a function that takes any data type as input and returns the number of bits used to save that data on disk.

 

With this function one could write VIs that read/write binary files generically. Without it, I have one VI that reads double numerics from a binary file (e.g. the 4th number is at position 4*8) and a different VI to read U8 numerics (the 4th number is at position 4*1). The only difference between those two VIs is the constant for a double (8 bytes) versus a U8 (1 byte).

 

So basically, I would prefer the "GetNumberofBytesOfDataType.vi" shown at the bottom of the attached example to be implemented as a native LabVIEW function.
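For comparison, Python's struct module exposes exactly this kind of size query via calcsize, and a single generic reader falls out of it. The type codes and file name below are illustrative, not any LabVIEW API:

```python
import struct, tempfile, os

# On-disk sizes per flat type, computed from struct format codes
# (standard little-endian sizes) -- the analogue of the proposed
# "GetNumberofBytesOfDataType.vi".
FORMATS = {"u8": "<B", "i32": "<i", "f32": "<f", "f64": "<d"}
SIZES = {code: struct.calcsize(fmt) for code, fmt in FORMATS.items()}

def read_nth(path, type_code, n):
    """Read the n-th (0-based) value of a fixed-size flat type from a
    binary file by seeking to n * size -- one function for every type."""
    fmt = FORMATS[type_code]
    with open(path, "rb") as f:
        f.seek(n * SIZES[type_code])
        return struct.unpack(fmt, f.read(SIZES[type_code]))[0]

# Demo: the 4th double sits at byte offset 3 * 8 = 24.
path = os.path.join(tempfile.gettempdir(), "demo.bin")
with open(path, "wb") as f:
    f.write(struct.pack("<4d", 0.5, 1.5, 2.5, 3.5))
print(read_nth(path, "f64", 3))
```

The two VIs described above collapse into one call site that differs only in the type code.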

 

Regards,

  TT

Registry VIs require a registry view to be selected. This differs depending on whether you are using 32-bit or 64-bit Windows. This should be auto-detected by LabVIEW, or be programmatically available, to allow LabVIEW registry code to be portable across different platforms.
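A sketch of the requested auto-detection, in Python for illustration. The function and its return strings are hypothetical; the bitness checks and the KEY_WOW64_64KEY remark reflect how WOW64 registry redirection works on Windows:

```python
import struct
import platform

def registry_view():
    """Pick a registry view at run time (illustrative sketch, not a real
    LabVIEW or winreg API).

    A 32-bit process on 64-bit Windows must request the 64-bit view
    explicitly (the KEY_WOW64_64KEY access flag); a 64-bit process
    sees it by default.
    """
    process_bits = struct.calcsize("P") * 8       # pointer size of THIS process
    os_is_64 = platform.machine().endswith("64")  # e.g. "AMD64", "x86_64"
    if os_is_64 and process_bits == 32:
        return "64-bit view via KEY_WOW64_64KEY"
    return f"default {process_bits}-bit view"
```

The same two checks (process pointer size, OS architecture) are all a Registry VI would need to choose a sensible default view.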

I have an idea for the LV Jitter Analysis Toolkit (JAT).

 

Please add some examples using "Clock Recovery by PLL Filter.vi" and "Clock Recovery by 2nd Order PLL.vi".

 

Also, my customer previously used DPOJET as a jitter analysis toolkit. DPOJET has a very detailed manual:

http://www.av.it.pt/Medidas/Data/Manuais%20&%20Tutoriais/18%20-%20Real%20Time%20Oscilloscope%2020Ghz/CD2/Documents/DPOJET.pdf

 

He wants to switch from DPOJET to LV JAT. However, LV JAT does NOT have a detailed manual.

If LV JAT isn't going to get a detailed manual, then I think it needs more examples instead.

Currently, LV JAT 2013 ships with only 8 examples.

If we could create standalone applications from Multisim-LabVIEW co-simulations, it would be better for developing and deploying co-simulated applications, and would let us build better programs without hardware. Such an application should not require Multisim or LabVIEW to be installed on the machine on which we want to run the "standalone co-simulated application." If anybody can do this, please share a solution for this idea.

Often I will get handed a PDF containing something like this:

 

 

#define PMC_STATUS_5V_ABOVE_POWERON    0x0001
#define PMC_STATUS_5V_ABOVE_BROWNOUT   0x0002
#define PMC_STATUS_12V_ABOVE_POWERON   0x0004
#define PMC_STATUS_12V_ABOVE_BROWNOUT  0x0008
#define PMC_STATUS_U31_PG              0x0010
#define PMC_STATUS_U30_PG              0x0020
#define PMC_STATUS_3P3_PG              0x0040
#define PMC_STATUS_USB_RESUME          0x0080
#define PMC_STATUS_U31_DCDC1_PG        0x0100
#define PMC_STATUS_U31_DCDC2_PG        0x0200
#define PMC_STATUS_U31_LDO3_PG         0x0400

 

 

I want a simple way to import this into LabVIEW and do the bit mapping, other than this:

 

Untitled.png

which seems like a very messy solution to something that is very easy to do in every other language.
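For comparison, here is roughly what "import the #defines and decode a status word" looks like in a textual language. The header text is abbreviated from the list above, and the function names are my own:

```python
import re

DEFINE_RE = re.compile(r"#define\s+(\w+)\s+0x([0-9A-Fa-f]+)")

def parse_defines(header_text):
    """Turn '#define NAME 0xVALUE' lines into a name -> value dict."""
    return {m.group(1): int(m.group(2), 16)
            for m in DEFINE_RE.finditer(header_text)}

def decode_status(word, flags):
    """Return the names of all flags set in a status word."""
    return [name for name, bit in flags.items() if word & bit]

header = """
#define PMC_STATUS_5V_ABOVE_POWERON   0x0001
#define PMC_STATUS_5V_ABOVE_BROWNOUT  0x0002
#define PMC_STATUS_U31_PG             0x0010
"""
flags = parse_defines(header)
print(decode_status(0x0011, flags))
```

A built-in way to paste a header like this into LabVIEW and get an enum/ring (or a named-flag decode like `decode_status`) is what the idea is asking for.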

 

Case Structure

Select frames: by name, but also by index.

The default behavior: by name.

 

SR1.png

Below is just an example that uses the "mean" function.


This function uses a call to a DLL.


Look at the difference between the execution times!

 

This is just a small example to show the extreme slowness of DLL calls.


The time required to call a DLL is a disaster!

 

 


Please, the time required to call a DLL really needs to be improved!

 

 

 

SR1.png

 

Sorry for my bad English, I'm doing my best!
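LabVIEW's Call Library Function Node can't be timed here, but per-call FFI overhead is measurable in any environment. A rough Python/ctypes sketch, assuming a POSIX libm is available; the usual mitigation, whatever the language, is to do more work per call rather than many tiny calls:

```python
import ctypes
import ctypes.util
import math
import timeit

# Load the C math library (POSIX; on Windows the same symbols live elsewhere).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

# Same computation, in-process vs. through the foreign-function interface;
# the difference is almost entirely per-call marshalling overhead.
n = 200_000
t_native = timeit.timeit(lambda: math.sqrt(2.0), number=n)
t_ffi = timeit.timeit(lambda: libm.sqrt(2.0), number=n)
print(f"in-process: {t_native:.3f}s   through the FFI: {t_ffi:.3f}s")
```

If the DLL's API allows it, passing a whole array in one call amortizes that overhead to nearly nothing.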

Quickly force all children of a particular VI to become reentrant or non-reentrant.

 

The option would be a checkbox below the Parent VI you are editing

I use Type Cast a lot. It's the perfect function for turning clusters into a string and vice versa when doing TCP or UDP communication. Unfortunately it breaks when the cluster contains a string or array. If I pass a cluster containing a five-character string and a U32, it should give 9 characters out.

 

The main complication seems to be turning that same 9-character string back into a cluster: without size info on the string, Type Cast doesn't know how many bytes go to the string. That could be pushed back to the user by requiring the cluster wired to the 'type' input to have the expected number of characters in the string, or elements in the array.

 

I looked at Flatten To String, but if you pass it a cluster containing a string, it always prepends the string length in the flattened output.

 

I haven't yet had a chance to look at LV 2013; maybe it's in there?
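The proposed fixed-size behavior is exactly what struct-style packing does in textual languages. A sketch with a hypothetical five-character-string-plus-U32 cluster (big-endian chosen arbitrarily, matching network byte order):

```python
import struct

# Hypothetical cluster layout: a five-character string plus a U32, with no
# length prefix -- exactly 9 bytes on the wire, as the idea requests.
FMT = ">5sI"

def cluster_to_bytes(text, number):
    """Flatten the cluster; the string occupies exactly 5 bytes."""
    return struct.pack(FMT, text.encode("ascii"), number)

def bytes_to_cluster(data):
    """Reverse direction: the fixed layout tells us 5 bytes are string."""
    raw, number = struct.unpack(FMT, data)
    return raw.decode("ascii"), number

wire = cluster_to_bytes("hello", 42)
assert len(wire) == 9            # no embedded length header
print(bytes_to_cluster(wire))
```

As in the idea, the burden of knowing the string size sits in the declared layout (`FMT` here, the 'type' input in LabVIEW), not in the data stream.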

It would be helpful if the IMAQ Particle Analysis VI took as inputs:

 

Max Number of Particles

Maximum Analysis Time

 

It could then use these parameters to decide when to stop processing the image, and report back that it did not complete the operation via a Boolean output, or an enumeration that indicates why processing did not complete.

 

In an automated vision system used to count defects, it is possible that the sample under test has an enormous number of defects.  In that case the user might want to declare the sample grossly defective, without caring whether the exact number of defects (particles) is reported.  Likewise, if the automated system has a fixed time frame in which it must process a sample, this input would guard against that time frame being exceeded.
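The requested early-exit logic might look like this; everything here (candidate list, measure callback, result enum) is a stand-in for the real IMAQ pipeline, sketched only to pin down the behavior:

```python
import time
from enum import Enum

class Result(Enum):
    COMPLETE = "complete"
    MAX_PARTICLES = "stopped: particle cap reached"
    TIMEOUT = "stopped: time budget exceeded"

def analyze(candidates, max_particles, max_seconds, measure):
    """Process particle candidates until done, the cap is hit, or the
    time budget runs out; always report which of the three happened."""
    deadline = time.monotonic() + max_seconds
    particles = []
    for c in candidates:
        if len(particles) >= max_particles:
            return particles, Result.MAX_PARTICLES
        if time.monotonic() > deadline:
            return particles, Result.TIMEOUT
        particles.append(measure(c))
    return particles, Result.COMPLETE
```

The enumeration output is what lets the caller distinguish "grossly defective, stopped counting" from "ran out of time" from a clean pass.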

 

Context Help.PNG

Controls.png

Greetings:

There were a couple of related posts on Close Reference. I did a search and found these; I am posting them here for reference: Link1, Link2.

 

I am suggesting that any time we use an item from the Application Control palette, or any instance where an application reference needs to be closed, a Close Reference item also be automatically placed on the block diagram. This would be very similar to how TestStand handles Flow Control. Having a Close Reference dropped automatically encourages the good programming practice of closing all references in the subVI/VI. Of course, LabVIEW would have to determine the best location to drop the Close Reference.

The IMAQ Read QR Code VI sometimes takes a while (more than 10 seconds), and the program hangs. I know that setting the QR Code Cell Size (pixels) larger than 3 pixels seems to take care of it for the most part, but it still takes a few seconds. It would be great if this VI had a timeout option. Does anybody have a workaround idea?
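One generic workaround pattern, sketched in Python: run the slow call on a worker and abandon the wait after a deadline (in LabVIEW the analogue would be an asynchronous Call By Reference polled with a timeout). Note the abandoned call still runs to completion in the background; only the caller stops waiting:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def call_with_timeout(func, args, timeout_s):
    """Wait at most timeout_s seconds for a slow call.

    Returns (result, False) on success, (None, True) on timeout. The
    timed-out call keeps running in its worker thread; we merely stop
    blocking on its result.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(func, *args)
    try:
        return future.result(timeout=timeout_s), False
    except TimeoutError:
        return None, True
    finally:
        pool.shutdown(wait=False)
```

This doesn't cancel the underlying decode, so it caps latency per frame but not total CPU use; a true timeout input on the VI itself would still be better.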

Hi,

 

I'm a bit afraid to post this idea, because it seems I could be overlooking something.

 

There is no function that does proper rounding (where 5 as the last digit always rounds up) as described below:

 

If the last digit is 5 or greater, round up; otherwise round down.

 

An example:

(to simplify, I use only one digit after the decimal point)

 

0.1 -> 0

0.4 -> 0

0.5 -> 1

0.9 -> 1

1.1 -> 1

1.4 -> 1

1.5 -> 2

1.9 -> 2

2.1 -> 2

etc.

 

Right now, to get this behavior you have to code something like this (and this handles only numbers with one digit after the decimal point):

 

Capture0000.PNG

 

Why complicate such a simple thing???
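For comparison, round-half-up is a one-liner in textual languages. The floor(x + 0.5) trick matches the table above for non-negative inputs; the decimal variant sidesteps binary-float edge cases:

```python
import math
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x):
    """Round-half-up for non-negative values, as in the table above:
    0.5 -> 1, 1.5 -> 2 (unlike banker's rounding, where round(0.5) == 0)."""
    return math.floor(x + 0.5)

def round_half_up_decimal(x):
    """Same rule, but immune to binary-float representation surprises."""
    return int(Decimal(str(x)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

print([round_half_up(v) for v in (0.1, 0.4, 0.5, 0.9, 1.1, 1.4, 1.5, 1.9, 2.1)])
```

LabVIEW's Round To Nearest uses round-half-to-even, which is why 0.5 and 1.5 both surprise people; a native half-up mode would make the diagram above unnecessary.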

I have several times found myself using variants to create various kinds of sorted lookup tables or data sets. (Variants "sort" attributes/records by name automatically, taking a performance hit when creating/adding to the variant, but being wicked fast at retrieving a named attribute; and when you get a list of all named attributes, they are returned sorted.)

 

Now, in many (if not most) of those scenarios, I find myself wishing I could just name the attribute with a numeric input, without the hassle of converting the numeric to a string.

 

For example, if you read the file properties of all files in a folder tree and put various information for each file into a cluster (e.g. size, last-modified date, name, etc.), this cluster becomes the "attribute" or "record" that you add to a variant for quick lookup later. You have many ways to name the attribute depending on what you want to do. In one case, you may want to retrieve file attributes based on the oldest (or newest) "last modified" time. Currently, one way to do this is to "format into string" the "last modified" time stamp and use that as the attribute name. If you are careful with how you format it, the sorted string list will come back oldest to newest (e.g. format it as YYYYMMDDHHMMSS to get a decimal string that sorts oldest (least) to newest (greatest)). (Note that when getting lists of files from LabVIEW, the order of the returned files does not appear to be sorted by "modified" or "created" dates.)

 

It would be nice if the "name" input supported numeric inputs directly. If this were implemented at a low enough level, the string conversion would not only be hidden from us (users), but could potentially be removed completely, since at a low enough level I picture the "name" as just a unique identifier anyway. If the "name" terminal could be made to accept time-stamp data types, strings, I32/U32 and DBL, I would be very happy indeed!
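The zero-padded-key trick described above, sketched in Python with an ordinary dict standing in for the variant (the file names and dates are made up):

```python
from datetime import datetime

def numeric_key(ts):
    """Format a timestamp as fixed-width YYYYMMDDHHMMSS so lexicographic
    order of the keys matches chronological order -- the trick described
    above, wrapped so callers never see the string conversion."""
    return ts.strftime("%Y%m%d%H%M%S")

files = {
    numeric_key(datetime(2013, 7, 2, 9, 30)): "b.vi",
    numeric_key(datetime(2012, 1, 15, 23, 5)): "a.vi",
    numeric_key(datetime(2013, 7, 2, 9, 29)): "c.vi",
}
# Sorted attribute names come back oldest-to-newest, like the variant trick.
print([files[k] for k in sorted(files)])
```

Hiding `numeric_key` inside the lookup layer is essentially what a numeric-capable "name" terminal would do for us natively.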

 

Please discuss in comments if this is good, bad, impossible, possible etc. etc.

Thanks,

Q

Hi

 

Suppose we have a big piece of code in which a part need not be executed (while still obeying dataflow). There is no option to do that. Those who are familiar with MATLAB/Python know there is a commenting option: once commented out, those statements are not executed. The problem with LabVIEW is that in a big piece of code, if we want to try leaving a node out of execution, we have to delete it and then clean up the broken wires; if we want to include it again, we have to do the same thing in reverse. If there were something (maybe a rectangle) that, when drawn around a particular node, kept it out of execution, it would make development and testing easy.

 

Please update the XML parsing palette (specifically the load VI and the Node.XML invoke node) to be able to get the actual text contents of a node when it contains ">" or "<".  Currently, if the node text contains ">", the Result XML of the node is displayed with "&gt;" substituted for ">".  If the node text contains "<", the XML document won't even load, and an error is thrown.

 

To see this behavior, use the LabVIEW example: Query XML Document for a Single Node.vi

 

Modify any text tag contents to contain either of these characters.
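For reference, this is the behavior a conforming XML parser gives you: "<" must be escaped before the document parses at all, and the parser hands back the unescaped text (Python shown for illustration):

```python
import xml.sax.saxutils as su
import xml.etree.ElementTree as ET

# "<" must be escaped or the document is not well-formed; ">" may be
# escaped too. A conforming parser returns the *unescaped* text, which
# is the behavior this idea asks LabVIEW's XML VIs to match.
raw_text = "if a < b then a > 0"
doc = f"<limits>{su.escape(raw_text)}</limits>"
node = ET.fromstring(doc)
print(node.text)
```

So the fix on the LabVIEW side is two-sided: escape on write so documents containing "<" still load, and unescape on read so "&gt;" comes back as ">".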

 

 

Hello,

LabVIEW can import three different 3D file formats: VRML, STL and ASE.

So LabVIEW can read these file types, but it is not possible to export any CAD format from LabVIEW.

For example, simple x, y, z points generated by a mathematical process in LabVIEW cannot be exported to a CAD format.

Will it be possible to save CAD formats in the near future?
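In the meantime, ASCII STL is simple enough to write by hand; absent a native export VI, a few Format Into File calls could produce the same output. A sketch (the zero normals are a common shortcut that most CAD viewers accept and recompute):

```python
import os, tempfile

def write_ascii_stl(path, triangles, name="labview_export"):
    """Write triangles (each a tuple of three (x, y, z) points) as ASCII STL,
    the simplest widely-readable CAD mesh format."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x:e} {y:e} {z:e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# One triangle built from three generated points:
stl_path = os.path.join(tempfile.gettempdir(), "points.stl")
write_ascii_stl(stl_path, [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

STL only carries triangles, so a raw point cloud would first need triangulating; but for mesh-like data this covers the export gap described above.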