
LabVIEW - Dealing with Decimals

Solved!

Sorry for posting again about my endeavors to learn LabVIEW, but after searching for the better part of an hour for a solution to a simple problem, I'm hoping you can help me out.

 

So I am trying to create a delayed flat sequence that takes data from a case statement defining how many clock cycles / microseconds it needs to delay by. This results in the following formula:

(0.025 us * noOP) + microseconds = delay time for the Wait function in the sequence. Timing is important, as this is being used in an optical setup to pulse a laser; it therefore needs sub-microsecond decimal precision.

 

The problem I am having is that the Multiply function apparently does not like double-precision numbers (which are the only way I can get LabVIEW not to instantly round my 0.025 off to 0.00). So the root of my question is: how can I accomplish a simple decimal operation? I know one method was to use the string/number conversion VI package to convert the number into a fraction, but for some reason it does not show up... I am using a full license provided by the school, so I assume it would include something as basic as that.

 

Another question I have is: will this properly delay the sequence, or is there a better way to go about it? (I saw someone say never to use these structures because they are slow.) Any advice on the structure is appreciated - it is obviously very simplistic at the moment, but I want to improve my LabVIEW skills as much as possible.

 

The Arduino code would look something like this:

 

digitalWrite(DIO23, HIGH);
delayMicroseconds(microsec + (noOP * 0.025));  // note: delayMicroseconds() takes an unsigned int, so the fractional part is truncated
digitalWrite(DIO22, HIGH);
delayMicroseconds(0.025);                      // truncates to 0 for the same reason; sub-microsecond delays need another approach
digitalWrite(DIO22, LOW);
delay(1000);
digitalWrite(DIO23, LOW);
// etc etc...

 

Message 1 of 3

Based on your previous question on the forums, I assume you are creating a VI to compile to an FPGA. You haven't specified your FPGA target or LabVIEW version; these would help in answering your question. LabVIEW targeting an OS environment (real-time, Mac, Windows, Linux, etc.) has complete support for the large IEEE 754 floating-point types, as well as extended (128-bit+) types not specified by 754 - the Multiply primitive natively handles all of these. LabVIEW also lets you target other hardware such as FPGAs; in those situations it offers a (substantially) reduced set of primitives to work with, depending on what your target can support.

 

Historically, FPGA targets have not supported floating-point operations, because they take up a lot of resources on the fabric; it is only (relatively) recently that using IEEE 754 types on FPGAs has become common. If floating point is not supported on your target and LabVIEW version, you should use the fixed-point ("FXP") data type, which lets you create a custom-width type with the precision concentrated in the region of interest. These types let you balance resolution against bit width to manage fabric usage and timing. The help file has a lot of information on these data types, as do the online FPGA tutorials.

 

Either way, the Wait node requires an integer value for ticks/us/ms, depending on which unit you select; if the value you pass to the Wait is not an integer, I believe it will be coerced and truncated.

 

From your VI (which, loaded onto my cRIO FPGA target in LV16, has no problems with the 64-bit double type) I can see that you have constants in microseconds and are multiplying by 0.025, with the resulting value going to a Wait in ticks. I suspect you meant this to be a Wait (us) instead. I would recommend either using a fixed-point type or multiplying your constants by your clock frequency and expressing everything in ticks, which will always be integers.

 

Regarding the rest of your code: it's a little hard to give good advice without a good understanding of what you are trying to achieve. Despite some quirks specific to FPGA development, you generally use normal LabVIEW programming styles. If you're not comfortable with those basics then you really should look at the online training videos that NI provides to get you started; they will be well worth the time invested (a few hours). The key value proposition of FPGA development with LabVIEW is that you can transfer your existing LabVIEW skills to developing for FPGAs.

Message 2 of 3
Solution
Accepted by topic author Cameron9438

Hi Cameron,

 

"defines how many clock cycles / microseconds it needs to delay by. This ends up resulting in the following formula: (0.025us * noOP) + microseconds = delay time for the wait function in the sequence."

When using an FPGA, this formula can be converted to "noOP + 40 * microseconds", with units of ticks (using the default 40 MHz clock).

See: no fractional values anymore!

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 3 of 3