
Measuring flow rate in LabVIEW using a Coriolis flow meter


Hi all,

 

I want to measure flow rate of liquid oxygen in a rocket engine.

It will be measured using the Micro Motion™ 5700 Transmitter (also called a Coriolis flow meter or Coriolis transmitter), pictured below.

 

 

I have attached the installation manual.

 

The Coriolis transmitter works by converting raw sensor data to mass flow, which is then translated into an output signal sent to LabVIEW. The output signal will be in mA, as the transmitter will be connected to the NI 9203 analogue input module, which measures current in amps.

 

To convert the mA output to a mass flow that can be displayed in LabVIEW I used the equation below. It is based on the equation used to determine a 4-20 mA current output from a flow rate. I got it from this link: https://www.sensorsone.com/flow-transmitter-4-20ma-current-output-calculator/#inrdg

 

x = ((Linear mA out - 4) * (y2 - y1) + 16 * y1) / 16

 

 

Where

Linear mA out is the 4-20 mA output signal, x is the flow measured by the flow transmitter, y1 is the lowest limit of the flow measured by the transmitter, and y2 is the highest limit. The data sheet of the flow transmitter does not specify the lowest flow rate, so 0 will be used; it does specify 15 g/sec as the highest limit. Thus, flow is measured in g/sec.
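
Expressed as a small Python sketch (just a sanity check of the equation above, not the attached VI; the 0 and 15 g/sec limits are the data-sheet values already mentioned):

```python
# Minimal sketch of the scaling equation above, assuming the signal is already in mA.
def ma_to_flow(linear_ma_out, y1=0.0, y2=15.0):
    """Convert a 4-20 mA signal to flow (g/sec) over the range y1..y2."""
    return ((linear_ma_out - 4.0) * (y2 - y1) + 16.0 * y1) / 16.0

print(ma_to_flow(4.0))   # 0.0  g/sec at the bottom of the range
print(ma_to_flow(12.0))  # 7.5  g/sec at mid scale
print(ma_to_flow(20.0))  # 15.0 g/sec at full scale
```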

 

Now, when I applied this in LabVIEW I did not get the result I wanted (VI attached).

 

I need help.

 

Is this how you would go about measuring flow rate from my Coriolis flow meter, or am I headed in completely the wrong direction?

 

Maybe I am using the wrong equation to scale from amps to flow rate?

 

Am I on the right track but missing something in my VI?

 

Let me know if you need additional details!

Thanks,

Emilie.

 

 

 

 

Message 1 of 11
Solution accepted by erabrannan

Hi erabrannan,

 

in the end all you need is a simple linear scaling:

Keep in mind:

When measuring current with DAQmx you usually get the current in amperes, so the scaling factors should be corrected to 0.004 A and 0.016 A.

You can also apply a linear scale to the DAQmx channel, so DAQmxRead will already give you "flow" values instead of current (in A).
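
For reference, a minimal Python sketch of that scaling (using the nidaqmx Python package as a stand-in for the LabVIEW DAQmx VIs; the channel name "cDAQ1Mod1/ai0" is only a placeholder):

```python
# Hedged sketch, not the snippet from this post: read one current sample from an
# NI 9203 channel and scale it to flow. DAQmx reports current in amperes, hence
# the 0.004 A offset and 0.016 A span.
import nidaqmx

FLOW_MIN, FLOW_MAX = 0.0, 15.0   # flow range in g/s (from the transmitter data sheet)
I_MIN, I_SPAN = 0.004, 0.016     # 4 mA offset and 16 mA span, expressed in amperes

with nidaqmx.Task() as task:
    # Placeholder channel name; adjust to your cDAQ module.
    task.ai_channels.add_ai_current_chan("cDAQ1Mod1/ai0", min_val=0.0, max_val=0.02)
    current_a = task.read()      # one sample, in amperes
    flow = FLOW_MIN + (current_a - I_MIN) / I_SPAN * (FLOW_MAX - FLOW_MIN)
    print(f"{current_a * 1000:.2f} mA -> {flow:.2f} g/s")
```

If you put the linear scale on the DAQmx channel itself instead, the equivalent slope would be 15 / 0.016 = 937.5 (g/s)/A with a y-intercept of -3.75 g/s, so the read already returns flow.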

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 11

Hi,

 

thank you so much for your help!

 

Are you sure the scaling factors should be corrected to 0.004 A and 0.016 A? The website I got the equation from already works in mA.

Link of website:

https://www.sensorsone.com/flow-transmitter-4-20ma-current-output-calculator/#inrdg

 

I tried your linear scaling with a simulated NI 9203 (VI attached) and it gave me values within two decimal places of 11.2, which is quite a small change over time considering the range should be 0 to 15 g/sec.

 

So then I tried changing the factors to 0.004 A and 0.016 A as you suggested and got values from 11 to 30, which is much too high since the max value should be 15 g/sec (VI attached).

 

So what should I do to get a normal range from 0 to 15 g/sec?

 

-Emilie.

 

Message 3 of 11

Hi erabrannan,

 


@erabrannan wrote:

Are you sure the scaling factors should be corrected to 0.004 A and 0.016 A? The website I got the equation from already works in mA.

Link of website:

https://www.sensorsone.com/flow-transmitter-4-20ma-current-output-calculator/#inrdg


Well, that website might calculate using mA values.

But here we are discussing DAQmx devices and LabVIEW. In my experience, DAQmx always reads current in A…

 


@erabrannan wrote:

I tried your linear scaling with a simulated NI 9203 (VI attached) and it gave me values within two decimal places of 11.2, which is quite a small change over time considering the range should be 0 to 15 g/sec.

 

So what should I do to get a normal range from 0 to 15 g/sec?


Maybe you should try with real hardware instead of a simulated one!?

Why are you applying those "shunt resistor" settings at DAQmxCreateVirtualChannel at all? (That module already knows how to read the current…)

Then you should just display the current readings to check which unit is used (mA vs. A).

Then you can apply the scaling.
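
A hypothetical Python helper for that unit check (the thresholds are only illustrative):

```python
# Guess whether a raw DAQmx current reading is in A or mA purely from its
# magnitude on a live 4-20 mA loop.
def guess_current_unit(raw_value):
    v = abs(raw_value)
    if 0.003 <= v <= 0.025:
        return "amperes"          # 4-20 mA expressed in A
    if 3.0 <= v <= 25.0:
        return "milliamperes"     # 4-20 mA expressed in mA
    return "outside the expected 4-20 mA range"

print(guess_current_unit(0.012))  # amperes
print(guess_current_unit(12.0))   # milliamperes
```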

 

Or do it the DAQmx way:

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 4 of 11

Hi,

 

Unfortunately, I don't have the real hardware yet, so I won't be able to test it.

 

 

I tried the DAQmx way and also got completely different values. This time the graph went from 1 to 2.

 

It is hard to know which scaling method is correct without the hardware.

 

I think I will have to ask the manufacturer if they have a proper conversion equation.

 

When I set the pre-scaled minimum to 0.004 A (4 mA) it returns a flat line on the waveform graph where the value does not move.

 

Any idea why?

 

 

-Emilie.

 

 

Message 5 of 11

Hi erabrannan,

 


@erabrannan wrote:

Unfortunately, I don't have the real hardware yet, so I won't be able to test it.

Any idea why?


Simulated AI hardware just reads a sine wave signal.

When you want to simulate real-world data, you need to simulate that specific data yourself!

 


@erabrannan wrote:

It is hard to know which scaling method is correct without the hardware.

I think I will have to ask the manufacturer if they have a proper conversion equation.


It is really easy when you use the snippet from my first answer: set some current values at the input and check the resulting flow values.

The manufacturer already gave you the equation: 4 mA…20 mA relates to 0 g/s…15 g/s!
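
A quick numeric sweep of that relation (plain Python, mirroring the "set some current values and check the resulting flow values" suggestion, with the currents in amperes as DAQmx reports them):

```python
# Sweep a few test currents through the 4 mA..20 mA -> 0..15 g/s relation
# to check the scaling before the real hardware arrives.
for i_a in (0.004, 0.008, 0.012, 0.016, 0.020):
    flow = (i_a - 0.004) / 0.016 * 15.0
    print(f"{i_a * 1000:5.1f} mA -> {flow:6.3f} g/s")
# Expected: 0.000, 3.750, 7.500, 11.250, 15.000 g/s
```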

 

Edit:

The scaling becomes much easier once you implement it without the wrong wiring:

Your scaling was wrong!

 

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 6 of 11

Hi,

 

how do I implement your code with the DAQmx Read VI I am getting the signal from?

 

 

Kind regards,

Emilie.

Message 7 of 11

Hi erabrannan,

 


@erabrannan wrote:

how do I implement your code with the DAQmx Read VI I am getting the signal from?


Replace the Ramp function with DAQmxRead…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 8 of 11

Hi,

 

sorry to bother you about this again, but I have attached my new VI below.

 

When I run it I don't get any data on the graph.

 

Kind regards,

Emilie.

Message 9 of 11

Hi Emilie,

 

please downconvert to LV2017 and attach again…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 10 of 11