LabVIEW


Piezoelectric - charge output pressure sensor problem

Hey everyone!

 

I'm not usually one to post, but I've been encountering a problem on a project that I've been trying to fix for the last few weeks by browsing this forum and trying every suitable idea I come across.

 

It's helped me understand more about LabVIEW, but hasn't really done much to solve my problem. Furthermore, the project deadline is fast approaching (sometime this week). Hence this post.

 

My system:

  • A pressure transducer with a charge output (it's a dynamic pressure sensor, so it reacts to changes in pressure)
  • A charge amplifier to convert the transducer charge into a voltage output
  • An NI DAQ device to read the voltage and feed it into LabVIEW

My problem:

  • I can't seem to convert my voltage readings into a meaningful pressure reading.

 

When using my current VI, the recorded change in pressure is an order of magnitude larger than is physically possible. I have tested each hardware component in my system independently and they all seem to function.

 

As it stands, I read 10 samples per second, average them, repeat for the next second, then subtract the current average from the previous average to get the change in pressure. I am therefore not sure whether the problem is in my scaling/conversion or in my methodology.
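
In case a text version of that logic helps: here is the idea sketched in Python with NI's nidaqmx package (just a sketch of the intent, not my actual VI; the "Dev1/ai0" channel and the ±10 V range are placeholders, not my real settings):

# Sketch of the intended average-and-difference logic (placeholder names, not my real setup).
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=-10.0, max_val=10.0)
    # 10 S/s, read in blocks of 10 -> one block (and one average) per second
    task.timing.cfg_samp_clk_timing(rate=10.0, sample_mode=AcquisitionType.CONTINUOUS)

    prev_avg = None
    for _ in range(60):  # run for about a minute
        block = task.read(number_of_samples_per_channel=10)  # waits ~1 s for 10 samples
        avg = sum(block) / len(block)
        if prev_avg is not None:
            print(prev_avg - avg)  # change between consecutive one-second averages
        prev_avg = avg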

 

Please find the pressure sensor calibration and specifications attached, as well as my current VI. If it helps, I have also looked into my amplifier: the gain ranges from 1 to 190 (it varies with the input frequency).

 

If anyone could possibly help, I would be immensely grateful!

 

P.s. Please excuse my amateurishness; I am still relatively new to LabVIEW, and to data acquisition in general.

Message 1 of 10

You have many problems with this VI.

 

"As it stands, I am reading 10 samples a second, averaging these samples, repeating the steps for the next second, then subtracting the current average from the previous average to get the change in pressure. Therefore, I am not sure if it is a problem with my scaling/conversion or with my methodology?"

 

I do not really understand the required scaling here, but I can give some comments regarding the DAQ part of reading the raw voltage signal:

  1. You do not have any main loop, so the 1000 ms Wait does nothing in the block diagram.
  2. You wrote that you read 10 samples per second. No. Your DAQ Assistant is configured to read a fixed number of samples, 10, at a 1 kHz rate, which means all 10 samples are acquired within about 10 ms.
  3. After you get those 10 samples, you connect the dynamic-data wire to a FOR loop that is set to iterate ONLY ONCE (you wired the value 1 to the count terminal), so you pick out only the first of the 10 samples. You then build 2D data from this single value and feed it into another once-iterating FOR loop with a Mean function inside. This does not make any sense to me.
  4. I guess you are running the VI with the "Run Continuously" button. That is not good practice; use a While loop around your data acquisition instead (see the sketch after this list).
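
If a text analogue helps, here is the same idea sketched in Python with the nidaqmx package ("Dev1/ai0" is a placeholder channel): the read sits inside a loop, and the sample clock, not a Wait, paces the acquisition.

# Sketch: continuous acquisition inside a loop, paced by the sample clock.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # placeholder device/channel
    task.timing.cfg_samp_clk_timing(rate=1000.0, sample_mode=AcquisitionType.CONTINUOUS)
    for _ in range(100):  # replace with your real stop condition
        block = task.read(number_of_samples_per_channel=100)  # waits ~100 ms for 100 samples
        mean_v = sum(block) / len(block)  # average the WHOLE block, not just its first element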

EDIT: what voltage range do you expect from the voltage amplifier?

EDIT2: Please specify the hardware you use for DAQ! I see a "USB-TC01" device in the DAQ Assistant configuration settings, which is very strange, since that device measures thermocouple voltages (i.e., temperature). What NI hardware do you actually use?

 

Message 2 of 10

Hi,

 

Thanks a lot for your reply!

 

Not having a While loop was very silly of me. I have now included one and set the count for the FOR loops to 10. I hope this is more in line with achieving my objectives?

 

Secondly, is it safe to assume that if I set the DAQ Assistant to 10 samples at 1 Hz, while also removing the 1 s delay, I would be reading a total of 10 samples each second?

 

Lastly, I am using an NI USB-6211. As for the voltage range from the amplifier, I am not sure; I will investigate it as soon as possible!

 

In the meantime, I would like to get the theory down, so for that sake could we assume the voltage output range is -10 V to 10 V?

 

Again, thanks a lot for your help so far 🙂

 

 

Please find attached an updated VI (which implements the points above, e.g. 10 samples at 1 Hz).

Message 3 of 10

"Secondly, is it safe to assume that if I set the DAQ Assistant to 10 samples at 1 Hz, while also removing the 1 s delay, I would be reading a total of 10 samples each second?"

 

You are getting this basic calculation wrong. Do you know what sampling frequency means? 1 kHz means you acquire 1000 samples per second. If you read only 10 samples at 1 kHz, that takes a total of 10 ms, NOT 1 second!
Edit: sorry, I read 1 kHz, not 1 Hz 😄
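
To spell out the arithmetic either way: a finite read lasts (number of samples) / (sample rate). So 10 samples at 1 kHz take 10 / 1000 Hz = 10 ms, and 10 samples at 1 Hz take 10 / 1 Hz = 10 s. To actually get 10 samples every second, the rate must be 10 Hz (or keep a higher rate and read proportionally more samples per loop iteration).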

I did not check your second VI, but here is an example of how to do the DAQ with DAQmx functions, using a Queue. Finish the code in the lower While loop with the scaling you need. Also add a file-save function; use spreadsheet-file or TDMS functions. Try this, and respond if it gives values that make sense...
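
The structure in the attached picture is a classic producer/consumer. As a rough Python stand-in for the same shape (the channel name, rate, and block size here are arbitrary placeholders, not the values in the attachment):

# Producer/consumer sketch: upper loop acquires and enqueues, lower loop scales/logs.
import queue
import threading
import time

import nidaqmx
from nidaqmx.constants import AcquisitionType

data_q = queue.Queue()
stop = threading.Event()

def producer():
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # placeholder channel
        task.timing.cfg_samp_clk_timing(rate=1000.0, sample_mode=AcquisitionType.CONTINUOUS)
        while not stop.is_set():
            data_q.put(task.read(number_of_samples_per_channel=100))

def consumer():
    while not stop.is_set() or not data_q.empty():
        try:
            block = data_q.get(timeout=0.5)
        except queue.Empty:
            continue
        mean_v = sum(block) / len(block)
        # TODO: apply your pressure scaling here, then log to a spreadsheet/TDMS file

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
time.sleep(10)  # acquire for ~10 s
stop.set()
for t in threads:
    t.join()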

 

EDIT2: Why do you use that very small factor in the second division, 3.638E-11? Where does it come from? Does the amplifier specification give it to you? I really do not understand the scaling factors here; you probably need to find them in the manuals...

 

Voltage - Continuous Input_modified.png

Message 4 of 10

Blokk's reply shows what would have been my first Recommendation -- get rid of the DAQ Assistant (and its evil sidekick, the Dynamic Wire). Do a web search on "Learn 10 Functions in NI-DAQmx" and you'll find a White Paper that shows how easy it is to use DAQmx directly, and much more transparently, to do what the DAQ Assistant "hides" from you. You can also use the "trick" of opening MAX and building a Test Panel where you set up the parameters you want for your actual sampling task (say, ±10 V, 10 samples at 10 Hz from Channel 0, differential, continuous), push "Run" to verify that it does what you intend, then save it as a named Task (I suggest not naming it "My Voltage In Task"). Now, when you set up your first DAQmx function (say, Create Virtual Channel), you can put a Constant on the Task/Channel In terminal, click the little down arrow, and up will pop all of the MAX-created Tasks, including the one you just made, with all of the parameters you need already specified.
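
(As an aside, a Task saved in MAX is usable from text languages, too. In the Python nidaqmx binding, if I recall its API correctly, loading one looks roughly like this, where "MyPressureTask" stands in for whatever name you chose:)

# Sketch: load a Task saved in MAX and read from it; the Task name is a placeholder.
from nidaqmx.system.storage import PersistedTask

task = PersistedTask("MyPressureTask").load()  # brings along all MAX-configured parameters
try:
    samples = task.read(number_of_samples_per_channel=10)
finally:
    task.close()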

 

Next, note that each time the While loop with the "Get 10 Samples" function runs, it ... gets 10 samples. You'll get an Array (if you configured the acquisition to return an Array of Reals) that you can average, save in a Shift Register of the surrounding While loop, and then subtract the previous value (read from the Shift Register) from the just-sampled-and-saved value, giving you the difference.

 

However, in your original Post, you said that your instrument gave you a voltage corresponding to the change in pressure.  So I wonder -- if you are now computing changes in the change of pressure, aren't you effectively computing the second derivative of pressure?  Could this be the reason your results look so strange?

 

Bob Schor

Message 5 of 10

Hi Blokk,

 

Haha, no problem at all 🙂

 

Thanks a lot for the above code! I will try it out as soon as I get the testing system up and running again (probably tomorrow morning).

 

With regards to the division factor, it comes from the pressure transducer manual (see the sensitivity attachment in my first post).

 

My reasoning with the scaling was to convert my voltage reading back into the original charge input, then divide that value by 3.638E-11 (the sensitivity in C/bar for the relevant range, in this case 0-80 bar at 250 °C) to obtain the pressure to display on the graph. Is that not a correct approach?

 

EDIT:

 

Hey Bob,

 

Thanks a lot for your reply! I will indeed take a look at getting to grips with the DAQmx functions; it looks like it will greatly increase my understanding of the data acquisition routine.

 

With regards to the important point you highlighted: I'm actually not sure now. I was indeed told it measures changes in pressure, but now I see how my understanding of its output characteristics could be wrong.

 

If the output is actually the change itself, then that could be a reason for my really large values. I will investigate further; thanks a lot for highlighting this! 🙂

Message 6 of 10

Heya,

 

Sorry for the lack of updates!

 

I have tried to investigate the point highlighted by Bob, as well as implement Blokk's suggestion.

 

So far it would seem the problem is more hardware-related than software-related. I am unable to get any noticeable change in reading from the pressure sensor for a (relatively) sizeable change in pressure, which has led me to believe the problem lies in the amplification of the pressure signal (too little gain).

 

Thanks again! And though I am unable to verify (at this stage) whether any of the proposed answers would solve my original problem, I am now on a better footing to tackle it when the (or a similar) opportunity arises again.

 

P.s. Have some Kudos for the help 🙂

 

P.p.s. I had never heard of a queue before, so it was a very fun/interesting experience trying to understand how the code worked! Thanks for the above code, Blokk!

Message 7 of 10

It seems that you missed the charge amplifier (signal conditioner) in your calculation.

 

Your pressure sensor has a sensitivity of S_qp = 36.44 pC/bar; what is the sensitivity of your charge amplifier? They usually also come with a calibration sheet 😉

If you are not sure, just post the cal sheet or a link to the datasheet (or just the manufacturer & type).

Assume it has a sensitivity S_uq in mV/pC of roughly U_max / Q_max ~ 5000 mV / (250 bar * 37 pC/bar ~ 10000 pC) ~ 0.5 mV/pC?? (or maybe 1 mV/pC??)

Now: what you read is a voltage u, and you need a pressure p:

p (in bar) = u / (S_qp * S_uq) * 1000    (the *1000 is the V-to-mV conversion)
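
A quick numeric check of that formula, using my assumed S_uq = 0.5 mV/pC (swap in your amplifier's real sensitivity):

# Worked example of p = u / (S_qp * S_uq) * 1000, with an ASSUMED amplifier sensitivity.
S_qp = 36.44  # pC/bar, from the sensor cal sheet
S_uq = 0.5    # mV/pC, assumed -- replace with your charge amp's real value!
u = 1.5       # example measured output, volts

p = u * 1000.0 / (S_qp * S_uq)  # *1000 converts V to mV
print(round(p, 1), "bar")       # ~82.3 bar for these example numbers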


Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Message 8 of 10

Hey Henrik,

 

Many thanks for your reply! 🙂

 

I am unsure of the sensitivity of the charge amplifier, as it is a custom in-house build. Unfortunately, the documents surrounding it have been misplaced, but I will try to see if they can be found.

 

I do, however, know it possesses a 220 pF capacitor in a 'charge-mode charge amp' configuration, and therefore I was previously using the equation Voltage = Charge / Capacitance for my voltage-to-pressure conversion.
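
If that equation is right, then with the sensor's 36.44 pC/bar and the 220 pF capacitor my conversion works out to V = Q/C = 36.44 pC / 220 pF ≈ 0.166 V per bar, i.e. pressure ≈ measured voltage / 0.166 (assuming the post-amplifier gain really is 1).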

 

Please feel free to advise if this is the wrong approach (or equation).

 

P.s. A second post-amplifier is present within the charge amplifier, but it has been set to a gain of 1.

Message 9 of 10

Well, 220 pF in the feedback loop gives you a sensitivity of 220 pC/1 V -> 0.22 pC/mV

(so my guess of 0.5 mV/pC wasn't that bad 🙂)

I hope a good, stable (low tempco, low dF/dV) cap like a silver mica cap was used and the whole thing was cleaned after soldering... Building a good charge amp is an art, BTDT 😉

 

If you can get your hands on a good, stable 100 pF or (even better) 220 pF capacitor (silver mica or an air capacitor; I use GenRad 1404s 😉), you can perform your own calibration. All you need is that stable capacitor with a known value (best would be a calibrated one, since that uncertainty goes directly into your whole measurement uncertainty budget)

and a multi-I/O DAQ card (which I assume you have). Place the cap between the analog output and the charge-amp input and run

examples\Sound and Vibration\Swept Sine\Frequency Response with Swept Sine Express VI (DAQmx).vi (if you don't have the Sound and Vibration Toolkit, just try the 30-day evaluation version 😉).

 

The input charge Q is

Q = C*U, with C the capacitance and U the voltage.

 

With a 220 pF cap, the excitation voltage amplitude and the charge amplifier output should have (about) the same value.
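
To put numbers on it: a 1 V excitation through the 220 pF series cap injects Q = 220 pF * 1 V = 220 pC, and the 220 pF feedback cap turns that back into 220 pC / 220 pF = 1 V at the output. With a 100 pF series cap you would instead expect about 100 pC / 220 pF ≈ 0.45 V out for the same 1 V drive.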

Right-click on the frequency response graph, export to Excel, add the conversion factor (the value at your input is not a voltage, it's a charge 😉), and you have done your charge-amp calibration 😄

 

Do that in the morning and at high noon ... say, at different room temperatures ...


Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Message 10 of 10