LabVIEW

Error 200019 problem

Solved!

Hi,

 

I am using a simple VI to acquire data from 4 analog inputs (pic attached), but I keep getting error 200019. I am using a PCI-6251 (16-bit, 1 MS/s multichannel, 1.25 MS/s single-channel, 16 analog inputs) card. At lower rates it runs fine, but when I try acquiring 5M samples at a 1 MS/s sampling rate I get this error. I have tried the internal/external clock, the 10 MHz reference clock, and different combinations, but the problem still persists. Even at slower rates (900 kS/s / 800 kS/s) the same thing happens.

 

Help Please,

Sine

Message Edited by labview.edu on 03-04-2009 07:39 PM
Message 1 of 9

Hi Sine,

 

According to the specifications of the PCI-6251, you can run one channel at a rate of 1.25 MS/s, or multiple channels at an *aggregate* sample rate of 1 MS/s.  You are attempting to run 4 analog inputs at 1 MS/s.  This would correspond to an aggregate rate of 4 MS/s.  This is too fast for your device, and as a result you are getting the error that you are seeing.
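Dan's arithmetic can be sanity-checked in a few lines (plain Python, using only the PCI-6251 numbers quoted in this thread):

```python
# Aggregate-rate check for a multiplexed (single-ADC) DAQ board.
# The limit below is from the PCI-6251 specs quoted in this thread.
MAX_AGGREGATE_HZ = 1_000_000  # multichannel maximum, 1 MS/s

def aggregate_rate(per_channel_hz, n_channels):
    """Total conversions per second the single ADC must perform."""
    return per_channel_hz * n_channels

# 4 channels at 1 MS/s each -> a 4 MS/s aggregate, four times the spec
requested = aggregate_rate(1_000_000, 4)
print(requested, requested <= MAX_AGGREGATE_HZ)  # 4000000 False

# Fastest in-spec per-channel rate for a 4-channel scan
print(MAX_AGGREGATE_HZ // 4)  # 250000
```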

 

To achieve the multi-channel rates you are attempting, you may want to look into Simultaneous Sampling Multifunction DAQ devices (such as the PXI-6124).

 

Hope this helps,

Dan

Message 2 of 9

Hi Dan,

 

Presently I can run 4 channels at a rate of 350 kS/s each, which aggregates to 1.4 MS/s (>1 MS/s). But if I go above 350 kS/s it gives a problem. Still confused. The DAQ card you suggested is about $3,500 😞

 

Thanks,

Sine

Message 3 of 9

Hello Sine,

Dan does make a good point. It does seem that you are exceeding the aggregate sampling rate of 1 MS/s. Even if you are having some success exceeding it with 4 channels at 350 kS/s, the spec still applies and is probably the source of your problem. Also, I am not sure what version of LabVIEW you are using or how your VI is configured, but this KnowledgeBase article may apply. Please take a look at it and let me know what further questions you have. Have a great day!

Regards,
Margaret Barrett
National Instruments
Applications Engineer
Digital Multimeters and LCR Meters
Message 4 of 9

Hello Margaret,

 

I am using the PCI-6251 with LabVIEW 8.5.1. It is a simple VI (pic attached to the first post of this thread). I have another question: how can I increase the size of the buffer?

 

Thanks,

Sine

Message 5 of 9
Solution
Accepted by topic author labview.edu

Hi Sine,

 

I'm not surprised that your device will operate a bit above the rate in the specifications.  Keep in mind, however, that your device has only one ADC.  As you read multiple channels, they must be switched to the ADC one at a time.  The reason the device has a lower specified rate for the multi-channel case is to allow this multiplexing circuitry time to settle when switching from channel to channel.  The ADC itself can operate at the same speed (a bit above the spec) regardless of channel count, which is consistent with the behavior you are seeing.  As for the device I recommended, I simply looked at the resolution of the device you are using (16 bits) and the rate / number of channels you were attempting to acquire from.  If you don't need 16-bit resolution, a less expensive alternative may be the PCI-6132.

 

To answer your question about buffer size: I think you'll need to use lower-level VIs rather than the DAQ Assistant.  I would recommend one of two methods.  You can look at the examples which ship with DAQmx (LabVIEW's Menu Bar->Help->Find Examples->Hardware Input and Output->DAQmx->Analog Measurements->Voltage).  From this list, I would recommend selecting Acq&Graph Voltage-Int Clk.vi.  The other method is to right-click on your DAQ Assistant VI and select Generate DAQmx Code.  In both cases, you'll see lower-level DAQmx VIs and how they are used to configure a task.  To change the buffer size, insert DAQmx Configure Input Buffer.vi before calling DAQmx Start Task on your task.  This VI allows you to specify the size of the buffer DAQmx acquires into.
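For what it's worth, the same task setup can be sketched with NI's Python API (the `nidaqmx` package) instead of the LabVIEW VIs discussed here. This is only an illustrative config sketch, not the thread's actual code: the device name `Dev1` and all rates and sizes are placeholders, and it requires the NI-DAQmx driver and hardware to actually run.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # Four multiplexed AI channels, as in the original VI ("Dev1" is a placeholder)
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
    # 250 kS/s per channel keeps the 4-channel aggregate at the 1 MS/s spec
    task.timing.cfg_samp_clk_timing(
        rate=250_000,
        sample_mode=AcquisitionType.CONTINUOUS,
    )
    # Equivalent of DAQmx Configure Input Buffer.vi: host-buffer size
    # in samples per channel, set before the task starts
    task.in_stream.input_buf_size = 5_000_000
    task.start()
    # Read a fixed-size block rather than "all available"
    data = task.read(number_of_samples_per_channel=100_000)
```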

 

Hope this helps,

Dan

Message 6 of 9

Hi Dan,

 

Thanks for clearing things up for me. About the buffer size: my understanding is that the solution you gave will optimize the use of the buffer. Will DAQmx Configure Input Buffer.vi allow me to change the size of the buffer?

 

The question I am asking is: is buffer size related to hardware, or is it a software issue?

 

If I upgrade to better computer hardware (faster processor, more RAM), will that help in terms of buffer size?

 

If I buy one of the newer cards you suggested, will it provide an increased buffer size?

 

Sorry for all the trivial questions I am just trying to learn.

 

Thanks,

Sine

Message 7 of 9

Sine,

 

Let me attempt to outline the data flow that happens in hardware and software when you perform analog input.  After that outline, I'll explain what can be configured with DAQmx.  Everything begins when your device receives a sample clock (this can come from an internal timer or an external signal).  When this sample clock is received, one of two things happens.  If you have an S-Series device, a dedicated ADC for each programmed channel performs an analog-to-digital conversion simultaneously.  If you have an M-Series device, the programmed channels are switched to the single ADC one at a time and conversions are performed on them sequentially.  The converted data is then sent to memory on the device (the device's FIFO).  Each time a sample clock occurs, this process repeats.  After some amount of data accumulates in the FIFO, the device transfers it to a buffer on the host computer using DMA.  Once in the buffer in host memory, the data sits and waits.  At some point, your application will call DAQmx Read, and the data will be read from the host buffer and returned to you by the DAQmx Read function.  DAQmx allows you to configure behavior at multiple points along this data path.

 

Using the DAQmx Channel Property Node, you can configure (in broad terms) when the device will transfer data from the FIFO to the buffer on the host computer.  To do this, you'd set the data transfer request condition.  The default is to transfer whenever the FIFO is not empty, which works well for most applications.  Note that the size of this FIFO is not programmable and varies from device to device (the PCI-6251 has an 8 KB FIFO, while the PCI-6132 has a 32 MB FIFO).
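To get a feel for the numbers, here is how long an 8 KB FIFO lasts before DMA must drain it (plain Python; the 2-bytes-per-16-bit-sample figure is my assumption for raw data, not something stated in the thread):

```python
# How much time the on-board FIFO buys before DMA must drain it.
# Assumes 2 bytes per raw 16-bit sample (my assumption, not from the thread).
def fifo_seconds(fifo_bytes, bytes_per_sample, aggregate_rate_hz):
    samples = fifo_bytes // bytes_per_sample
    return samples / aggregate_rate_hz

# PCI-6251: 8 KB FIFO at the full 1 MS/s aggregate rate -> about 4 ms
print(fifo_seconds(8 * 1024, 2, 1_000_000))  # 0.004096
```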

 

The next stage of the data path that DAQmx allows you to configure is the host buffer.  DAQmx Configure Input Buffer.vi lets you set how big this buffer should be; if your computer has more RAM available, DAQmx should be able to allocate a larger buffer.  DAQmx also allows you to configure what to do if this buffer becomes full (this happens if your LabVIEW application does not read data from the buffer as fast as the device writes into it).  You can allow the device to overwrite unread samples in the buffer, or disallow it.  DAQmx defaults to not allowing overwrite, and for most applications this is the correct setting.
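A back-of-the-envelope buffer-sizing helper (plain Python; the rates, durations, and 2-bytes-per-raw-sample figure are illustrative assumptions, not values from the thread):

```python
# DAQmx input-buffer sizes are specified in samples per channel.
def buffer_samples_per_channel(rate_hz, seconds_to_hold):
    """Samples per channel needed to hold a given amount of headroom."""
    return int(rate_hz * seconds_to_hold)

def buffer_bytes(samples_per_channel, n_channels, bytes_per_sample=2):
    # 2 bytes/sample assumes raw 16-bit data; scaled doubles would need 8
    return samples_per_channel * n_channels * bytes_per_sample

# e.g. 2 s of headroom at 250 kS/s per channel on 4 channels
n = buffer_samples_per_channel(250_000, 2.0)
print(n, buffer_bytes(n, 4))  # 500000 4000000
```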

 

The last stage in which you can configure behavior is at read time.  DAQmx allows you to specify how much data to read from the buffer at one time (an input to the DAQmx Read VI).  This setting defaults to -1, which means read all available samples.  It is a default I almost always override: I've found it far more useful and efficient to always read a known-sized block of data from the buffer.  Finally, DAQmx allows you to specify which samples to read from the buffer.  The default is to begin with the first unread sample, and this behavior is correct for most use cases.

 

The final thing you as the programmer have control over is how often DAQmx Read is called.  Ideally, you would call it often enough that you read data from the buffer at the same average rate the hardware writes data into it.  If you have a lot of processing to do on the data you read, and it can't be done post-acquisition, this is an area where upgrading to a faster computer can help.
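The read-loop balance described above can be modeled in a few lines (plain Python; all rates, block sizes, loop periods, and the buffer size are made-up illustrative numbers):

```python
# Toy model of the read loop: hardware adds samples at a fixed rate while
# the application removes one fixed-size block per loop iteration.  The
# host buffer overflows when the loop can't keep up on average.
def simulate(rate_hz, loop_period_s, samples_per_read, buf_size, loops):
    backlog = 0.0
    for _ in range(loops):
        backlog += rate_hz * loop_period_s  # hardware writes via DMA
        if backlog > buf_size:
            return "overflow"               # unread data would be lost
        backlog = max(0.0, backlog - samples_per_read)  # DAQmx Read
    return "ok"

# Reading 25k samples every 100 ms keeps up with 250 kS/s...
print(simulate(250_000, 0.1, 25_000, 500_000, 1000))  # ok
# ...but reading only 10k per loop eventually fills a 500k-sample buffer.
print(simulate(250_000, 0.1, 10_000, 500_000, 1000))  # overflow
```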

 

The DAQ Assistant hides most of these settings from you, which is OK in many cases.  If your application needs them configured, you'll have to use the lower-level DAQmx API VIs (I don't currently have LabVIEW in front of me, so I can't point to specific example VIs at the moment).

 

There's my three minute overview of buffering in DAQmx.  Hopefully the details provided will give you some insight about how all of this works, and what DAQmx allows you to configure.  Hopefully it answers more questions than it raises, but feel free to let me know if I've been unclear (I'm never sure what the right amount of detail is when attempting to describe this).

 

Hope that helps,
Dan

Message 8 of 9

Hi Dan,

 

Thanks for all the information; it was very useful to me.

 

Thanks for your time,

Sine

Message 9 of 9