I am sending data to my hardware in fairly small chunks, arrays on the order of 65 elements. I need to send a data stream to select one of many multiplexed devices, send a data stream, and then read a data stream back. I am using a high-speed 6533 PXI card. I am pleased with the actual output performance of the card, but I find I am limited by the DIG_Block_Out call itself, which is taking about 4 milliseconds. The sequence I am using is: a DIG_Block_Out call with about 32 elements to select a unit, a 65-element output data stream, a 65-element data read, and then the appropriate clock signals in another output stream. I have set up some timers and am measuring about 4 ms for each DIG_Block_Out call. Since I am only sending relatively small arrays at a time, I did not want to complicate things with double buffering and the like. Is a DIG_DB_Transfer call faster? Am I going to have to go down the register-level programming path? Thanks, schudon
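For reference, a minimal sketch of the transaction sequence I described, using the Traditional NI-DAQ C API. The device and group numbers, buffer contents, and the use of DIG_Block_In for the read side are placeholders/assumptions; the group-configuration calls (DIG_Grp_Config etc.) are omitted, and the exact signatures should be checked against your nidaq.h:

```c
#include "nidaq.h"   /* Traditional NI-DAQ header (i16/u32 types, DIG_* calls) */

/* Hypothetical sketch of one transaction: select a mux'd unit, write a
   65-element stream, arm a 65-element read, then clock the data out.
   Each DIG_Block_Out call below carries the ~4 ms per-call overhead
   I am measuring, so one transaction pays it three times. */
void do_transaction(i16 device)
{
    i16 grpOut = 1, grpIn = 2;                 /* placeholder group numbers */
    static i16 selectBuf[32], outBuf[65], inBuf[65], clockBuf[65];
    u32 remaining;

    DIG_Block_Out(device, grpOut, selectBuf, 32);   /* select one mux'd unit */
    DIG_Block_Check(device, grpOut, &remaining);    /* wait for completion */
    DIG_Block_Clear(device, grpOut);

    DIG_Block_Out(device, grpOut, outBuf, 65);      /* 65-element write */
    DIG_Block_Check(device, grpOut, &remaining);
    DIG_Block_Clear(device, grpOut);

    DIG_Block_In(device, grpIn, inBuf, 65);         /* arm 65-element read */
    DIG_Block_Out(device, grpOut, clockBuf, 65);    /* clocks driving the read */
    DIG_Block_Check(device, grpOut, &remaining);
    DIG_Block_Clear(device, grpOut);
    DIG_Block_Clear(device, grpIn);
}
```

This is only meant to make the call pattern concrete; it will not compile or run without the NI-DAQ driver and a configured 6533.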
I am having a similar problem. I am trying to send chunks of data, two hundred 16-bit words, to a device, and it is taking about 4.5 msec between chunks using the DIG_Block_Out (~3.7 msec), DIG_Block_Check (~0.0 msec), and DIG_Block_Clear (~0.8 msec) functions. I am running this on a Windows NT 4.0 workstation with NI-DAQ version 6.9. What could be going on in DIG_Block_Out that is taking so long?
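Here is a sketch of how I am timing the three-call sequence, using the Win32 QueryPerformanceCounter clock (this is NT 4.0). The device and group numbers are placeholders, and I am assuming the standard Traditional NI-DAQ signatures; check them against your copy of nidaq.h:

```c
#include <windows.h>   /* QueryPerformanceCounter / Frequency */
#include <stdio.h>
#include "nidaq.h"     /* Traditional NI-DAQ: i16/u32 types, DIG_* calls */

/* Milliseconds between two QueryPerformanceCounter readings. */
static double elapsed_ms(LARGE_INTEGER a, LARGE_INTEGER b, LARGE_INTEGER f)
{
    return 1000.0 * (double)(b.QuadPart - a.QuadPart) / (double)f.QuadPart;
}

/* Send one 200-word chunk and report where the ~4.5 msec goes. */
void send_chunk(i16 device, i16 group, i16 *words, u32 count)
{
    LARGE_INTEGER f, t0, t1, t2, t3;
    u32 remaining;

    QueryPerformanceFrequency(&f);

    QueryPerformanceCounter(&t0);
    DIG_Block_Out(device, group, words, count);     /* ~3.7 msec observed */
    QueryPerformanceCounter(&t1);
    do {                                            /* ~0.0 msec observed */
        DIG_Block_Check(device, group, &remaining);
    } while (remaining != 0);
    QueryPerformanceCounter(&t2);
    DIG_Block_Clear(device, group);                 /* ~0.8 msec observed */
    QueryPerformanceCounter(&t3);

    printf("out %.2f ms, check %.2f ms, clear %.2f ms\n",
           elapsed_ms(t0, t1, f), elapsed_ms(t1, t2, f), elapsed_ms(t2, t3, f));
}
```

The DMA of 200 words itself should take well under a millisecond at the 6533's rates, so nearly all of the 3.7 msec appears to be per-call setup inside DIG_Block_Out, not the transfer.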
I tried using double-buffered mode to output my 200-word (16-bit) chunks of data. The time between the outputs of the chunks averaged about 1.06 msec, of which the DIG_DB_Transfer call itself used about 0.8 msec. I started a 400-word transfer and then kept writing 200-word half-buffers in a loop. The single-buffered DIG_Block_Out is still my preferred way to do the outputs, but the 4 msec delay will be too long.
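The double-buffered loop above can be sketched roughly as follows. This assumes the Traditional NI-DAQ double-buffering calls (DIG_DB_Config, DIG_DB_HalfReady, DIG_DB_Transfer); the device/group numbers and DIG_DB_Config arguments are placeholders, group configuration and error checking are omitted, and the signatures should be verified against your nidaq.h:

```c
#include "nidaq.h"   /* Traditional NI-DAQ: i16/u32 types, DIG_* calls */

#define HALF 200     /* 16-bit words per half-buffer */

/* Start a 400-word circular (double-buffered) output, then keep feeding
   200-word half-buffers in a loop, as described above. */
void stream_chunks(i16 device, i16 group, int nChunks)
{
    static i16 circBuf[2 * HALF];  /* 400-word circular buffer */
    static i16 half[HALF];         /* next 200-word chunk to send */
    i16 halfReady;
    int i;

    DIG_DB_Config(device, group, 1 /* enable double buffering */, 0, 0);
    DIG_Block_Out(device, group, circBuf, 2 * HALF);  /* 400-word transfer */

    for (i = 0; i < nChunks; i++) {
        /* ...fill half[] with the next chunk of data here... */
        do {                       /* spin until a half-buffer is free */
            DIG_DB_HalfReady(device, group, &halfReady);
        } while (!halfReady);
        DIG_DB_Transfer(device, group, half, HALF);   /* ~0.8 msec observed */
    }
    DIG_Block_Clear(device, group);
}
```

The win here is that the expensive buffer setup happens once, in the single DIG_Block_Out call; each subsequent DIG_DB_Transfer only copies a half-buffer into the already-running transfer, which is why the per-chunk time drops from ~4 msec to ~1 msec.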