I'm using a PCI-6036E with Labview v6.1, and I'm doing quite a bit of
I/O. I have 16 channels of single-ended analog data input, 1 channel of
analog data output, 3 digital input channels, and 1 digital output
channel. Within my code, I'm doing a lot of matrix multiplication, and
I have 2 sequence structures (nested). My problem is that my max
sampling rate is approximately 16 Hz, which is slower than I'd hoped.
Is there any way to get better speed, or am I just asking too much
considering the number of I/O channels I'm dealing with?
You can indeed achieve higher sampling rates on your AI channels if you optimize your code. Here are a few tips:
Where your AI, AO, and/or DIO operations need to run in lockstep, synchronize them as much as possible.
If you do not need any synchronization between these functions at all, keep them in separate while loops so that each process runs independently of the others.
If possible, make your matrix multiplication/data manipulation an offline process: perform the acquisition at your desired rate, save the raw data to a file, and do the post-processing on that data afterwards. This is an option only if you do not have to display the processed data online.
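LabVIEW is graphical, so the split can't be shown as text, but the acquire-then-post-process idea looks like this in a Python sketch (the random samples stand in for your DAQ reads, and the sum of squares stands in for your matrix math):

```python
import csv
import random

# Stage 1: acquisition only -- no math in the loop, just write raw
# samples straight to disk so the loop can run at full rate.
with open("raw_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for _ in range(100):
        sample = [random.random() for _ in range(16)]  # 16 AI channels
        writer.writerow(sample)

# Stage 2 (offline, after the run): read the file back and do the
# heavy processing with no timing constraints.
with open("raw_data.csv", newline="") as f:
    rows = [[float(x) for x in row] for row in csv.reader(f)]
processed = [sum(x * x for x in row) for row in rows]
print(len(processed))  # one result per acquired sample
```

The point is simply that the acquisition loop body contains nothing but the read and the file write; everything expensive happens after acquisition has finished.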
If you do need to display your data online, dedicate one loop to acquisition and perform your matrix multiplication in another loop, passing data between the two (e.g. using local variables). Offhand, I cannot recall which multi-loop synchronization mechanisms were available in LabVIEW 6.1; if you find any such examples shipping with 6.1, try using them to achieve what I have described.
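The two-loop idea above, sketched in Python purely as an analogy (LabVIEW is graphical; here a thread-safe queue plays the role of whatever loop-to-loop transfer mechanism your LabVIEW version offers, and the sum of squares stands in for your matrix math):

```python
import queue
import random
import threading

def acquisition_loop(data_q, done, n_samples=50):
    # Producer: acquire at the desired rate and hand samples off
    # immediately, so the heavy math never slows this loop down.
    for _ in range(n_samples):
        sample = [random.random() for _ in range(16)]  # 16 AI channels
        data_q.put(sample)
    done.set()  # signal the consumer that acquisition has finished

def processing_loop(data_q, done, results):
    # Consumer: runs independently, pulling samples as they arrive.
    while not (done.is_set() and data_q.empty()):
        try:
            sample = data_q.get(timeout=0.1)
        except queue.Empty:
            continue
        results.append(sum(x * x for x in sample))

data_q = queue.Queue()
done = threading.Event()
results = []
producer = threading.Thread(target=acquisition_loop, args=(data_q, done))
consumer = threading.Thread(target=processing_loop, args=(data_q, done, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(results))  # every acquired sample gets processed
```

The design point is that the acquisition loop never waits on the processing loop; it just enqueues and moves on, which is exactly what the dedicated-acquisition-loop pattern buys you in LabVIEW.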