LabVIEW

multicore performance

Yes, I agree that the Core 2 should be significantly faster independent of GHz. I assumed that this was and is a faster PC, but the PCI transfers seem to be running slower. I essentially have to send a large digital-output buffer of data while keeping a continuous digital-input task reading to avoid a buffer overrun. I cannot stop the DI task, since I am reading the device's responses to the digital-output command stream. This works on my old PC, just not on the new one. That leads me to believe that the write to the digital-output task is slower (when I write an array smaller than 5 MB to the buffer there is no problem, but 5 MB takes too long and I get a buffer overrun). I will check on DMA vs. programmed I/O. The device is a PCI-6534; I assume that it is already set for DMA? Thanks for all the help.
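
For reference, the check I have in mind looks roughly like this in the DAQmx C API (untested sketch; "Dev1/port0" stands in for the actual 6534 port, and the LabVIEW equivalent would be the data transfer mechanism setting on the DAQmx Channel property node):

    /* Sketch: query and force DMA on a digital output task (NI-DAQmx C API).
       "Dev1/port0" is a placeholder for the actual PCI-6534 port. */
    #include <NIDAQmx.h>
    #include <stdio.h>

    int main(void)
    {
        TaskHandle doTask   = 0;
        int32      xferMech = 0;

        DAQmxCreateTask("", &doTask);
        DAQmxCreateDOChan(doTask, "Dev1/port0", "", DAQmx_Val_ChanForAllLines);

        /* Read back the current data transfer mechanism for the channel. */
        DAQmxGetDODataXferMech(doTask, "Dev1/port0", &xferMech);
        printf("transfer mechanism: %s\n",
               xferMech == DAQmx_Val_DMA ? "DMA" : "not DMA");

        /* Force DMA explicitly (normally the driver should pick DMA if available). */
        DAQmxSetDODataXferMech(doTask, "Dev1/port0", DAQmx_Val_DMA);

        DAQmxClearTask(doTask);
        return 0;
    }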

 

Paul

Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 11 of 15
For the 6534 you have a 32 MB onboard buffer, and on top of that you can define a Windows buffer of up to 16 MB. At 5 MHz, 32 MB already provides about 6 seconds of data. Have you thought of filling the buffer and only starting the generation once the buffer is prefilled? What happens if you have a sequence of less than 32 MB of data - does the card still hit a buffer overrun? If not, your online computation is too heavy. If yes, then my wild guess would be a hardware problem, or a misconfiguration of the motherboard with respect to the card.
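
What I mean by prefilling, roughly, in the DAQmx C API (untested sketch; channel name, rate and size are placeholders for your actual setup):

    /* Sketch: prefill the DO buffer, then start generation (NI-DAQmx C API). */
    #include <NIDAQmx.h>
    #include <stdlib.h>

    #define NSAMPS (4u * 1024 * 1024)   /* illustrative: 4 M samples */

    int main(void)
    {
        TaskHandle doTask  = 0;
        int32      written = 0;
        uInt32    *pattern = calloc(NSAMPS, sizeof(uInt32));  /* your waveform here */

        DAQmxCreateTask("", &doTask);
        DAQmxCreateDOChan(doTask, "Dev1/port0", "", DAQmx_Val_ChanForAllLines);
        DAQmxCfgSampClkTiming(doTask, "", 5e6, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, NSAMPS);

        /* autoStart = FALSE: this only fills the buffer, nothing is generated yet. */
        DAQmxWriteDigitalU32(doTask, NSAMPS, 0, 10.0, DAQmx_Val_GroupByChannel,
                             pattern, &written, NULL);

        /* Generation starts only now, with the buffer already filled. */
        DAQmxStartTask(doTask);
        /* ... run the sequence, then DAQmxStopTask / DAQmxClearTask ... */

        DAQmxClearTask(doTask);
        free(pattern);
        return 0;
    }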
 
 
 
-----------------------------------------------------------------------------------------------------
... And here's where I keep assorted lengths of wires...
Message 12 of 15
The out is clocked at 5 MHz and the in is clocked (external source) at about 9 MHz, so the input is coming in at 9 MB/sec. This should fill the buffer in about 2 seconds if the buffer is split 16 MB for in and 16 MB for out. I am using DAQmx, and the documentation says that the buffers are set automatically; I guess I should override this? The docs say the buffer is set to 1 MB for sample rates faster than 1 MHz, and if that is what is happening the buffer would overrun in about 0.1 seconds, which is bad. Is there any documentation on the buffers and how they are allocated? Ideally I could set the in buffer to 16 or 24 MB and the out buffer to 8 MB. The out buffer is updated periodically to change the data in the buffer with 0.5-5 MB waveforms, and the in buffer is continuously read at about 10-20 Hz. The problem arises when I write 5 MB of data: the in read rate slows and the buffer overruns. If I had buffers as large as 16 MB this should not happen - there should never be more than 1 second between reads (I have 3 loops, master/slave/slave, to handle the processing and give the read priority). So it looks like my buffer is too small. It could be that, because the clock is external, DAQmx's automatic buffer sizing is not guessing large enough.
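
If the automatic sizing really is the issue, I suppose the override would look something like this in the C API (untested sketch; in LabVIEW the same thing is the DAQmx Configure Input/Output Buffer VI or the Buffer property node, and the sizes here are just illustrative):

    /* Sketch: override DAQmx automatic buffer sizing (NI-DAQmx C API).
       Sizes are in samples per channel; tasks are assumed already configured. */
    #include <NIDAQmx.h>

    void configure_buffers(TaskHandle diTask, TaskHandle doTask)
    {
        /* Host-memory buffers, instead of whatever the driver picks automatically. */
        DAQmxCfgInputBuffer(diTask, 16u * 1024 * 1024);   /* ~16 M samples in  */
        DAQmxCfgOutputBuffer(doTask, 8u * 1024 * 1024);   /*  ~8 M samples out */
    }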
 
Paul 
Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 13 of 15
I can't comment on HSDIO, but some thoughts about dual core. If you don't use timed loops and bind them to a specific core, applications might run slower on dual-core machines because of core switching. Windows, and as far as I have observed also LabVIEW Real-Time, will switch processes between the different cores. In total, the load of one process is at most 100% (unless it is programmed for multicore). The 70%/30% load that Gabi observed shows exactly this behaviour: it is not LabVIEW making use of the two cores, but Windows switching the execution between the cores. This usually slows down execution (the cores have their own caches, so data needs to be transferred between them). You can manually bind a process to one core (right-click the process in Task Manager and set its affinity).
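
For completeness, the same thing the Task Manager right-click does can also be done from code; a minimal Win32 sketch (plain C, not LabVIEW) that pins the current process to core 0:

    /* Sketch: pin the current process to CPU 0 (Win32), the programmatic
       equivalent of Task Manager's "Set Affinity..." right-click. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Bit 0 set = only logical processor 0 may run this process. */
        if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1))
            printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

        /* ... the rest of the application now runs on a single core ... */
        return 0;
    }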

I recently tested a LabVIEW Real-Time 8.5 application on a Core 2 Duo at 2x2.0 GHz. With dual core enabled in the BIOS, the application ran significantly slower than on a Pentium M 2.0 GHz. With dual core disabled, it ran at exactly the same speed. I haven't had time yet to perform tests using timed loops with core assignment. Things may look similar on Windows.

So my recommendation would be to bind the process to one core as long as you don't use multicore programming techniques.

Hope this helps,
Daniel

Message 14 of 15

In my case (and Falkpl's, as I understand it) the problem is definitely the card: I run into the same buffer overrun problem at too-high clock rates, even though I monitor my buffer to make sure there is plenty of buffered data. This problem consistently occurs at lower clock rates on dual-core machines than on single-core ones. I therefore assume it is a hardware-related problem, involving the combination of the motherboard and the card.

Falkpl: you can use the DAQmx Write property nodes to set and monitor all the buffers - a very useful feature. To see how I operate, you can look there for a short demo of it. Be aware that you can also monitor while writing to the buffer.
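
If you are working outside LabVIEW, the C-API counterparts of those property nodes are the DAQmxGet... buffer and write property functions; a rough snapshot sketch (diTask/doTask assumed already configured):

    /* Sketch: C-API counterparts of the DAQmx Read/Write property nodes. */
    #include <NIDAQmx.h>
    #include <stdio.h>

    void print_buffer_status(TaskHandle diTask, TaskHandle doTask)
    {
        uInt32 backlog = 0, space = 0, outBuf = 0;

        DAQmxGetReadAvailSampPerChan(diTask, &backlog);  /* input samples waiting to be read */
        DAQmxGetBufOutputBufSize(doTask, &outBuf);       /* output buffer size               */
        DAQmxGetWriteSpaceAvail(doTask, &space);         /* free space in that buffer        */

        printf("DI backlog: %u   DO buffer: %u of %u samples free\n",
               backlog, space, outBuf);
    }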

In my case I read these values every 50 ms or so and send them through an AE (action engine) to a pop-up window that displays horizontal bar indicators, very much the same way a CD burner shows its writing status. It has two indicators: one for the buffer, and one for the number of written elements (the maximum being the expected number of DO samples). Here is how the sequence runs: the buffer fills up to 2 MB; the experiment starts; the buffer fills faster than it gets emptied; the buffer fills to 16 MB and is updated as the sequence is running; when no more data has to be inserted, the buffer slowly empties while the written-elements indicator reaches its maximum value. A buffer overrun is easy to observe: the buffer is not empty, but no data is being written to the card anymore.
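
Roughly, the 50 ms polling and the "buffer not empty but nothing reaches the card anymore" check could look like this (C sketch against the same property functions; the AE/pop-up display part is left out):

    /* Sketch: poll the DO write status every ~50 ms and flag a stalled generation
       (data still buffered, but the total-generated counter stops advancing). */
    #include <NIDAQmx.h>
    #include <windows.h>
    #include <stdio.h>

    void monitor_do_task(TaskHandle doTask, uInt64 expectedSamples)
    {
        uInt32 bufSize = 0, spaceAvail = 0;
        uInt64 generated = 0, lastGenerated = 0;

        DAQmxGetBufOutputBufSize(doTask, &bufSize);

        while (generated < expectedSamples) {
            DAQmxGetWriteSpaceAvail(doTask, &spaceAvail);
            DAQmxGetWriteTotalSampPerChanGenerated(doTask, &generated);

            /* Buffer still holds data but nothing new reached the card: stalled. */
            if (spaceAvail < bufSize && generated == lastGenerated && lastGenerated > 0)
                printf("possible stall: %u samples still buffered, none generated\n",
                       bufSize - spaceAvail);

            lastGenerated = generated;
            Sleep(50);   /* poll every ~50 ms, as described above */
        }
    }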



Message Edited by Gabi1 on 02-02-2008 08:09 PM
-----------------------------------------------------------------------------------------------------
... And here's where I keep assorted lengths of wires...
Message 15 of 15