
Transpose large 2D array, memory problem

Your best disk performance on a Windows machine is going to occur when you read and write chunks of about 65,000 bytes.  Doing this should give you disk-limited performance.  On the machine you mention above, you should be able to do the transpose of a 300 MByte 2D array in substantially less than a minute.  Your limit will be disk write speed, since a 3 GHz machine can transpose 65,000 bytes of data much faster than the disk can read/write it.  Use multiple loops: one to read the data, maybe one to transpose it, and one to write it.  Pass the data in chunks between the loops using a queue.  You can pass data-positioning information the same way.
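As a rough illustration of the multi-loop pattern (Python stands in for the LabVIEW loops here; the chunk size, file names, and queue depth are all assumptions):

```python
import queue
import threading

CHUNK = 65_000  # ~65 kB reads/writes tend to give disk-limited throughput

def reader(path, q):
    """Producer loop: read the source file in chunks and enqueue them."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            q.put(chunk)
    q.put(None)  # sentinel: tells the consumer there is no more data

def writer(path, q):
    """Consumer loop: dequeue chunks and write them out sequentially."""
    with open(path, "wb") as f:
        while (chunk := q.get()) is not None:
            f.write(chunk)

q = queue.Queue(maxsize=8)  # bounded queue keeps memory use small
t = threading.Thread(target=reader, args=("src.dat", q))  # placeholder file names
t.start()
writer("dst.dat", q)
t.join()
```

A transpose stage would be a third loop sitting between two such queues, reordering each chunk before passing it on.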

One last thought.  Make sure you defrag your disk first.  You can lose a lot of performance with a highly fragmented disk.
Message 11 of 18
1,623 Views

DFgray,

You have a scary understanding of this issue.  Past experience?

Matt

Matthew Fitzsimons

Certified LabVIEW Architect
LabVIEW 6.1 ... 2013, LVOOP, GOOP, TestStand, DAQ, and Vision
0 Kudos
Message 12 of 18
1,618 Views
To: DFGray
Thank you for the suggestion, I will try it and give you feedback.
Just one thing: in the past I compared queues with LV2-style global variables (using shift registers) and found that queues were extremely slow.
0 Kudos
Message 13 of 18
1,618 Views
By the way,
I am doing this using LV7.1.
I tried to use LV8 and found out that it gives me a "memory full" error much more often than LV7.1.
Is it some problem with my settings, or something strange with LV8?
0 Kudos
Message 14 of 18
1,614 Views
To: DFGray
I have tried multiple loops and tried to use queues. The maximum difference in processing time (compared to my previous method) was less than 10%.
So it didn't really help.

Message Edited by jonni on 05-19-2006 05:52 AM

0 Kudos
Message 15 of 18
1,595 Views
What algorithm are you using to get data to and from the disk?  The more I thought about this, the less I liked it.  My previous post is only partially true.  If you are doing a chunked transpose from disk to disk with small memory use, you end up being forced to use small reads and writes, which really kill your speed.  You would probably be better off sucking the entire thing into memory (use the techniques in the tutorial referenced below to avoid copies), then writing it out again.  If you can get the whole thing in memory, you can optimize for 65,000 byte chunks both in and out.  Without all of it in memory, you will be forced into suboptimal read or write lengths.  Make sure you allocate your in-memory array once (use Initialize Array), then use Replace Array Subset to fill it.  Repeated memory allocations are very costly in terms of time.
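In NumPy terms, the allocate-once / replace-in-place pattern might look like this (a hypothetical sketch; the shape and file names are made up):

```python
import numpy as np

ROWS, COLS = 10_000, 7_500   # hypothetical shape: ~300 MB of float32

# Allocate the full buffer once (the analogue of Initialize Array)...
data = np.empty((ROWS, COLS), dtype=np.float32)

with open("src.dat", "rb") as f:            # "src.dat" is a placeholder
    for r in range(ROWS):
        # ...then fill it in place (the analogue of Replace Array Subset),
        # never growing or reallocating the array.
        data[r, :] = np.fromfile(f, dtype=np.float32, count=COLS)

# With everything in memory, the transpose plus one big sequential write
# can both run at close to disk speed.
data.T.tofile("dst.dat")    # ndarray.tofile always writes in C order
```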

My experience with queues is that they are easier to use for data passing than LV2 globals, since they do not require any setup.  For small data, they are up to an order of magnitude faster.  For large data, it is a wash and depends on your coding skill.  One extra data copy can kill you.  It is easier to avoid extra copies with a queue, since the queue essentially passes a pointer to the array when the element is an array (assuming the array wire is not split).  Your mileage may vary...  However, given my comments in the previous paragraph, an LV2 global is probably the way to go for this problem.
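As a rough illustration of the no-copy behavior (in Python rather than LabVIEW; the buffer size is arbitrary), a queue hands over a reference to the data instead of duplicating it:

```python
import queue

q = queue.Queue()
big = bytearray(100_000_000)   # ~100 MB buffer

q.put(big)                     # enqueues a reference, not a copy
out = q.get()
print(out is big)              # True: the same object came out, no data was copied
```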

LabVIEW 8 takes up more space in memory and fragments memory more than LabVIEW 7.1 due to its increased feature set.  You can usually allocate a single contiguous buffer of about 1 GByte in LV7.1, but only about 800 MBytes in LV8.0.  You can work around this issue somewhat by breaking your data into multiple arrays so LabVIEW can use the fragmented memory; a single array in LabVIEW always uses contiguous memory space.  Due to a variety of reasons, you will probably never be able to use more than about 1.4 GBytes on a Windows 2000/XP machine, even exploiting the fragmented space.  For details, see the tutorial Managing Large Data Sets in LabVIEW.
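For illustration only, the multiple-arrays workaround might be sketched like this in Python/NumPy (the block size and overall shape are invented; NumPy arrays, like LabVIEW arrays, each need one contiguous allocation, which is exactly what the list of blocks sidesteps):

```python
import numpy as np

TOTAL_ROWS, COLS = 80_000, 7_500   # invented overall shape
BLOCK = 10_000                     # rows per sub-array (invented)

# One huge array needs one huge contiguous allocation; a list of smaller
# blocks lets each piece land in a separate free region of memory.
blocks = [np.empty((BLOCK, COLS), dtype=np.float32)
          for _ in range(TOTAL_ROWS // BLOCK)]

def get_row(i):
    """Index the blocked structure as if it were one large 2D array."""
    return blocks[i // BLOCK][i % BLOCK]
```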
Message 16 of 18
1,572 Views
More than I care to admit.
0 Kudos
Message 17 of 18
1,571 Views
Hello jonni,

do you have enough RAM to read in the data all at once and then save it one column after the other (as rows)? That way you only need 300 MB for the whole data set, plus approx. 25 MB for the one column being saved...
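A rough Python/NumPy sketch of that approach (the dimensions are invented to match the ~300 MB total / ~25 MB per column figures above, and the file names are placeholders):

```python
import numpy as np

ROWS, COLS = 6_250_000, 12   # invented shape: ~300 MB total, ~25 MB per column

# Read the whole data set into memory in one pass...
data = np.fromfile("src.dat", dtype=np.float32).reshape(ROWS, COLS)

# ...then write it back out one column at a time, each column becoming a row.
with open("dst.dat", "wb") as f:
    for c in range(COLS):
        np.ascontiguousarray(data[:, c]).tofile(f)  # ~25 MB per write
```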
 
Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
0 Kudos
Message 18 of 18
1,564 Views