Actually, I have two questions:
1) If I have an array containing a million clusters, but I am only using cluster #999,999, is LabVIEW intelligent enough to allocate memory just for that one cluster, or does it allocate memory for all one million positions based on the size of the one I'm using?
2) Based on the answer to #1, how do I calculate the memory footprint of this array of clusters? Does a cluster carry any overhead, or is its size just the sum of the sizes of its elements (e.g., 10 double-precision numerics)?
Please let me know if anything in my questions is unclear.