06-05-2013 11:13 AM
I am going to acquire up to 5 samples per second on 50 channels over 30 minutes, then average ten of the channels and generate a report. There are other bits of information needed to complete my task and the graphs. Would it be "faster" for the PC to have one large 4-D array with all the information, or faster to make several separate arrays? And would one or many be more reliable (fewer code lockups)?
Also, yes, I do mean to be using LabVIEW. I expected this crowd to have a different perspective (and I could not find an LV blog).
For each channel, I want to scale the readings using six other variables per channel (nominal high, mid, and low, plus reading high, mid, and low). So there are the channels and time, then the calibration values, then the calibrated readings. If I put all of that, along with my test information, into one large matrix, it would be simpler for me to remember where each item lives; but if it is split into several matrices, the "active" matrix stays smaller while the other information sits unused.
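To make the scaling concrete, here is roughly what I mean, written as Python-style pseudocode since I cannot paste a block diagram here (the numbers are hypothetical, and the two-point linear fit is just one possible scaling; I have not decided the actual math yet):

    # One channel's six calibration values (hypothetical numbers):
    nom_high, nom_mid, nom_low = 10.0, 5.0, 0.0   # nominal points
    rdg_high, rdg_mid, rdg_low = 9.8, 5.1, 0.2    # what the channel actually read

    def scale(raw):
        # Straight-line fit through the low and high points;
        # the mid pair could verify linearity or drive a piecewise fit.
        gain = (nom_high - nom_low) / (rdg_high - rdg_low)
        return nom_low + gain * (raw - rdg_low)

    print(scale(5.1))   # ~5.10, close to nominal mid, so this channel is near-linear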
The sales rep for NI was indicating that the computing power is much higher than I am used to. (I have not started programming yet; I am preparing for a huge project on a "new" (two-year-old) PC.) I am trying to understand just how much power I have been missing since my days of GW-BASIC and QBasic, and our current Visual Basic 6.0 running on XP. This matrix question is new to me.
06-05-2013 11:50 AM
I don't see where you are getting 4 dimensions for your array. I only see two: channel and sample.
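To put numbers on it, here is a rough Python sketch (I can't draw G code here; sizes assume double-precision samples):

    # 5 samples/s * 60 s * 30 min = 9000 samples per channel, 50 channels.
    samples_per_channel = 5 * 60 * 30   # 9000
    channels = 50
    bytes_total = channels * samples_per_channel * 8   # 8-byte doubles
    print(bytes_total / 1e6)            # ~3.6 MB -- trivial on any modern PC

    # The acquisition buffer is just a 2-D array: [channel][sample].
    data = [[0.0] * samples_per_channel for _ in range(channels)]

    # Averaging ten of the channels is a reduction over that same array.
    ten = list(range(10))   # indices of whichever ten channels get averaged
    avg = [sum(data[ch][i] for ch in ten) / len(ten)
           for i in range(samples_per_channel)]

So the raw data itself is nowhere near big enough to worry about speed.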
From what you are describing, I would put any "support" information for a channel into a cluster. So you should have an array of clusters to contain your support data.
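In text form it would look something like this, with Python standing in for G (in LabVIEW the cluster would be a type-defined control; the field names and values here are just examples):

    from dataclasses import dataclass

    @dataclass
    class ChannelInfo:          # the "cluster": one channel's support data
        name: str
        nominal_high: float
        nominal_mid: float
        nominal_low: float
        reading_high: float
        reading_mid: float
        reading_low: float

    # One cluster per channel -> an array (list) of clusters.
    support = [ChannelInfo(f"CH{i}", 10.0, 5.0, 0.0, 9.9, 5.0, 0.1)
               for i in range(50)]

    # The acquired samples stay in the plain 2-D array; both structures are
    # indexed by the same channel number, e.g. support[7] goes with data[7].

That keeps the "active" data array small and flat, while all the per-channel calibration and bookkeeping travels together in one self-describing record.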