01-10-2011 12:27 PM
Here is what I've done so far. I've only converted the main arrays, so let me know if I am on the right track. I made one 2D array that has 23 columns and 10,000 rows of zeros. I did use 360,000 rows, but I was running into memory problems when I tried to write to a text file. I use a search-array function to find where the closest zero is and use that index to input the new information.
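If it helps to compare notes, here is a rough numpy sketch of that scheme. The names are my own, and I am assuming "closest zero" means the first all-zero row in the buffer:

```python
import numpy as np

ROWS, COLS = 10_000, 23            # preallocated buffer (360,000 rows blew memory)
buffer = np.zeros((ROWS, COLS))

def insert_row(buf, row):
    """Find the first all-zero row and overwrite it with new data."""
    empty = np.where(~buf.any(axis=1))[0]   # indices of rows that are still all zero
    if empty.size == 0:
        raise IndexError("buffer full; flush to disk and reset")
    buf[empty[0]] = row
    return empty[0]

idx = insert_row(buffer, np.arange(1, COLS + 1))   # writes into row 0
```

Note that searching for an empty row scans the whole buffer every iteration; keeping a simple write-index counter instead would make each insert O(1).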
01-11-2011 10:32 AM
So far what I have done has made things much slower, unfortunately. I don't know what I am doing wrong.
01-12-2011 12:05 AM
Hi Adam,
Well, wow, I have had a look at your Analog Input VI and to be honest I still don't have a clue what you are doing. It would be hard to pinpoint the crux of your problem as there are so many things fundamentally wrong. You are using so many local variables it's hard to keep track of where the data is coming from and where it is going to. The amount of screen space used makes it impossible to track anything; this is actually the first time I have ever used the navigation window. Just as a quick example, in one of your sequence structures you have the following code:
This could be replaced with:
Your array management is bad, but it's hard to tell whether it is the cause of the fault you are describing. There is no doubt it could be improved, but there is so much going on that is wrong it's hard to pinpoint one thing. In the top left hand corner of your code you are constantly writing to approximately 50 property nodes every iteration, whether they are visible or not. I have also not seen a timing function (although the page is so big I may have missed one), so it appears as though your loop will hog the CPU. From what I can see, what you are trying to achieve is not difficult; unfortunately I think you are trying to do it without reading the LabVIEW manual. I think the problem with LabVIEW sometimes is that it is sold as "anyone can do it" software, so people tend to jump straight in without learning how to actually do it. You then end up with spaghetti code which is impossible to debug.
Sorry if I seem harsh, but the only suggestion I can give is that you do the online tutorials and then re-write what you have done from scratch. Even if you do manage to sort out the current problem, I guarantee it won't be the last one.
Wish you the best of luck,
Rgs,
Lucither
01-12-2011 06:29 AM
I understand what you're saying, and I have been using LabVIEW for a few years. I'm still in college (graduating in May) and this project started out small, but as I was writing it they wanted to add more and more. After I would get one aspect working they would want to add to it, so I guess it got out of hand. I did have a wait function in there, but when I took it out it made no difference. So how can I make those arrays more efficient? I created one 2D array and I replace a subset, but this seemed to slow it down. I will change the graph display to something like you posted. Thanks, by the way.
01-13-2011 03:59 AM
OK Adam,
I took a bit of time today to go over your code again, particularly the array management. I made a scaled example of your BD to benchmark; you will find it in the attached files.
This is a simplification of what you are doing. I did it to better understand what was going on. I have only shown one of the 23 arrays you have, A10. From what I can see you have a completely redundant part of your code, the bottom array. Apart from being redundant, it is actually creating an error. To start from the beginning: you read in an input from your hardware, scale it, and then send it to two places, the two array loops shown above. In the bottom loop you insert it at the end of an array of length 10, making it a length of 11, and you then immediately remove the 10th element, which is actually the data you have just input. This bottom loop just spins round full of 10 zeros.
You then feed this array full of zeros to an indicator, and send a copy of it via a local variable to the loop above. That loop starts with an empty array, an initialised array of no length. You insert the incoming array at the end of this empty array, so now it equals the array you have sent (doing nothing). Then on the end of this array (full of zeros) you again add the input data from the hardware. All this does, each iteration, is give you an array of 11 elements, 10 being zeros and the 11th being the new input data. As this array is always greater than 10 elements, you then save it to a file. This happens every iteration. You then blank the array and start again.
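For anyone following the thread without opening the VI, the bottom loop's bug can be sketched in Python like this (my own hypothetical translation of the block diagram):

```python
def bottom_loop_iteration(arr, sample):
    """One iteration of the bottom loop: Insert Into Array at the end,
    then Delete From Array at index 10, which removes the sample just added."""
    arr = arr + [sample]   # length is 11 now
    del arr[10]            # index 10 *is* the new sample, so it is discarded
    return arr             # back to the original 10 zeros

arr = [0.0] * 10
for sample in [1.2, 3.4, 5.6]:
    arr = bottom_loop_iteration(arr, sample)
# arr is still ten zeros: the new data never survives an iteration
```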
Each iteration you also write to at least 100 property nodes (which I counted), regardless of whether they actually need updating.
As I said previously, in an ideal world we would re-write this from scratch, but I understand that for you this is not a good option. As such there are a couple of relatively easy things you can do to vastly improve performance, some easier than others. For a really simple fix, you could completely remove the lower loop, which is actually creating a bug.
Remove the 'Insert into Array' function that was previously adding the 10x 0's. Place your local variable straight into the next 'Insert Into Array'.
With regards to the property node updates, I have attached a small VI that detects transitions from True to False. You can use this as shown above to only write to your 100+ property nodes when there is a change.
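The same True-to-False detection can be written out in text form. In LabVIEW the previous value would live in a shift register or Feedback Node; in this hypothetical Python sketch it lives in an object attribute:

```python
class FallingEdge:
    """Report True only on a True -> False transition of the input."""
    def __init__(self):
        self.prev = False   # plays the role of the shift register

    def __call__(self, current: bool) -> bool:
        fell = self.prev and not current
        self.prev = current
        return fell

edge = FallingEdge()
results = [edge(v) for v in [False, True, True, False, False]]
# results == [False, False, False, True, False]
```

Gating the property-node writes behind a detector like this means you pay the UI-update cost only on the iteration where the state actually changes.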
For even better performance you could:
This is the same as above, but instead we initialise an array to the size we are going to use. We then write over this array, keeping track of our position. When we have overwritten all the data, we log it and start again.
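In Python terms, that preallocate-and-overwrite idea looks roughly like this (my own sketch; the file write is replaced with a list append so the example is self-contained):

```python
import numpy as np

class PreallocatedLogger:
    """Initialise once, overwrite in place, flush when the buffer is full."""
    def __init__(self, size: int):
        self.buf = np.zeros(size)   # allocated once, like Initialize Array
        self.pos = 0                # current write position (shift register)
        self.flushed = []           # stand-in for "write buffer to file"

    def add(self, sample: float):
        self.buf[self.pos] = sample   # Replace Array Subset, no reallocation
        self.pos += 1
        if self.pos == self.buf.size:            # buffer full:
            self.flushed.append(self.buf.copy()) # log it...
            self.pos = 0                         # ...and start again

log = PreallocatedLogger(4)
for s in [1, 2, 3, 4, 5]:
    log.add(s)
# one full buffer [1, 2, 3, 4] flushed; 5 sits at position 0 of the next pass
```

The key point is that the array is never grown or shrunk per sample, so there is no per-iteration reallocation or copying.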
After benchmarking the above three examples, using 10,000 samples the results were:
Your original: 188 seconds!
Mod 1: 48ms
Mod 2: 35ms
Taking only 100 samples the results were:
Your original: 1.3 seconds!
Mod 1: 6ms
Mod 2: 4ms
You are actually trying to take 100 samples per second, but even the stripped down version takes longer than that to finish. You have to remember that the real application is doing far more per iteration; for example, you are actually saving to a file every iteration, caused by the false array sent from your bottom loop.
I would also like to add that you should really try to get rid of all the local variables you are using. The above modifications limit this to only two per array. Fortunately the places you are writing and then reading from almost guarantee that they will be written to before reading, but this could possibly change in the future. The reason I have shown the examples above with the locals still there is because, seeing your BD, I would imagine you will be keeping them.
There are a lot of points I have missed out here, but I have highlighted them on the Front Panels of the examples.
Try making the adjustments I have sent you and let me know how you get on.
Rgs,
Lucither
01-13-2011 09:43 AM
Thanks so much, this is the help I have been looking for. This program has pretty much spun out of control and now I can hopefully get it back on track. I'll start making these modifications. Thanks again.
Adam
01-26-2011 09:18 AM - edited 01-26-2011 09:21 AM
I ended up rewriting the whole thing and now I can get over 100 samples/sec. Here is my updated project. The project is under Scan Engine Trick\PMD
01-26-2011 09:30 AM
@Adam@Universal wrote:
Thanks so much, this is the help I have been looking for. This program has pretty much spun out of control and now I can hopefully get it back on track. I'll start making these modifications. Thanks again.
Adam
This is a perfect example of why it is always a good idea to think about expansion when developing code. I know it has saved my butt more than once. You start a project and never anticipate it changing or growing. Next thing you know you are doing exactly that and the code quickly devolves and becomes true spaghetti code. A little up front planning greatly decreases the chance of this happening. When developing code you should always keep code reuse and maintenance in the back of your mind.