LabVIEW


making code more efficient

I am having a lot of trouble getting my code to run fast enough. I have 4 sonic anemometers, and currently my code is only efficient enough to collect data from one. I have programs that run 2 sonic anemometers and save the data, but bytes pile up at the port. The instruments are in unprompted mode and send data at 10 Hz. I find that using the Wait function does not work well for some reason, so I have the loop running continuously. The first version of my code (V3a) worked for one sonic, and bytes did not pile up at the port. So I made V3b and tried to write a more efficient program. I tried separating things into multiple loops, but it still does not work well, and I was hoping to get some ideas to make things work better.

 

 

I attached the 2 versions of my code. I am not sure if I should attach the subVIs; let me know.

 

Thanks!

Message 1 of 13

The second version is much better than the first. You have split everything into producer and consumer loops.

 

 

I noticed that you do not have any wait time in any of your loops. You need to add a wait of at least 10 ms to each loop to help make this more functional. An event structure might help as well.
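Since a block diagram can't be typed into a post, here is a rough Python sketch of what the wait buys you (read_available, handle, and should_stop are made-up placeholders for the serial read, the processing, and the stop button):

import time

def polling_loop(read_available, handle, should_stop):
    # A LabVIEW while loop with a 10 ms Wait, in spirit: do the work,
    # then sleep so the loop doesn't monopolize a CPU core.
    while not should_stop():
        data = read_available()   # stands in for the serial/VISA read
        if data:
            handle(data)
        time.sleep(0.010)         # the "Wait (ms)" with 10 wired to it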

 

I would also look at adding a state machine to this system.
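If it helps to see the shape of a state machine in text form, here is a minimal Python sketch (the state names and the stop_requested callable are invented for illustration; in LabVIEW this would be a while loop around a case structure fed by a shift register):

def run_state_machine(stop_requested):
    # stop_requested is a hypothetical callable, e.g. tied to a stop button.
    state = "init"
    while state != "done":
        if state == "init":
            state = "acquire"                    # open ports, create files, etc.
        elif state == "acquire":
            state = "log"                        # read whatever the instruments sent
        elif state == "log":
            state = "done" if stop_requested() else "acquire"   # save, then loop or finish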

 

Tim
GHSP
Message 2 of 13

I agree with aeastet, the second VI is better. A few suggestions:

 

Use a small time delay (Wait ms) in each loop to avoid using up 100% CPU time. 

 

Instead of using the local variables for the appended array, use a queue. If you enqueue it in the top loop and then dequeue it (more on that later) in the other loops, you can eliminate the top loop's sequence structure. Same for the x-y locals. Since you have two loops wanting the same data, you cannot simply dequeue, because one loop will get the data and the other loop will miss it. You can use Preview Queue Element in one loop, use a notifier to trigger the second loop, then dequeue in the second loop.

 

If you use queues, then you can also eliminate the Set Occurrence in the top loop. To handle two variables in the queue, you have a couple of choices. Use a 1D array for the queue data type, build the array in the top loop with x-y as the first element and the appended array as the second one, and when dequeuing, index the array to extract the data in the same order. Or you could make a cluster with the two data points. This would be my preferred method, because you can put names on the cluster elements using Bundle By Name and Unbundle By Name; then there would be no confusion as to which element is which. Also, you would have to set an occurrence in the loop that previews the queue and Wait on Occurrence in the loop that dequeues the data, because you don't want to dequeue before previewing.
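For anyone more comfortable reading text code, here is a rough Python sketch of the idea. The namedtuple stands in for the cluster, and because Python's Queue has no preview/peek, the sketch swaps in one queue per consumer instead of the Preview Queue Element + notifier trick; read_sample is a made-up placeholder for the acquisition:

import queue
from collections import namedtuple

# The "cluster": named fields, like Bundle By Name / Unbundle By Name.
Sample = namedtuple("Sample", ["xy", "appended_array"])

# One queue per consumer is the simplest text-language way to share
# one stream of data between two loops.
plot_queue = queue.Queue()
file_queue = queue.Queue()

def producer(read_sample):
    while True:                      # acquisition loop
        s = read_sample()            # hypothetical: returns a Sample
        plot_queue.put(s)
        file_queue.put(s)

def display_consumer():
    while True:
        s = plot_queue.get()         # blocks until the producer enqueues
        print("x-y:", s.xy)          # attribute access ~ Unbundle By Name

def logging_consumer(log_file):
    while True:
        s = file_queue.get()
        log_file.write(repr(s.appended_array) + "\n")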

 

- tbob

Inventor of the WORM Global
Message 3 of 13

OK, it sounds like some good ideas. Thanks a bunch. It is Friday evening, though, so it will have to wait until Monday to apply these methods. :) If there are any more suggestions, please add them.

Message 4 of 13

I'm going to ask you a very important question about that occurrence in the top loop: by using the occurrence the way you have, have you eliminated the possibility of a race condition? The answer is NO... study it, and you'll see why. If you can't figure it out, post back and I'll tell you why the race condition is still present.

 

Also, if you are ever coding and thinking to yourself, "WOW, I can't believe the guys who developed LabVIEW made it so hard to do this simple task!", odds are you're making it hard on yourself! Rather than making 4 parallel branches of a numeric, converting to an ASCII string, and then reinterpreting it as 4 separate numerics, consider the following code. It's nearly equivalent, except my seconds value has more significant digits (maybe good, maybe not):

 

[attached image: snippet converting the timestamp to date/time components in one step]

 

I'm going to argue that even splitting the discrete components of time is unnecessary, unless your logging protocol specifically requires that format. Instead, simply write the timestamp directly to file with the data points.
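In text form the idea is just this (Python sketch; the file name and sample values are made up):

import time

def log_sample(f, values):
    # One line per sample: a timestamp followed by the readings.
    stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
    f.write(stamp + "," + ",".join(str(v) for v in values) + "\n")

with open("sonic_log.csv", "a") as f:          # file name is invented
    log_sample(f, [1.23, 4.56, 7.89])          # e.g. u, v, w wind components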

 

Also, remember to use a standard 4x2x2x4 connector pane on your SubVIs. Refer to the LabVIEW Style Guide (search, and you will find it).

 

Finally, I'm going to disagree with the other guys: it's not evident why you split the one loop into three loops. The only "producer/consumer" relationship has the top loop as the "producer", and all it's producing is a timestamp! This is not a typical or intended use of the producer/consumer architecture. Your VI is intended to save a data point only once every 30 minutes (presumably), so it's no big deal if both of your serial devices are in the same loop.

 

The single biggest reason your VI is completely railing out a CPU core (you didn't state this, but I'm guessing you're posting because you noticed a core running at 100%!) is the unmetered loop rate... like the other guys say, drop in a "Wait Until Next ms Multiple" and slow the loop rate down significantly. 10 msec is probably too fast for your application... actually, a loop rate of once every 30 minutes (that's 1,800,000 msec) might be best.

 

Let us know how it goes!

 

 

Message 5 of 13

OK, thanks for all the input. I am still new at LabVIEW. I didn't even hear of LabVIEW until April... and I am not very good at any other languages either. I am still a novice programmer in college, and I'm not even taking programming classes, so I knew I was probably doing some things the hard way.

 

Some of the things sound familiar, but I will need to look them up and try them out. The one thing wrong, though, is that it does not save 1 point per 30 min; it saves 10 points per second, and every 30 min a new file is made to save the data in. I tried 100 ms for the wait time, but it seemed to fall behind faster. This is why I did not have the wait timer. But other than that, thanks for the help. Once again, I will not be able to try this until Monday, but I will see what I can do, and I may have questions.

 

Thanks again!

Message 6 of 13

Coming to the forums is a good place to learn!

 

Actually, knowing more about your application, a producer/consumer design would be advantageous, but not absolutely necessary. A 100 msec loop rate is actually very slow in the scheme of data acquisition, so once you get the program working well, it will have no trouble at all running smoothly. To correctly implement the producer/consumer architecture, the top loop would generate the data (reading from your inputs/sensors) and the bottom loop would consume the data (writing it to a file). You can learn more about the producer/consumer design if you're interested, but generally speaking it separates the lower-priority, high-jitter tasks such as writing to disk from the higher-priority, deterministic tasks such as data acquisition. Since your loop is only running at 100 msec, this should be plenty of time to complete all data reads and writes without needing a more complex architecture (although some would argue it's better since it's more scalable and flexible; also, if you are being graded on this project, the professor will find this architecture more impressive).

 

Were you trying to use the 100msec "Wait (ms)" or "Wait Until Next ms Multiple"? Look at the Help on both of those functions to learn the difference between the two.
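Roughly, the difference looks like this in a Python sketch (time.sleep only approximates the two LabVIEW timing functions; do_work is a placeholder):

import time

PERIOD = 0.100   # 100 ms

def wait_ms_loop(do_work):
    # Like "Wait (ms)": pause a fixed time after the work, so the loop
    # period is (work time + 100 ms) and drifts with how long the work takes.
    while True:
        do_work()
        time.sleep(PERIOD)

def wait_until_next_multiple_loop(do_work):
    # Like "Wait Until Next ms Multiple": sleep until the next 100 ms
    # boundary of the clock, so iterations stay on a steady 100 ms grid.
    while True:
        do_work()
        time.sleep(PERIOD - (time.monotonic() % PERIOD))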

 

We'll see how things go on Monday!

Message 7 of 13

Since the producer/consumer design isn't necessary, I will try to do it without it, since I really need to get this program working soon. I added the Wait Until Next ms Multiple, and hopefully that will make it work, since it should free up the CPU. But for the timestamp part of the program that you showed in the earlier post, how do you make the thing that comes after the seconds-to-date function?

Message 8 of 13

A cluster is a grouping of different datatypes. Look under the "Cluster, Class, and Variant" palette to find the "Unbundle by Name." Do some research into what a "cluster" is.
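If a text-language analogy helps, Python's time.localtime does something similar: it hands back one grouped value and you pull fields out of it by name (this is only an analogy, not the LabVIEW function):

import time

t = time.localtime()                    # one "cluster" of date/time fields
print(t.tm_hour, t.tm_min, t.tm_sec)    # pulling fields out by name ~ Unbundle By Name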

Message 9 of 13

OK, it seems to be working a lot better now, and I understand clusters a bit now too. The only problem now is that whenever I save the data to the file, it lags just a little. Currently I have it appending an array and saving the data only every 10 min. I thought this would be better than saving every time I got data. So every 10 min a few bytes build up. I attached the subVI I use for saving so you can see exactly what it does. I guess I could live with this, though, since I would not lose too much data.
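If the small lag ever becomes a problem, one common pattern is to keep the samples in memory and only touch the disk at the flush interval; here is a rough Python sketch (get_sample_line is a made-up placeholder that returns one formatted text line):

import time

def log_buffered(get_sample_line, flush_seconds=600, path="sonic_log.txt"):
    # Collect 10 Hz samples in memory and write them out every 10 minutes,
    # so the disk is only touched occasionally (one short stall per flush).
    buffer = []
    last_flush = time.monotonic()
    while True:
        buffer.append(get_sample_line())         # hypothetical: one text line per sample
        if time.monotonic() - last_flush >= flush_seconds:
            with open(path, "a") as f:
                f.writelines(line + "\n" for line in buffer)
            buffer.clear()
            last_flush = time.monotonic()
        time.sleep(0.1)                          # ~10 Hz pacing

In a producer/consumer layout, that flush lives in the consumer loop, so the serial reads never have to wait on the disk.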

 

Thanks again for all the help.

Message 10 of 13