09-14-2007 03:28 AM
Hi Brandon
My exact situation was as follows. I needed to grab as many images as possible during a 3-second time interval, while the loop waited for a certain TCP input to end. I have a camera that does 60 FPS in freerun mode, so I was hoping to get at least 20 FPS with VBAI. Processing an image takes no more than approximately 20 ms, so that is not the bottleneck.
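To put numbers on the budget (a quick Python sketch; the 60 FPS, 20 FPS, and 20 ms figures are the ones above, the rest is just arithmetic):

    # Frame-time budgets in ms, from the figures above
    camera_fps = 60.0   # camera freerun rate
    target_fps = 20.0   # the rate I was hoping for in VBAI
    t_process  = 20.0   # per-image processing time, ms

    t_frame  = 1000.0 / camera_fps   # ~16.7 ms between camera frames
    t_budget = 1000.0 / target_fps   # 50.0 ms available per image at 20 FPS
    t_spare  = t_budget - t_process  # ~30 ms left for grabbing and bookkeeping

    print(t_frame, t_budget, t_spare)

So even after processing, there should be roughly 30 ms of slack per image.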
My loop consisted of transitions between 4 states. The first state just grabbed the image and forked into one of 30 different states according to the product type being inspected. The appropriate processing state then ran on the grabbed image. After that, a third state collected the measured data and forked again depending on whether image logging was turned on. The fourth state just checked the TCP communication and either looped back or passed the control flow further. In outline, it looked roughly like the sketch below.
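Here is what that structure amounts to, written as a simplified Python sketch (the helper functions are placeholders for the real VBAI steps, not anything VBAI actually exposes):

    # Simplified model of the original 4-state loop; all helpers are stubs.
    def grab_image(): return "image"           # state 1: acquisition
    def collect_results(result): pass          # state 3: gather measured data
    def log_image(image): pass                 # optional image logging
    PROCESS_BY_TYPE = {t: (lambda img: "result") for t in range(30)}  # 30 branches

    def run_inspection(product_type, logging_enabled, tcp_finished):
        while True:
            image = grab_image()                           # state 1
            result = PROCESS_BY_TYPE[product_type](image)  # state 2: 30-way fork
            collect_results(result)                        # state 3
            if logging_enabled:                            # fork: log or not
                log_image(image)
            if tcp_finished():                             # state 4: TCP check
                break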
With this setup, I could do no more than 20 images per 3 seconds, which was absolutely unusable. Then I realized that the bottleneck was the ultra-slow state switching in VBAI, so I put all the grabbing and processing into a single state that loops back into itself. Now I can get almost 40 images per 3 seconds, which is somewhat more usable, but it could still be better. The arithmetic works out as shown below.
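Assuming roughly 20 ms per transition, a back-of-the-envelope model reproduces both figures fairly well (the transition counts are my guesses, not measured VBAI internals):

    # Rough per-image timing model, all times in ms
    t_switch  = 20.0           # apparent cost of one state transition
    t_grab    = 1000.0 / 60.0  # ~16.7 ms acquisition at 60 FPS
    t_process = 20.0           # image processing

    t_multi  = 5 * t_switch + t_grab + t_process  # ~5 transitions per image
    t_single = 1 * t_switch + t_grab + t_process  # one self-loop transition

    for t in (t_multi, t_single):
        print(round(t), "ms per image =", round(3000.0 / t), "images per 3 s")

The model gives about 22 and 53 images per 3 seconds; the measured 20 and almost 40 suggest the transition cost dominates, with some extra fixed cost remaining even in the single-state version.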
My problem now is that I have to copy the whole grabbing and data-collecting logic into each of the 30 states I was forking into. This is a very bad example of non-modular design: when I want to change something, I will have to change it in each of the 30 states. Moreover, I can no longer fork on whether to log the image, because I cannot have an intermediate state to do it. In ordinary code the fix would be trivial, as the sketch below shows.
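In code, the duplication would be avoided by one shared routine that all 30 branches call, something like this (reusing the stubs from the sketch above; VBAI has no equivalent of such a subroutine here, which is exactly my complaint):

    # One shared routine instead of 30 copies of the same logic.
    def grab_process_collect(process, logging_enabled):
        image = grab_image()      # shared acquisition
        result = process(image)   # only the product-specific part varies
        collect_results(result)   # shared data collection
        if logging_enabled:       # shared optional logging
            log_image(image)
        return result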
So the question is: why is a simple state machine simulation so slow? Is this going to be improved in future versions of VBAI? It is quite ridiculous that the software needs 20 ms just to switch states.
Vladimir