08-20-2015 11:20 AM
I use external libraries to process the pixel data behind IMAQ image containers, working directly through the buffer pointers. I do this a lot in my application, and for a long time everything has worked beautifully. Now, however, I've run into some strange behaviour. Whenever I use external code to edit a binary image and then run IMAQ Particle Analysis (or IMAQ Particle Analysis Report) on it, the analysis only does the actual calculation on the first image; for every subsequent image it immediately returns the same "cached" results, even though the pixel data has changed.
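To be concrete, the kind of external edit I mean is roughly the following on the C side (just a sketch; the function name and the particular edit are illustrative, not my actual library code):

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative only: edit a binary (U8) IMAQ image in place through its
       buffer pointer. lineStep is the distance in bytes between row starts,
       which for IMAQ images is larger than the width because of the border
       and alignment padding. */
    void EditBinaryImage(uint8_t *pixels, int width, int height, int lineStep)
    {
        for (int y = 0; y < height; y++)
        {
            uint8_t *row = pixels + (size_t)y * lineStep;
            for (int x = 0; x < width; x++)
            {
                if (x < width / 2)      /* example edit: clear the left half */
                    row[x] = 0;
            }
        }
    }

After a call like this the pixel data is genuinely different, yet Particle Analysis still returns the previous results.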
I don't know whether any other IMAQ functions behave the same way; IMAQ Histogram, for instance, doesn't, but I've only tried a few others. Also, if I use IMAQ functions to edit the binary image before the analysis, the behaviour doesn't occur, as if there were some IMAQ-internal flag marking changes to the data. Does anyone know anything about this kind of behaviour? It sounds like the sort of caching one might deliberately build into a function, but if it breaks when mixed with external code it definitely needs another look.
If nobody has seen this, or nobody else is able to confirm it, I can put together some sample code if that helps.
08-21-2015 11:37 PM
08-22-2015 02:36 AM
Yes, very odd. Not to rule out that this could still turn out to be nothing but a silly mistake of mine, but yes, I'm positive the execution is serialized. I put together a little VI to illustrate as clearly as possible what I'm seeing:
And this is just one way to show it. It comes up in every debugging scenario I can think of, whether it's image probes, or wrapping the code in a for loop and using breakpoints, etc. The common factor is an external bit of code modifying the pixel data, apparently too covertly for Particle Analysis to notice.
Additionally, with large images you can see that the execution time of the first Particle Analysis call is normal, say 50 ms, whereas the second one takes a negligible < 0.1 ms, again indicating that the whole operation is simply being skipped.
08-22-2015 10:14 AM
To me it looks like you are not changing the image data between particle analysis calls. I see your node on the error wire that may be doing something, but it isn't connected to the image in any way. I don't see how it would change the image if it isn't connected to it.
An explanation of what that node is doing would help, but I am fairly sure it is not changing the image.
Bruce
08-22-2015 11:43 AM
Yeah, can't argue with that; the block diagram does look like that. But then how would you explain the change in the image pixels? I initially didn't want to bloat the post with implementation details, but of course there's no way to spot my trivial mistakes then, is there!? The node there is just a masking function very similar to IMAQ Mask, and the clusters wired to it contain the basic image header values such as the pointer, line step, size and type. The middle one points to the same image as the visible IMAQ image wire, and that buffer is modified inside the node.
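For what it's worth, here is roughly what the node does, written out in C (a sketch only; the struct layout and names are made up for illustration, the real clusters just carry the same values):

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative layout of the header values the clusters carry. */
    typedef struct {
        uint8_t *pixelPtr;   /* pointer to the first pixel          */
        int      lineStep;   /* bytes from one row to the next      */
        int      width;      /* image width in pixels               */
        int      height;     /* image height in pixels              */
        int      type;       /* pixel type code; U8 assumed here    */
    } ImageHeader;

    /* Works much like IMAQ Mask: zero every destination pixel whose
       corresponding mask pixel is zero. dst points at the same buffer
       the IMAQ image wire refers to, so the edit happens behind
       LabVIEW's back. */
    void MaskImage(const ImageHeader *dst, const ImageHeader *mask)
    {
        for (int y = 0; y < dst->height; y++)
        {
            uint8_t       *d = dst->pixelPtr  + (size_t)y * dst->lineStep;
            const uint8_t *m = mask->pixelPtr + (size_t)y * mask->lineStep;
            for (int x = 0; x < dst->width; x++)
            {
                if (m[x] == 0)
                    d[x] = 0;
            }
        }
    }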
I don't know if there's any way to reproduce the issue without external code, and if anybody wants to test this themselves it's probably easier to build a simple DLL of their own than for me to send mine along with whatever VC redistributable dependencies it needs. The latter can still be arranged if it comes to that.
08-22-2015 02:19 PM
I am guessing LabVIEW optimizes the code and figures there is no way the image was edited between the two calls.
Perhaps if you made a subVI with the image ID as both an input and an output, it would convince the compiler that the image may have changed.
Bruce
08-22-2015 02:38 PM
I had already tried what you suggest, because I noticed that the issue disappears if I run the image through some seemingly modifying but essentially skippable operation, such as casting it to the same type. It doesn't help, though; I even ran the wire through the node inside the subVI just to fake a possible modification. I wonder whether the issue is related to LabVIEW optimization or to some run-time check inside the NI Vision function. Compiler optimization doesn't sound right in a context where there are essentially no restrictions on pointer aliasing, as with the image headers. But beyond these hunches, I really have no idea what is actually going on.
08-24-2015 09:04 AM - edited 08-24-2015 09:16 AM
08-24-2015 09:26 AM
Can you send the exact VI that shows this issue? Ideally, if you can replace the CLN call with something simple like the CRT function memset, so it isn't dependent on your code, that would ensure we are testing the exact same code.
08-24-2015 09:45 AM
That's a great idea. Here:
and in case the VI snippet doesn't recreate the node correctly, I've attached the actual VI as well.
I noticed I'm ignoring the pointer return value, but apparently it works anyway.
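For anyone reading along without opening the VI: the CLN is configured to call the standard CRT memset, so in C terms the whole "external edit" boils down to something like this (the fill value is arbitrary; pixelPtr, lineStep and height come straight from the image header values):

    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Equivalent of what the Call Library Function Node invokes.
       lineStep * height covers every row including the alignment padding,
       so the pixel data is guaranteed to change between the two
       Particle Analysis calls. memset returns the pointer it was given,
       which is why ignoring the return value is harmless. */
    static void FillImage(uint8_t *pixelPtr, int lineStep, int height, uint8_t value)
    {
        memset(pixelPtr, value, (size_t)lineStep * height);
    }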