
Processing image in LabVIEW using a DLL in C


wiebe@CARYA wrote:

The integer itself is a pointer. Shouldn't it be passed by value, and then used in C as a pointer?

 

The DLL would now get a pointer to the pointer, so the address of the first pointer would be where the data is expected.


Yes! I explained it like that in my previous post.

 

But in the LabVIEW Call Library Node you really need to configure it as a pointer-sized integer, passed by value. That is what the pixel pointer from the IMAQ GetImagePixelPtr VI returns.
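To make that concrete, here is a minimal sketch of what the C side might look like with that configuration (the function and parameter names are my own, not from the earlier sample code):

#include <stdint.h>

/* Receives the value from IMAQ GetImagePixelPtr. Because the Call
   Library Node passes it as a pointer-sized integer *by value*, it
   arrives here as a plain pointer to the first pixel. */
void ProcessPixels(uint8_t *pixels, int32_t width, int32_t height)
{
    /* Had the parameter been configured as a pointer *to* the value
       instead, 'pixels' would hold the address of LabVIEW's integer,
       i.e. a pointer to the pointer, and any write below would
       corrupt memory. */
    int32_t count = width * height;  /* assumes no line padding */
    for (int32_t i = 0; i < count; i++)
        pixels[i] = 255 - pixels[i]; /* example: invert an 8-bit image */
}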

 

Rolf Kalbermatter
My Blog
Message 11 of 14

Ah, your remark that the LabVIEW declaration char* is correct threw me off.

Message 12 of 14

How would it change if I have 8-bit RGB? Also, I have seen it could be a problem with the file type: my image is a BMP file, and I have seen that Windows pads the image rows (to 4-byte multiples) so it can read the data faster.

Message 13 of 14

@lamarck wrote:

How would it change if I have 8-bit RGB? Also, I have seen it could be a problem with the file type: my image is a BMP file, and I have seen that Windows pads the image rows (to 4-byte multiples) so it can read the data faster.


What would an 8-bit RGB format be? There are a few possible options, such as 3:3:2, but those are formats that were used 30 years ago, when 1 Mb/s was an incredibly fast communication link and image streaming was still in its infancy.

But in combination with IMAQ this is an academic exercise, since it's not supported by IMAQ Vision. You only have:

- 8-bit and 16-bit integer greyscale
- 32-bit floating-point real and complex greyscale
- 32-bit and 64-bit RGB
- 32-bit HSL

IMAQ Vision can load other formats such as 24-bit RGB from disk, but it will always convert them into one of these formats on loading!

 

Of course, a file can be in a different format, and IMAQ Vision will attempt to create a compatible format when loading it. That means that if you pass the image pointer to a DLL, your C code has to be prepared for different formats, and in your LabVIEW VI you should only call a specific function after having determined that the image is in a format your C routine can deal with. In the sample source code I already showed, I added a very simple check that the pixel_size is indeed 4 bytes before proceeding, but in a real implementation that is not enough. The 32-bit floating-point greyscale format also has 4 bytes per pixel, but trying to interpret it as BGRA data will obviously just produce garbage.
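A sketch of what a more complete guard could look like; the type codes and names here are placeholders, not the real IMAQ constants, and on the LabVIEW side the actual type would come from IMAQ GetImageInfo:

#include <stdint.h>

/* Placeholder codes -- wire the image type reported by IMAQ
   GetImageInfo into 'imageType' on the LabVIEW diagram. These names
   and values are illustrative only. */
enum { IMG_TYPE_GREYSCALE_SGL = 2, IMG_TYPE_RGB32 = 4 };

int ProcessBGRA(uint8_t *pixels, int32_t width, int32_t height,
                int32_t pixelSize, int32_t imageType)
{
    /* Checking the pixel size alone is not enough: 32-bit float
       greyscale is also 4 bytes per pixel but is not BGRA data. */
    if (pixelSize != 4 || imageType != IMG_TYPE_RGB32)
        return -1;              /* refuse formats we can't handle */

    /* ... it is now safe to interpret the buffer as BGRA ... */
    return 0;
}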

 

IMAQ Vision takes care of all these checks for you and refuses to process image formats that an algorithm is not meant to work with, but if you go to C you are all on your own and have to make these checks yourself, on either the LabVIEW side, the C side, or a combination of both. But hey, nobody is pointing a gun at your head and forcing you to use C! :D

 

It all comes down to this consideration: is this just a learning exercise to try your hand at C programming with images? If yes, by all means go ahead, tinker, and crash your computer as much as you like.

But if this is for an actual project where you are tasked to implement a specific algorithm, the C path is not the path of least resistance, even for a seasoned C programmer. If it can be done with the IMAQ Vision routines in any way, you will be leaps and bounds faster doing it that way, and even if IMAQ Vision doesn't provide the necessary functions, it would be much simpler to extract the image data from the image and do the processing on the LabVIEW side. It won't be super fast, but unless you absolutely, positively know what you are doing on the C side, you will not produce a nearly optimized algorithm either, and you will have to live with the extra danger of dealing with C pointers that can easily be overrun and produce general protection errors once you execute the code in a real application, with images that are just that tiny bit different in size or pixel format.

 

And yes, the claim that you have to go to C for fast computational algorithms is mostly just a myth in LabVIEW. LabVIEW is a compiled language, and while it won't produce the exact same code as some of the very special-purpose, hyper-optimizing C compilers out there, in many cases it will come pretty close to the execution speed of standard C compilers, as long as you program the algorithm in a sensible way. The problem with LabVIEW is that it easily lets you get away with outright horrible algorithm implementations that have horrible performance but still give you back the right result. When going to C, you first have to really and deeply think about how you want to handle memory allocations and how to optimize them, and then you usually end up with a fairly good algorithm as a result. If you don't do that, you will end up with code that constantly has buffer overruns and invalid pointer references, and that will crash your program rather than produce anything at all.
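To illustrate the kind of pointer mistake meant here, a sketch (the parameter names are mine, not IMAQ's): iterating with the image width as the stride works until the image has a border or alignment padding, and then it silently reads and writes outside the buffer.

#include <stdint.h>

/* Stride-aware loop over an 8-bit image. 'lineWidth' is the number
   of pixels actually allocated per line (image width plus border and
   alignment padding). */
void InvertU8(uint8_t *pixels, int32_t width, int32_t height,
              int32_t lineWidth)
{
    for (int32_t y = 0; y < height; y++)
    {
        uint8_t *row = pixels + (size_t)y * lineWidth; /* NOT y * width */
        for (int32_t x = 0; x < width; x++)
            row[x] = 255 - row[x];
    }
    /* Using 'width' as the stride, or touching row[x] for x >= width,
       walks off the line into the border or padding and, sooner or
       later, off the allocation -- the general protection error only
       shows up once an image with a slightly different size comes by. */
}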

Rolf Kalbermatter
My Blog
Message 14 of 14