LabVIEW

Evaluate how red the pixels are in an image

Solved!
Go to solution

I have an RGB image and want to evaluate the redness of each pixel. I know I can use IMAQ ExtractColorPlanes to get the red plane, but that does not tell me the redness of the pixels. For example, black and red pixels will show the same value in the red color plane. Does anyone have a good idea for evaluating the redness of the pixels?

0 Kudos
Message 1 of 12
(13,460 Views)

Hi Xiang,

 


@Xiang00 wrote:

I have an RGB image and want to evaluate the redness of each pixel.


Convert the RGB values to HSV values and check the H (hue) for "redness"…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
0 Kudos
Message 2 of 12
(13,440 Views)

@GerdW wrote:

Hi Xiang,

 


@Xiang00 wrote:

I have an RGB image and want to evaluate the redness of each pixel.


Convert the RGB values to HSV values and check the H (hue) for "redness"…


You need a little extra.  For black, the hue ought to be undefined; but of course a number has to be put there, so it defaults to 0, which is red.

0 Kudos
Message 3 of 12
(13,433 Views)

@Xiang00 wrote:

I have an RGB image and want to evaluate the redness of each pixel.


Probably the easiest way is to switch to the CIE Lab model and check the distance between "pure red" and the color of each pixel: lower values are closer to red, larger values are farther away. Something like this:

Snippet.png

And you can perform threshold to get a mask if needed:

Screenshot 2024-10-01 14.29.02.png

VI is attached (LV2018).

Message 4 of 12
(13,405 Views)

Thank you very much for all your suggestions. Converting to HSV or CIE Lab is an excellent idea. I have tested the VI by Andrey, and it worked very well. However, it takes approximately 1 s to convert the RGB image to a redness image. I tried parallelizing the for loop, but that only made the process slower, probably due to some overhead.

 

Do you have any suggestions for speeding up this conversion? I plan to perform simultaneous redness detection with a 10 fps streamed video. So, I hope the conversion can be done in less than 100 ms.

0 Kudos
Message 5 of 12
(13,238 Views)

@Xiang00 wrote:

it would take approximately 1s to convert the RGB image to a redness image. I tried parallelizing the for loop, but it only made the process slower, probably due to some overhead.

 

Do you have any suggestions for speeding up this conversion? I plan to perform simultaneous redness detection with a 10 fps streamed video. So, I hope the conversion can be done in less than 100 ms.


That wasn't in the initial requirement. 😉 But of course, this can be improved. The two nested for loops and handling the pixels as a native LabVIEW array cause a huge penalty. Parallelization will gain you something, but you should avoid these loops, as well as the IMAQ Image->Array conversion, entirely.

By the way, how large is your image, and which CPU do you have? I think a 10x improvement is easily possible (I'll check it later).

0 Kudos
Message 6 of 12
(13,227 Views)

Thank you for the suggestion! I will try to improve the code.

The image size is 3000x1000, and my CPU is the AMD Ryzen 7 5700U.

0 Kudos
Message 7 of 12
(13,207 Views)
Solution
Accepted by topic author Xiang00

@Xiang00 wrote:

Thank you for the suggestion! I will try to improve the code.

The image size is 3000x1000, and my CPU is the AMD Ryzen 7 5700U.


You're welcome!

I'm not sure how familiar you are with IMAQ, but if I were to attack this problem, I would wrap everything into a DLL. This gives you a huge improvement.

 

For example, let's first check where we are with a pure LabVIEW implementation. To do this, I will upsample the original 640x400 image to 2560x1600, which is close to your image size, and benchmark it:

 

Snippet01.png

On my Intel Xeon W-2245 @ 3.90 GHz this gives me 3.2 seconds for 640x400 and 52.2 seconds for 2560x1600.

 

To get everything into a DLL, I first need an RGB to CIE Lab conversion. Taking the equations from OpenCV, with a little help from AI, I get the following code:

 

// https://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html
inline void rgbToLab(uint8_t R, uint8_t G, uint8_t B, float* L_out, float* a_out, float* b_out) {
    // Step 1: Normalize RGB values
    double r = (double)R / 255.0;
    double g = (double)G / 255.0;
    double b = (double)B / 255.0;

    // Step 2: Apply gamma correction if needed
    //r = (r > 0.04045) ? pow((r + 0.055) / 1.055, 2.4) : r / 12.92;
    //g = (g > 0.04045) ? pow((g + 0.055) / 1.055, 2.4) : g / 12.92;
    //b = (b > 0.04045) ? pow((b + 0.055) / 1.055, 2.4) : b / 12.92;

    // Convert to XYZ
    double X = r * 0.412453 + g * 0.357580 + b * 0.180423;
    double Y = r * 0.212671 + g * 0.715160 + b * 0.072169;
    double Z = r * 0.019334 + g * 0.119193 + b * 0.950227;

    // Step 3: Normalize for D65 illuminant
    X /= 0.950456; // Reference white point D65
    Y /= 1.000000;
    Z /= 1.088754;

    // Step 4: Convert to Lab
    float L = (float)((Y > 0.008856) ? 116.0 * pow(Y, (1 / 3.0)) - 16.0 : 903.3 * Y);
    X = (X > 0.008856) ? pow(X, (1 / 3.0)) : (X * (7.787) + (16.0 / 116.0));
    Y = (Y > 0.008856) ? pow(Y, (1 / 3.0)) : (Y * (7.787) + (16.0 / 116.0));
    Z = (Z > 0.008856) ? pow(Z, (1 / 3.0)) : (Z * (7.787) + (16.0 / 116.0));

    *L_out = L * 2.55f; //To match LabVIEW's 8-bit images
    *a_out = (float)(500.0 * (X - Y));
    *b_out = (float)(200.0 * (Y - Z));
}

 

Now I will prepare the DLL function. To stay ready for multithreading, I add start/end line parameters, on the assumption that I will split the image into stripes and handle them in parallel. Technically, I obtain raw pointers to the pixels (you could use IMAQ GetImagePixelPtr instead, but that would require more DLL inputs), then iterate over the rows:

COLORMATCH_API int fnColorMatch(NIImageHandle src, NIImageHandle dst, TD1* color, int32_t start_line, int32_t end_line)
{
    Image *ImgSrc, *ImgDst;
    uint8_t *pLVImageRGB;
    float* pLVImageDistance;
    int LVWidth, LVHeight, x, y;
    float distance;
    uint8_t R, G, B;
    float L_src, a_src, b_src, L_ref, a_ref, b_ref;

    rgbToLab(color->RedValue, color->GreenValue, color->BlueValue, &L_ref, &a_ref, &b_ref);

    if (!src || !dst) return ERR_NOT_IMAGE;
    LV_SetThreadCore(1); //must be called prior to LV_LVDTToGRImage
    LV_LVDTToGRImage(src, &ImgSrc);
    LV_LVDTToGRImage(dst, &ImgDst);
    if (!ImgSrc || !ImgDst) return ERR_NOT_IMAGE;

    LVWidth = ((ImageInfo*)ImgSrc)->xRes;
    LVHeight = ((ImageInfo*)ImgSrc)->yRes;
    imaqSetImageSize(ImgDst, LVWidth, LVHeight);
    int LVLineWidthSrc = ((ImageInfo*)ImgSrc)->pixelsPerLine;
    int LVLineWidthDst = ((ImageInfo*)ImgDst)->pixelsPerLine;

    pLVImageRGB = (uint8_t*)((ImageInfo*)ImgSrc)->imageStart;
    pLVImageDistance = (float*)((ImageInfo*)ImgDst)->imageStart;
    if (!pLVImageRGB || !pLVImageDistance) return ERR_NOT_IMAGE;

    if (end_line == 0) end_line = LVHeight;
    pLVImageRGB += (start_line * LVLineWidthSrc) * 4;
    pLVImageDistance += start_line * LVLineWidthDst;
    for (y = start_line; y < end_line; y++) {
        for (x = 0; x < LVWidth; x++) {
            B = *(pLVImageRGB++);
            G = *(pLVImageRGB++);
            R = *(pLVImageRGB++);
            rgbToLab(R, G, B, &L_src, &a_src, &b_src);
            distance = (float)sqrt(SQR(L_src-L_ref) + SQR(a_src - a_ref) + SQR(b_src - b_ref));
            *pLVImageDistance = distance;
            pLVImageRGB++;
            pLVImageDistance++;
        }
        pLVImageRGB += (LVLineWidthSrc - LVWidth) * 4;
        pLVImageDistance += (LVLineWidthDst - LVWidth);
    }
    return 0;
}

Header file:

#ifdef COLORMATCH_EXPORTS
#define COLORMATCH_API extern "C" __declspec(dllexport)
#else
#define COLORMATCH_API __declspec(dllimport)
#endif

#include "include/nivision.h"
#include "include/extcode.h"

/* lv_prolog.h and lv_epilog.h set up the correct alignment for LabVIEW data. */
#include "include/lv_prolog.h"

/* Typedefs */

typedef struct {
	uint8_t RedValue;
	uint8_t GreenValue;
	uint8_t BlueValue;
} TD1;

#include "include/lv_epilog.h"

#define SQR(x) ((x)*(x))

typedef uintptr_t NIImageHandle;

extern "C" int LV_SetThreadCore(int NumThreads);
extern "C" int LV_LVDTToGRImage(NIImageHandle niImageHandle, void* image);

COLORMATCH_API int fnColorMatch(NIImageHandle src, NIImageHandle dst, TD1* color, int32_t start_line, int32_t end_line);

Now I will call this DLL function like this:

Snippet02.png

Now my execution time for 2560x1600 dropped from 52 seconds to 470 milliseconds, and for 640x400 from 3.2 s down to 30 ms, which means over a 100x performance improvement.

But that is not all: my CPU has 8 physical cores, so I can use parallel computation like this:

Snippet03.png

Of course, I will not get a pure 8x improvement, but the execution time is now 85 milliseconds for the 2560x1600 image and only 5 milliseconds for 640x400, which means roughly a 600x improvement compared to the original LabVIEW code.
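The start_line/end_line splitting behind those parallel calls can be sketched in plain C. Here stripe_bounds is a hypothetical helper (not part of the attached code) showing one way to compute each worker's row range so the stripes tile the image exactly, even when the height is not divisible by the worker count:

```c
/* Compute the half-open row range [start, end) that worker k of
   n_workers should process.  The first (height % n_workers) workers
   each get one extra row, so the stripes cover every row exactly once. */
static void stripe_bounds(int height, int n_workers, int k,
                          int *start, int *end) {
    int base = height / n_workers;   /* rows every worker gets           */
    int rem  = height % n_workers;   /* leftover rows, one each to the   */
                                     /* first rem workers                */
    *start = k * base + (k < rem ? k : rem);
    *end   = *start + base + (k < rem ? 1 : 0);
}
```

For the 3000x1000 image with 8 workers, each parallel call of fnColorMatch would then be wired with bounds like [0, 125), [125, 250), and so on.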

 

There are some areas for further improvement (around another 2x should be possible). For example, we could use SIMD (AVX/AVX2) instructions to compute multiple pixels with a single instruction, but this requires intrinsics or assembler and usually takes a huge amount of programming time. I demonstrated this technique here when I replaced IMAQ FlatField correction with an AVX-based computation. Another possible route is the highly optimized Intel compiler: for example, if I compile the DLL with oneAPI, the execution times become 3 ms and 47 ms for the small and large images (over a 1000x speed improvement), but this adds extra dependencies to your project.

Summary:

Screenshot 2024-10-02 12.30.40.png

Full source code, downgraded to LV2018 (x64 only), including the C source code and a Visual Studio Professional 2022 (v17.11.4) project, is in the attachment; feel free to use it as a "getting started" point.

Message 8 of 12
(13,196 Views)

@Andrey_Dmitriev wrote:

[...]

I'm stunned.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
0 Kudos
Message 9 of 12
(11,875 Views)

@billko wrote:


I'm stunned.


Glad to see!

Well, in this particular case we can also use the fact that there are at most 16,777,216 different colors, so if we only need the color distance from a "hard-coded red", we can prepare a LUT like this:

LUT.png

and then simply use it like this:

Screenshot 2024-10-02 21.12.20.png

This will certainly be much faster (once the LUT is loaded into memory), but we lose the ability to set the color "on the fly", because that requires recomputing the LUT. This approach can also be done in a DLL, which will be faster again (not as dramatically, but faster). On the other hand, random reads from a 64 MB LUT array can cause massive cache misses; this can be improved by normalizing to the range 0...255 and rounding to fit into one byte, if high precision is not required... As usual, there are pros and cons everywhere.

Message 10 of 12
(11,855 Views)