
PQA Blockiness

Hi,

I'm running the blockiness test in PQA on a basic video stream (just single-color frames: one second of black frames, one second of white frames, one second of green frames, and so on). I made one test with the AVI single provider and another with the digital AV provider (an 8133-based test bench with 6545 and 2172 modules for HDMI). I get almost the same behavior in the two tests, but the numerical values are totally different. What is the ideal blockiness value for a perfect frame (no blockiness at all)? Is it 1?

 

Carolina

 

Message 1 of 19

Carolina1: The quick answer to your question is that the perfect score is 0: no difference across macroblock edges.  This can be demonstrated by loading the attached AVI in PQA.

 

Blockiness:

  1. First let's make sure you're using Blockiness to achieve what you want.  Blockiness is used to detect defects from the compression that occurs when a video is encoded.  To compress a video frame, it is broken up into macroblocks.  Each block is then compressed individually and put back together as a frame.  Depending on the codec being used and the level of compression, this can result in the appearance of blocks throughout the image, like this.  Note that the defects Blockiness detects are introduced at encoding, not by decoding a transmission, which can result in macroblock errors like this.
  2. The Blockiness processor requires that you enter the correct macroblock size, so that it knows how to divide the frame back up the same way it was divided when it was encoded.  It defaults to 16 for a 16x16 macroblock size, but 8x8 is also common.  If you don't pick the correct macroblock size for your video, the results will be misleading.
  3. The formula for the Blockiness processor is found in PQA Help at NI PQA Executive and The NI PQA Configuration Panel>>NI PQA Executive components>>NI PQA Tabs>>Processors Tab>>Video Metric Processors>>Blockiness Processor.  I've also pasted a copy below.

    PQA Blockiness.png


  4. To better explain the formula: it first finds the intensity of each pixel in the frame.  It then compares the intensities of adjacent pixels on either side of each block boundary and computes their difference.  It does this in both the horizontal and vertical directions.
  5. In my visual example below, we have a macroblock size of 3x3.  (A short code sketch of this calculation follows this list.)
    1. To calculate Horizontal Blockiness, it compares the difference between (0,2) and (0,3), then (1,2) and (1,3), then (2,2) and (2,3), and continues to do so for the rest of the macroblock borders.  If there is no difference between the two, the result is 0.
    2. To calculate Vertical Blockiness, it compares the difference between (2,0) and (3,0), then (2,1) and (3,1), then (2,2) and (3,2), and continues to do so for the rest of the macroblock borders.  If there is no difference between the two, the result is 0.

      Macroblock.png
  6. In a video where there is a single color for the entire frame, coded losslessly, we would get a result of 0 for each frame.  However, with a lossy compression codec, depending on the algorithm, you could still see transition differences, resulting in a non-zero number.
  7. If you had a video with a subject, not just a solid color, you would never achieve a perfect score of 0, because even if no edges are created by the macroblocking, as we divide up the frame there are going to be subtle differences pixel to pixel.
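
To make the boundary comparison concrete, here is a rough Python sketch of the kind of calculation described in steps 4 and 5.  This is my own simplification for illustration, not PQA's exact formula (that is in the help topic above); the function name and the simple absolute-difference averaging are assumptions on my part.

import numpy as np

def blockiness(frame, block_size=16):
    """Rough blockiness-style metric: average the absolute intensity
    difference across every macroblock boundary of a grayscale frame."""
    rows, cols = frame.shape
    diffs = []
    # Horizontal blockiness: compare the pixel columns on either side of
    # each vertical block boundary, e.g. (r, 2) vs (r, 3) for 3x3 blocks.
    for c in range(block_size - 1, cols - 1, block_size):
        diffs.append(np.abs(frame[:, c].astype(float) - frame[:, c + 1]))
    # Vertical blockiness: compare the pixel rows on either side of each
    # horizontal block boundary, e.g. (2, c) vs (3, c).
    for r in range(block_size - 1, rows - 1, block_size):
        diffs.append(np.abs(frame[r, :].astype(float) - frame[r + 1, :]))
    return float(np.mean(np.concatenate(diffs))) if diffs else 0.0

# A solid-color frame has identical pixels on both sides of every boundary,
# so it scores the "perfect" value of 0.
solid = np.full((1080, 1920), 128, dtype=np.uint8)
print(blockiness(solid, block_size=16))   # -> 0.0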


Checkers:

  1. If you're instead looking for artifacts and macroblocking errors from the transmission of the video, then the Checkers processor is the one to use.  It effectively goes through the frame pixel by pixel, vertically and horizontally, and looks for sharp changes in pixel intensity, which would typically be caused by a block (like the one shown in the link above) not matching the rest of the scene.  However, controlling the content that the Checkers processor sees is important too, because it will return false positives for things like boxes on the screen or on-screen text.  (A loose sketch of that scan follows below.)
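
Here is a loose Python sketch of that idea, purely for illustration: it is my own simplification, not the actual Checkers implementation, and the threshold value is an arbitrary assumption.

import numpy as np

def sharp_transitions(frame, threshold=60):
    """Flag pixel-to-pixel intensity jumps that are much sharper than
    natural content would produce.  Returns a boolean map of suspects."""
    f = frame.astype(float)
    horiz = np.abs(np.diff(f, axis=1)) > threshold   # left/right neighbors
    vert = np.abs(np.diff(f, axis=0)) > threshold    # up/down neighbors
    suspects = np.zeros(frame.shape, dtype=bool)
    suspects[:, 1:] |= horiz
    suspects[1:, :] |= vert
    return suspects

# Note: on-screen text and graphic overlays create the same kind of hard
# edges, which is why they show up as false positives.
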
Paul Davidson
National Instruments
Product Owner - ni.com Chat
Message 2 of 19

Hi,

 

I think the Checkers processor is better for me, because I have to test decoder devices (set-top boxes) rather than encoder devices. Thank you for your answer and the video. Do you have the same video in HD 1080p format?

 

 

Carolina

Message 3 of 19

About the AVI I created:

I built that video using the IMAQ AVI functionality: I created frames in Paint at the resolution I wanted, then assembled them into a video.  I am attaching the code and source images that I used to make a 1080p video, which you could expand and change if you'd like.  The code isn't perfect, just a quick concept.
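
If you ever want to generate a similar test AVI outside of LabVIEW, here is a small Python sketch of the same idea using OpenCV (OpenCV is an assumption on my part; the attachment itself uses the IMAQ AVI VIs):

import numpy as np
import cv2

# Write a 1080p AVI with one second each of black, white, and green frames.
fourcc = cv2.VideoWriter_fourcc(*"MJPG")
writer = cv2.VideoWriter("test_1080p.avi", fourcc, 30.0, (1920, 1080))

colors_bgr = [(0, 0, 0), (255, 255, 255), (0, 255, 0)]  # black, white, green
for color in colors_bgr:
    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
    frame[:, :] = color
    for _ in range(30):              # one second per color at 30 fps
        writer.write(frame)
writer.release()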

 

About Checkers:

Using the Checkers processor is most likely going to require some post-processing work in LabVIEW.  At a minimum, it is going to require analyzing the multi-point metrics from within PQA rather than the single-point metric.  Single-point averages out the entire frame, so if you have one bad block, it's hard to tell that apart from other things that might be going on.  Multi-point breaks the frame up into the 'Block size' that you specify and effectively returns that 2D array as a 1D array (all of row 0's data, followed by all of row 1's data, followed by all of row 2's data, etc.), so its results can be challenging to look at too.  Basically, you'll be looking for one or more multi-point blocks with a spiked value, indicating that the processor ran into a hard edge.

 

If you bring the Checkers data back into LabVIEW, you can re-compose it into a 2D array and use an intensity graph to better see what the returned data is indicating.  I have a demonstration of this that I can share, but I would prefer not to post it to the public forum.  If you'd like to contact your local NI support team and open a service request, I can share it internally through them.  It will be provided as-is as a demonstration; I can't fully support it.
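
To illustrate the re-composition step in text form (a hedged sketch of the idea, not the demonstration mentioned above; the function names and the spike rule are my own assumptions), assuming the multi-point result arrives as a flat 1D array in row-major order:

import numpy as np

def recompose(multipoint_1d, blocks_per_row):
    """Re-compose the flattened multi-point result (row 0, then row 1, ...)
    into a 2D array with one value per analysis block."""
    return np.asarray(multipoint_1d, dtype=float).reshape(-1, blocks_per_row)

def spiked_blocks(grid, num_sigmas=4.0):
    """Flag blocks whose value spikes well above the frame's typical level,
    which is the signature of a hard macroblock edge."""
    limit = grid.mean() + num_sigmas * grid.std()
    return np.argwhere(grid > limit)      # (row, col) of suspect blocks

In LabVIEW, the equivalent is a Reshape Array followed by an intensity graph, which shows where in the frame the spike occurred.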

 

One other tip: it's much more CPU intensive, but to get the most granular Checkers results, I would recommend reducing the block size to 4 (the lowest it can go).  This block size is separate from the one used for Blockiness.

 

About SSIM:

Using Checkers is ultimately challenging unless you can control your video source and use a source that doesn't contain hard, unnatural lines (like on-screen text or borders).  If you can control your source, and you can control replaying it to your DUT, I would recommend the SSIM processor with a reference video.  With this method you create a 'golden' known-good reference and then compare your future acquisitions to it.  SSIM is an advanced vision algorithm that detects unnatural defects in pictures the way humans perceive them.  Something like a macroblocking error will stand out strongly to SSIM because it is not natural in a scene.
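
Outside of PQA, you can prototype the same golden-reference idea with an off-the-shelf SSIM implementation.  Here is a sketch assuming scikit-image and grayscale frames; it is not the PQA processor itself, just the concept:

import numpy as np
from skimage.metrics import structural_similarity

def score_against_golden(golden_frames, acquired_frames):
    """Compute an SSIM score (1.0 = perfect match) for each acquired frame
    against the matching frame of the known-good 'golden' reference."""
    return [structural_similarity(ref, test, data_range=255)
            for ref, test in zip(golden_frames, acquired_frames)]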

 

Paul Davidson
National Instruments
Product Owner - ni.com Chat
Message 4 of 19

Hi, 

 

I really didn't know that Checkers involves so much work, and I don't have LabVIEW.  But I can control my video: I'm using images like the ones you sent me, and I create AVI videos the same way you did.  I use this video with the AVI single provider, then I send it to my set-top box, and finally I compare the two videos.  Actually, I haven't worked with the SSIM or PSNR processors, but I will do that next week.

 

Carolina

Message 5 of 19

If you can control your video source, I would highly recommend using the SSIM metric.  It's going to take a lot of work off your end, because it's really strong at detecting defects.  The nice thing about SSIM is that it returns a result from 1 (perfect match) to 0 (no match).  You can use the PQA Metrics tab to set a high Pass/Fail threshold, like .95 or more (I'd do testing to figure out the best limit to use), and it will take care of most of the work for you.

 

PSNR performs a similar task, but has a few drawbacks.  It is much more processor intensive than SSIM.  It returns an unbounded result, from 0 to infinity.  And overall the algorithm takes a different approach.
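
To see the difference in the two metrics' scales, here is a small sketch (again assuming scikit-image; in PQA you would set the limit on the Metrics tab rather than in code, and the 0.95 limit below is just the example value from above):

import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

ref = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
noise = np.random.randint(-5, 6, ref.shape)
test = np.clip(ref.astype(int) + noise, 0, 255).astype(np.uint8)

ssim = structural_similarity(ref, test, data_range=255)    # bounded: 1.0 = perfect match
psnr = peak_signal_noise_ratio(ref, test, data_range=255)  # unbounded: infinity = identical

print(f"SSIM = {ssim:.3f} -> {'PASS' if ssim >= 0.95 else 'FAIL'} against a 0.95 limit")
print(f"PSNR = {psnr:.1f} dB (no natural upper bound to threshold against)")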

 

The algorithm behind SSIM comes from very advanced research into how humans perceive images and the defects in them.  Humans are naturally very good at detecting very small distortions in an image, and SSIM was designed to emulate that process.

 

There's no harm in trying both in your design phase, however.  If you see similar results, I would recommend sticking with SSIM as it will be simpler to work with.

Paul Davidson
National Instruments
Product Owner - ni.com Chat
Message 6 of 19

I will take your advice: I will work with SSIM and PSNR and see what happens with them.

 

On another note, I want to use the MTF processor too.  I found this page and made a video with the same image they use there.  But I don't understand what resolution I need to enter, or what the MTF value in the result info (image) means.

I want to know how MTF and resolution are calculated by PQA.

mtf.png

 

Message 7 of 19

We used our own implementation of MTF in PQA, and as such can't share all of the details of that implementation, which are also very complex.

 

Reviewing PQA Help at NI PQA Executive and the NI PQA Configuration Panel>>NI PQA Executive Components>>NI PQA Tabs>>Processors Tab>>Video Metric Processors>>MTF Contrast Processor gives us a start on how the processor works.

 

The 'Resolution' input affects cycles per pixel.  In some ways, 'Resolution' and 'Subsampling' relate to how we subdivide the pixels as we scan for edges and contrast.

 

The 'MTF' output gives us contrast at the edge that was detected.

 

The 'Resolution' output gives us resolution in cycles per pixel at the MTF that was output.

 

Vision processing and MTF methodology take a good bit more explanation than I will be able to offer here.  I think these next two links do a better job of explaining the concepts.  They further explain the idea of cycles per pixel, and how this relates to the Fourier transform that is performed on the edge data to return an MTF value:

http://www.imatest.com/docs/sharpness/

http://www.normankoren.com/Tutorials/MTF.html
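
As a very rough illustration of the step-edge idea those links describe (my own simplified sketch, not PQA's proprietary implementation): differentiate the edge profile to get a line spread function, then take the magnitude of its Fourier transform to get contrast versus spatial frequency in cycles per pixel.

import numpy as np

def mtf_from_edge_profile(edge_profile):
    """Simplified MTF from a 1D edge profile (ESF):
    ESF -> derivative (LSF) -> |FFT| normalized to DC -> MTF curve."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.diff(esf)                          # line spread function
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]                # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=1.0)    # cycles per pixel
    return freqs, mtf

# A perfectly sharp step edge keeps full contrast at every frequency, while
# a blurred edge rolls off at higher cycles-per-pixel values.
sharp_edge = np.r_[np.zeros(32), np.ones(32)]
freqs, mtf = mtf_from_edge_profile(sharp_edge)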

 

 

I'd also like to point out that SSIM will help you detect problems with sharpness and contrast in your incoming video stream as compared to a reference stream.  You may not need to use the MTF Contrast processor at all.

 

Think of SSIM as a kind of catch-all processor.  It can detect the problems we've discussed above: compression artifacts from video encoding, macroblocking errors in decoding, issues that cause a picture to be less sharp than it should be, a picture that has been shifted or distorted, and much more.

 

If your test just requires that you detect these problems, you may not need to test for each one individually.  SSIM is great at giving a general pass/fail against a lot of different types of problems you would encounter; it's just not going to tell you which specific problem caused the failure (macroblocking error, sharpness, etc.).

Paul Davidson
National Instruments
Product Owner - ni.com Chat
Message 8 of 19

Thanks so much for your help!!

Now I'm working with the SSIM processor and I'm getting good results. But in my video I noticed that when I have black frames, or black and white frames, SSIM is lower than for the other, colorful frames. Is that normal?

 

Carolina

Message 9 of 19

I created a simple AVI (attached) and got a perfect score of 1 from the SSIM processor when I created a reference from it and ran the video against that reference.  Can you share the test frame or frames that cause the issue?

 

If it's normal video subject matter, just in black and white or grayscale, I would expect the SSIM algorithm to still work properly.  If it is more like a black-and-white video test pattern with unnatural shapes and hard edges, then I could possibly see the SSIM algorithm having trouble, since the focus of the research was on processing natural scenes.  However, I just tried that test pattern as well, and it scored perfectly.

 

In general, I would still expect SSIM to score properly.  You may want to verify that you aren't introducing issues with the codec you're using to encode the video, or something similar.

Paul Davidson
National Instruments
Product Owner - ni.com Chat
Message 10 of 19