Machine Vision


maintain relative roi position



In the VI that I am making, I do a spatial calibration and then a geometric pattern match, from which I define a coordinate system.  This is going just fine.  After that, I need to place a group of ROIs at a fixed position relative to the coordinate system, so I need to define the ROI coordinates based on that coordinate system, or calculate their positions (and angles) from it.  At the moment I define the ROIs in pixel values and use Transform ROI on them as a group to place them relative to the coordinate system.  Is there a way to define the ROIs in calibrated units, i.e. millimetres, instead of in pixels, given that I have already calibrated the image from a grid?
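For readers not familiar with what Transform ROI does here: placing a group of pixel-defined ROIs relative to a matched coordinate system amounts to rotating and translating their vertices by the pose (origin and angle) that the pattern match returns. A minimal sketch in Python, assuming a y-down image coordinate system and a positive-clockwise angle convention (all names here are illustrative, not NI Vision APIs):

```python
import math

def transform_roi(points_px, origin_px, angle_deg):
    """Rotate and translate ROI vertices (defined relative to a reference
    pose at (0, 0), angle 0) onto a coordinate system whose origin and
    angle come from a pattern match -- analogous to Transform ROI."""
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    ox, oy = origin_px
    return [(ox + x * ca - y * sa, oy + x * sa + y * ca)
            for x, y in points_px]

# A rectangular ROI whose near corner sits 10 px from the matched origin,
# placed at origin (100, 50) with the match rotated 90 degrees:
print(transform_roi([(10, 0), (30, 0), (30, 20), (10, 20)], (100, 50), 90.0))
```

The same transform applied to every vertex keeps the group's relative layout intact, which is why the ROIs can be moved "as a group".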





Message 1 of 4

Do IMAQ Convert Pixel to Real World and IMAQ Convert Real World to Pixel not help you? I guess you would specify the ROI as a polygonal line in real-world coordinates, convert the coordinates of its vertices to pixels, and then set them in the ROI structure via the image property node.

A potential problem with this, which I am finding, is that LabVIEW sets the origin of the coordinate transformation completely arbitrarily. To work around it, I found nothing better than to compute the real-world position that the calibration yields for a fiducial point, subtract it from the known original position, and then use IMAQ Set Calibration to set the offset.
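The correction described above can be sketched as follows. This is a toy stand-in, not NI's calibration math: `world_from_pixel` plays the role of the learned calibration, and the fiducial coordinates and scale are made-up numbers.

```python
def world_from_pixel(px, py, mm_per_px, origin_offset=(0.0, 0.0)):
    """Toy pixel -> real-world mapping: uniform scale plus an offset.
    Stands in for what the learned calibration reports."""
    ox, oy = origin_offset
    return (px * mm_per_px + ox, py * mm_per_px + oy)

def correction_offset(fiducial_px, fiducial_world_true, mm_per_px):
    """Transform a known fiducial through the (offset-free) calibration,
    then subtract the result from its known real-world position. The
    difference is the offset to feed back via IMAQ Set Calibration."""
    gx, gy = world_from_pixel(*fiducial_px, mm_per_px)
    tx, ty = fiducial_world_true
    return (tx - gx, ty - gy)

off = correction_offset((200, 100), (12.0, 7.0), 0.05)
# With the offset applied, the fiducial lands on its true coordinates:
print(world_from_pixel(200, 100, 0.05, off))
```

The same idea works with any number of fiducials; averaging the per-point differences just makes the offset estimate less sensitive to noise.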


Message 2 of 4

Thank you for your response!

I am now calculating the relation between pixels and millimetres by using a clamp on my grid-calibrated image.  I use Rectangle To ROI and feed it millimetre values multiplied by the pixels/mm relation to create my ROIs.  This works.  What do you mean when you say that the origin position is set arbitrarily?  I place my origin and coordinate system based on a geometric pattern match.
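The scaling step described here is simple enough to write down. A minimal sketch, assuming a uniform pixels/mm scale measured with the clamp and integer pixel coordinates (the function name and rounding choice are mine, not NI Vision's):

```python
def mm_rect_to_pixel_roi(left_mm, top_mm, right_mm, bottom_mm, px_per_mm):
    """Convert a rectangle given in millimetres into the pixel rectangle
    that would be fed to Rectangle To ROI, using the pixels/mm relation
    measured with a clamp on the grid-calibrated image."""
    return tuple(round(v * px_per_mm) for v in
                 (left_mm, top_mm, right_mm, bottom_mm))

print(mm_rect_to_pixel_roi(1.0, 2.0, 5.0, 4.0, 20.0))  # → (20, 40, 100, 80)
```

Note that this assumes the scale is the same in x and y; a grid calibration that corrects for perspective or lens distortion would not reduce to a single multiplier like this.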

Message 3 of 4

Being concerned with this (see here), I thought (perhaps wrongly) that you were calibrating your image using IMAQ Learn, which I find offsets the origin: if I use a set of calibration points and their corresponding real-world coordinates to generate an image calibration, and then test-transform one of my original registration points into real-world coordinates, I get coordinates that are offset with respect to what I trained the calibration with.

Attached is what I do to correct this. "Registration Points" is a four-column array containing {x, y, X, Y}.
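Since the attachment itself is a VI, here is a rough textual sketch of the same correction, under my assumptions about it: each row of "Registration Points" holds pixel coordinates (x, y) and the known real-world coordinates (X, Y), and the learned calibration is represented by a stand-in callable (the toy `cal` below is not NI's calibration model).

```python
def mean_offset(registration_points, world_from_pixel):
    """Average (dX, dY) between the known real-world coordinates and the
    coordinates the learned calibration reports for each registration
    point -- the offset to feed back via IMAQ Set Calibration."""
    n = len(registration_points)
    dx = sum(X - world_from_pixel(x, y)[0]
             for x, y, X, Y in registration_points) / n
    dy = sum(Y - world_from_pixel(x, y)[1]
             for x, y, X, Y in registration_points) / n
    return (dx, dy)

# Toy calibration: 0.1 mm/px with a 3 mm origin error in both axes.
cal = lambda x, y: (0.1 * x - 3.0, 0.1 * y - 3.0)
print(mean_offset([(10, 10, 1.0, 1.0), (50, 20, 5.0, 2.0)], cal))
```

If all registration points disagree with the calibration by the same (dX, dY), the error really is a pure origin offset and this correction removes it exactly; a spread in the per-point differences would indicate a scale or rotation error instead.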





Message 4 of 4