I use Vision Assistant to perform positioning and ranging calibrations on parts at the micron scale.
The camera takes a ten-megapixel image.
If I use a higher-end graphics card and a higher-resolution display, will my algorithm be able to identify features in the image more accurately?
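For context, here is a rough sketch of how I understand the pixel-to-micron scale to be fixed at acquisition time by the sensor and optics; the sensor dimensions, field of view, and sub-pixel factor below are hypothetical placeholders, not my actual setup:

```python
# Sketch: the measurement scale is set when the image is acquired,
# before anything is rendered on a display.
# All numbers are assumed/hypothetical, not real calibration values.

sensor_px = (3840, 2748)       # ~10 MP sensor, hypothetical dimensions
fov_um = (2000.0, 1430.0)      # imaged field of view in microns (assumed)

um_per_px_x = fov_um[0] / sensor_px[0]
um_per_px_y = fov_um[1] / sensor_px[1]

print(f"scale: {um_per_px_x:.3f} x {um_per_px_y:.3f} um/pixel")

# If edge detection reaches ~0.1 px sub-pixel precision (assumed),
# the positioning uncertainty is roughly um_per_px * 0.1.
subpixel_factor = 0.1
print(f"approx. uncertainty: {um_per_px_x * subpixel_factor:.3f} um")
```

Is that the right way to think about it, or does the GPU/display actually enter into the measurement somewhere?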