Machine Vision


IMAQ Image Timestamp?

I am using an NI 1722 Smart Camera with low-level triggered snap code (see the LabVIEW example "LLTriggered Snap.vi", action = "Trigger each buffer") to acquire a sequence of 5 images based on external electrical trigger pulses. The pulses (and hence the images) are spaced 60-150 ms apart.

 

Is there a way to "timestamp" each acquired image with the exact millisecond timer value at which it was acquired? 

Message 1 of 8

As far as I know, there is no timestamp attached to an acquired image.

What you can do is the following: use IMAQ Write Custom Data to add custom information to the image.

Something like this:
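In text form, the idea is roughly this (a minimal Python sketch; `write_custom_data` is a hypothetical stand-in for the IMAQ Write Custom Data VI, and the dict stands in for an image reference):

```python
import time

# Hypothetical stand-in for IMAQ Write Custom Data: attach a key/value pair
# to an image so the information travels with the image through the program.
def write_custom_data(image, key, value):
    image.setdefault("custom_data", {})[key] = value

# Right after the image is returned, tag it with the current ms timer value.
image = {"pixels": None}                               # placeholder image reference
write_custom_data(image, "Timestamp_ms", int(time.monotonic() * 1000))
print(image["custom_data"]["Timestamp_ms"])
```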

 

 

This should also work on the Smart Camera, I guess.

 

Andrey.

Message 2 of 8

Thanks, Andrey, for your suggestion. It may be useful if I need to attach the timestamp to the image. Unfortunately, I don't think I explained the situation well enough: I would like the timestamp at the exact instant the image was acquired by the camera, and I am acquiring 5 images in a single low-level acquisition, spaced ~60-150 ms apart.

 

"As far as I know, there is no timestamp attached to acquired image."

 That's what I was afraid of. I guess it won't be as simple as I thought.

 

 Here's the situation:

Mounted on the rotor of a (model of a) helicopter are 5 contacts that close a 24 V circuit attached to the trigger line of the NI 1722. 4 of the contacts cause the camera to take a picture of each of the 4 blades. An extra contact lies between blades 4 and 1 and produces a garbage image. When the rotor spins at a constant speed, this system produces a square wave with 4 temporally equidistant peaks plus a 5th inserted between the peaks for blades 4 and 1. The acquisition is started when the program is run (with the rotor already in motion) and captures 5 images. I need to identify blade 1: by calculating the time between each image, I can locate the dummy image, discard it, and know the next image in the array is of blade 1. The delta-t logic is sketched below.
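For reference, here is that logic as a plain Python sketch (the function name and the 150/60/90 ms gap values are made up for illustration; the real timestamps would come from the timing array):

```python
def index_of_blade_1(ts_ms):
    """Given 5 consecutive image timestamps (ms), return the index of blade 1.

    Gaps between the 4 real blade pulses are a quarter period; the extra
    contact splits the blade-4-to-blade-1 quarter into a short gap (before
    the dummy image) and a medium gap (after it).
    """
    gaps = [b - a for a, b in zip(ts_ms, ts_ms[1:])]      # the 4 delta t's
    i = gaps.index(min(gaps))
    # Ambiguous case: the smallest observed gap is the very first one. It is
    # either the short gap (dummy = image 1) or the medium gap (dummy = image
    # 0). Comparing it against the full quarter-period gaps decides, but only
    # if the timestamp jitter is small (see the jitter discussion below).
    if i == 0 and gaps[0] > 0.5 * max(gaps):              # illustrative threshold
        return 1                                          # image 0 was the dummy
    return (i + 2) % len(ts_ms)        # image i+1 is the dummy; next is blade 1

# Example: acquisition happened to start on blade 3, so the captured order is
# blade 3, blade 4, dummy, blade 1, blade 2 (gaps of 150/60/90/150 ms, made up).
print(index_of_blade_1([0, 150, 210, 300, 450]))          # prints 3
```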

 

Here's what I am working with (see attached images):

I'm using IMAQ Trigger Read2.vi to monitor the status of the trigger line that tells the NI 1722 to take a picture. When the trigger pulse (a 24 V square wave of varying frequency) goes high, it triggers the camera to take a picture, and the block I added records the ms timer value. My timing block then waits for the pulse to drop low again (so I don't record 5 identical times, since the pulse stays high for a few ms) before repeating the process for the 5 images I'm acquiring.
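In outline, the timing block does something like this (Python sketch; `trigger_is_high` is a hypothetical stand-in for polling IMAQ Trigger Read2, and `time.monotonic` stands in for the ms timer):

```python
import time

def capture_trigger_times(trigger_is_high, n_images=5):
    """Record the ms timer value at each rising edge of the trigger line."""
    times_ms = []
    for _ in range(n_images):
        while not trigger_is_high():          # wait for the pulse to go high
            pass
        times_ms.append(time.monotonic() * 1000.0)
        while trigger_is_high():              # wait for the pulse to drop low,
            pass                              # so one pulse isn't timed twice
    return times_ms
```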

 

The problem is that since the timing code is separate from the image acquisition code, the time recorder often doesn't catch the first trigger pulse. I end up with a timing array that doesn't match the images: the first time in the array is actually the time the second image was acquired. Since the square wave of pulses continues after the 5 images are acquired, I always end up with 5 recorded times, even if they are the wrong ones.

 

 

 

Message 3 of 8

Hi Blade,

 

Our IMAQdx driver does have the capability to automatically record a system timestamp in the custom image data at the point the image finishes acquiring (usually within the interrupt handler signalling that the image is complete). It is a software-based timestamp, not a hardware one (GigE Vision can additionally provide a camera-hardware timestamp), but it is recorded at a low enough level that it can still be useful for some applications.

 

Unfortunately, this does not help you on the Smart Camera, which uses the IMAQ driver. We have considered adding this as a future capability to the IMAQ driver as well, but I wanted to ask what your timing accuracy constraints are. It would be useful to know the maximum latency (time between image acquisition and timestamping) and jitter (variation in that latency across images) your application could tolerate. Details like this would let us determine whether such a feature would be useful.

 

As for your application, I would suggest not polling "Trigger Read" but rather doing something like this: start a sequence of 5 images using our "LL Sequence" example, and after calling "Start", have a loop simply polling the "Frame Count" attribute and appending a timestamp to an array each time the count changes (this should happen 5 times). Since this loop does very little and runs very fast, it should be fairly accurate (at least within your goal of determining whether a gap was 60 ms or 150 ms). After this you can simply call Get Image(s) to retrieve the images from the sequence. You may actually need to take 6 images and drop the first to ensure accurate timing for 5 whole frames, since the first one could start at any time in the gap between images.
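In text form, the loop I'm describing would look something like this (Python sketch; `get_frame_count` is a hypothetical stand-in for reading the IMAQ "Frame Count" attribute after Start):

```python
import time

def timestamp_frames(get_frame_count, n_frames=6):
    """Append a timestamp each time the driver's acquired-frame count changes.

    Records timing for 6 frames and drops the first, since its trigger can
    arrive at any point in the gap before it.
    """
    times_ms = []
    last = get_frame_count()
    while len(times_ms) < n_frames:
        count = get_frame_count()
        if count != last:                         # a new frame just completed
            times_ms.append(time.monotonic() * 1000.0)
            last = count
    return times_ms[1:]                           # keep the 5 trustworthy times
```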

 

Eric

Message 4 of 8

Eric,

 

Thank you very much for your suggestion to use the "Frame Count" attribute; it seems a better solution than "Trigger Read", but I'm not sure I understand what you mean about the first frame starting in the gap between images. The frame count is the number of images acquired since the start of acquisition, correct? The first image is acquired when the first trigger signal arrives after the program is run (and reaches that point in execution). The current issue is that the camera sometimes takes a picture before the timer can register that a picture was taken. E.g. with the frame count loop, if an image is acquired before the loop can detect that the frame count increased, that image (index 0) will not have the proper timestamp in the timing array (instead, timestamp index 0 will correspond to image index 1). But it's not consistent: with the "trigger read" loop this happens about 1 in 7 times. So how would dropping the first image every time, or dropping both the first image and the first timestamp every time, address this issue? (Or am I missing the point entirely?)

 

In response to your questions:

Latency doesn't matter at all for this application. I am only concerned with the delta t between images; the actual time an image was acquired via trigger is only a vehicle to obtain it. If each image were timestamped a minute after it was acquired, the delta t wouldn't change, as long as the jitter was 0.

 

As for the jitter, I think anything more than about 13 ms would mess things up, but I'm not positive. The extra trigger pulse (which doesn't capture a blade image) is located such that the shortest gap between images is between pulses 4 and 5, but the delta t between pulses 5 and 1 is also smaller than the other delta t's. For a certain case (cases being which blade happened to be photographed first), the program needs to discern this difference, and more than 13 ms of jitter would make that an iffy proposition.
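To put an illustrative number on it (assuming, purely for example, quarter-period gaps of 150 ms split into 60 ms and 90 ms by the extra pulse; my ~13 ms figure comes from the actual rotor timing):

```python
quarter_ms = 150.0        # gap between consecutive real blade pulses (assumed)
short_ms = 60.0           # gap from blade 4 to the extra pulse (assumed)
medium_ms = quarter_ms - short_ms           # 90 ms, extra pulse back to blade 1

# A lone short-ish first gap must be classified as either the short or the
# medium gap, so the tolerable jitter is half the difference between them.
max_jitter_ms = (medium_ms - short_ms) / 2  # 15 ms with these example numbers
print(max_jitter_ms)
```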

 

Thanks again. 

 

 

Message 5 of 8

Blade,

 

See the attached VI for what I am describing. The main advantage of my approach is that you never "miss" timestamping a trigger (or duplicate one). Essentially it uses the hardware's acquired-image count, polled in a timed loop to take advantage of its low jitter, to decide when to record a timestamp. I ran this on my Smart Camera and it timestamped images at the default 60 Hz acquisition rate with a standard deviation of around 0.20 ms. This should be more than enough precision for what you want.
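That jitter figure can be reproduced in software by looking at the spread of the recorded intervals (plain Python sketch, assuming a free-running acquisition at a fixed rate):

```python
import statistics

def interval_jitter_ms(times_ms):
    """Standard deviation of the intervals between successive timestamps.

    At a fixed acquisition rate (e.g. 60 Hz) the intervals should all be
    equal, so their spread measures the timestamping jitter.
    """
    intervals = [b - a for a, b in zip(times_ms, times_ms[1:])]
    return statistics.stdev(intervals)
```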

 

Again, you'll have to throw away the first image, since you have no idea what the time between it and the previous (non-acquired) image was.

Message 6 of 8

Thanks a lot for the code. 

 

Before you posted that reply, I made a simpler version (see image) that was similar to what I had before, replacing "Trigger Read" with "Frame Count". I finally got a chance to test it yesterday, and although I have no idea whether it's as accurate as your solution, it works every time for my application, so I'll take it. Whatever was causing the Trigger Read to miss the first timestamp doesn't seem to affect this. Also, for clarification, I do not need the time between the first image and the non-acquired image before it, only the 4 delta t's between the 5 acquired images to identify blade 1 as I am doing.

 

Once again, thanks for all the help. It's working as I have it, so I'm not going to mess with it. (The demonstration is next Wednesday, so my fingers are crossed that it keeps working.)

Message 7 of 8

I have a similar problem and am now following this (it was not available at the time of the original post but might help others):

http://digital.ni.com/public.nsf/allkb/D0488009D5AC87CF862579BA007B5F9B

Message 8 of 8