

Advanced triggering through NI-IMAQ C drivers

Dear NI-Community,

 

I'm currently working on the triggering concept of a machine vision system based on an NI PXI solution with a C/C++ application.
The system features three PXIe-1435 modules connecting the cameras via Camera Link, plus two flash devices.
All of them shall be triggered synchronously for continuous acquisition at a user-defined frame rate (< 10 Hz).
While the triggers of all devices shall be synchronized, their characteristics (i.e. delay and width) may differ from device to device.

Please refer to the appended figures (block and timing diagram) for an overview.

 

As far as I understand the documentation, this should be possible with the PXIe-1435 and the C/C++ IMAQ drivers.
However, the example programs for C/C++ cover much less ground than those for LabVIEW, and I'm quite new to the whole topic.
Even worse, we're at an early design stage where I have no access to hardware to play around with and get familiar, but at the same time I have to rule out the risk that our proposed triggering concept is not supported by the selected hardware.

So I'd be really grateful if you took the time to discuss what I have come up with so far and give feedback:

 

Camera Module #1 shall be the master, connecting Camera 1 as well as the flashes via its TTL I/O lines. It shall also control the overall image acquisition by generating a continuous pulse train whose characteristics are software/user-defined and which is started/stopped by the user/software. This pulse train is put onto the RTSI/PXI_Trig backplane bus of the PXI chassis.

The pulse generation is done via the IMAQ API call imgPulseCreate2(), with the pulse delay, width, and timebase parameters calculated by imgPulseRate().

signalType and signalIdentifier would have to be set to IMG_SIGNAL_STATUS and IMG_IMMEDIATE, respectively, to control start/stop of the generation purely in software, and the mode would be PULSE_MODE_TRAIN for continuous acquisition.
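To make the discussion concrete, here is a sketch of how I imagine the master setup. This is untested (no hardware yet); the interface name "img0", the RTSI line number, and the timing values are placeholder assumptions:

```c
#include "niimaq.h"  /* NI-IMAQ C driver header */

int main(void)
{
    INTERFACE_ID iid;
    SESSION_ID   sid;
    PULSE_ID     masterPulse;
    uInt32       delay, width, timeBase;

    imgInterfaceOpen("img0", &iid);   /* "img0" assumed interface name */
    imgSessionOpen(iid, &sid);

    /* Translate user-defined timing (in seconds) into driver ticks:
     * e.g. 1 ms delay + 99 ms width -> ~10 Hz pulse train. */
    imgPulseRate(0.001, 0.099, &delay, &width, &timeBase);

    /* Software-started train (IMG_SIGNAL_STATUS / IMG_IMMEDIATE),
     * output onto RTSI line 0, running continuously (PULSE_MODE_TRAIN). */
    imgPulseCreate2(timeBase, delay, width,
                    IMG_SIGNAL_STATUS, IMG_IMMEDIATE, IMG_TRIG_POLAR_ACTIVEH,
                    IMG_SIGNAL_RTSI, IMG_EXT_RTSI0, IMG_PULSE_POLAR_ACTIVEH,
                    PULSE_MODE_TRAIN, &masterPulse);

    /* User pressed "start": begin generating the train. */
    imgPulseStart(masterPulse, sid);

    /* ... acquisition runs here ... */

    imgPulseStop(masterPulse);
    imgPulseDispose(masterPulse);
    imgClose(sid, TRUE);
    imgClose(iid, TRUE);
    return 0;
}
```

(Error checking omitted for brevity; every call returns an IMG_ERR code that would be checked in real code.)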


Camera Modules #1 (itself) through #3 would then pick this pulse up again from the PXI_Trig line and forward it to their TTL I/O lines.
Where an exact copy of the master pulse train is needed/sufficient, this would be done by imgSessionTriggerRoute2() with source IMG_SIGNAL_RTSI / IMG_EXT_RTSIx and destination IMG_SIGNAL_EXTERNAL / IMG_EXT_TRIGy (where x and y denote the lines used).
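For the unmodified copies, the routing I have in mind would be a single call per module; assuming an already opened IMAQ session sid and lines RTSI0 / trigger line 0 (both assumptions):

```c
/* Mirror the master pulse train from the backplane to a TTL trigger
 * line unchanged. RTSI0 -> External Trigger 0 are assumed line numbers. */
imgSessionTriggerRoute2(sid,
                        IMG_SIGNAL_RTSI, IMG_EXT_RTSI0,       /* source: backplane   */
                        IMG_SIGNAL_EXTERNAL, IMG_EXT_TRIG0);  /* dest: TTL I/O line  */
```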

Where different pulse characteristics are needed, one would create another pulse with imgPulseCreate2().
In contrast to the master pulse train, those pulses would use mode PULSE_MODE_SINGLE_REARM, so that they fire once for every pulse of the master train. Their source and destination parameters would be essentially the same as for the routed copies of the master pulse train. These pulses should be armed with imgPulseStart() immediately after creation, so that they are ready whenever the user later starts the master pulse train.
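A reshaped flash trigger on one of the modules might then look like this sketch; again untested, assuming an open session sid, and with placeholder delay/width values and line numbers:

```c
uInt32   delay, width, timeBase;
PULSE_ID flashPulse;

/* e.g. 5 ms delay after the master edge, 2 ms flash width (placeholders) */
imgPulseRate(0.005, 0.002, &delay, &width, &timeBase);

/* Triggered by the master train on RTSI0, output on TTL trigger line 1,
 * re-arming after each shot (PULSE_MODE_SINGLE_REARM). */
imgPulseCreate2(timeBase, delay, width,
                IMG_SIGNAL_RTSI, IMG_EXT_RTSI0, IMG_TRIG_POLAR_ACTIVEH,
                IMG_SIGNAL_EXTERNAL, IMG_EXT_TRIG1, IMG_PULSE_POLAR_ACTIVEH,
                PULSE_MODE_SINGLE_REARM, &flashPulse);

/* Arm immediately; it only fires once master edges start arriving. */
imgPulseStart(flashPulse, sid);
```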

 

The actual image acquisition would then be configured as in the triggered-snap example, using imgSessionTriggerConfigure2() so that it is in sync with the signals we routed (or created and put out) on the TTL I/O lines, with images retrieved by imgSnap().
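Following that example, I would expect the acquisition side to look roughly like this (open session sid assumed, timeout value is a placeholder):

```c
Int8 *buffer = NULL;

/* Start the capture on the same RTSI edge that drives the TTL outputs. */
imgSessionTriggerConfigure2(sid,
                            IMG_SIGNAL_RTSI, IMG_EXT_RTSI0,
                            IMG_TRIG_POLAR_ACTIVEH,
                            5000,                      /* ms timeout (assumption) */
                            IMG_TRIG_ACTION_CAPTURE);  /* trigger starts capture  */

/* Blocks until the trigger fires and one image is acquired. */
imgSnap(sid, (void **)&buffer);
```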


Long story, short question:
Is this concept basically feasible? Or does it have any obvious design flaws?

For sure some follow-up questions will pop up (for instance, I'm not sure whether grab or snap would be the better choice, or how to parameterize the API calls correctly in detail), but for the moment I would already be very thankful for any feedback or comments on the outlined concept,
possibly pointing out general misunderstandings on my part regarding the usage and capabilities of the driver API.

Cheers,
Mr. Both
