LabVIEW


Help with missing iterations - TTL pulse timed loop

Solved!

Hello LabVIEW gurus,

 

I'm having an issue with an application using an externally triggered timed loop and I wonder if anyone can advise on a potentially more robust solution.

 

Background - I'm trying to slave the scanning mirrors of a fluorescence laser scanning microscope using the trigger interface of the microscope controller.

 

The microscope can be configured in software to acquire images (via raster scanning of a laser across a sample, line by line) upon arrival of a TTL input pulse. The microscope controller then generates 4 digital timing outputs:

  • pixel clock (used to configure analog input sample rate)
  • line clock (signifying the start of a new line)
  • frame clock (signifying a new image frame)
  • beam blanking pulse, which runs at the same rate as the line clock but with a longer duty cycle.

 

The sensor which collects the emitted fluorescence (a PMT) is separate from the microscope, and so data is acquired from a separate amplifier at the rate of the pixel clock - I'm using an NI PCIe-6374 multifunction IO card for all of this.

 

The GUI I've written in LabVIEW cannot talk directly to the microscope hardware, so instead I set it up to trigger a short sequence of images (3-4 is sufficient) and then use counters to capture the characteristics of the 4 digital timing outputs. I can then configure the LabVIEW GUI to perform various imaging tasks using these settings - so far, so good.

 

The problem I'm having is how best to implement triggering and data acquisition without missing loop iterations. I've attached a sample VI using a timed loop with the beam blanking pulse as the trigger for the loop (it's crude, sorry). A digital line is set up and started to trigger the microscope, then the loop begins. A second trigger can be sent to the microscope if just one frame is required, as it doesn't stop scanning until the frame finishes.

 

What I found, however, is that the loop misses iterations, despite being triggered by an external TTL pulse. It's triggering at about 500 Hz, so nothing crazy. The pixel clock runs somewhere between 600 kHz and 1 MHz, but the 6374 should be able to handle up to 3.5 MS/s, so I don't think that should be a problem (I could be wrong).

 

At first there were quite a few things going on in the loop, but even after stripping them out it didn't make a difference. Reducing the number of samples to acquire also made very little difference (even going down to 10 samples, from over 500, made no change in the number of missed iterations), so AI sampling doesn't immediately leap out at me as the problem.

 

TL;DR: Can anyone suggest a way to reduce the number of missed iterations when using an externally triggered timed loop?

 

Any help would be greatly appreciated. Is there a better way to achieve what I'm trying to do that doesn't require loops in this way?

 

Thanks!

Allen

Message 1 of 7
Solution
Accepted by topic author Oldbhoy

I can give a few initial thoughts, but don't have a solid enough grasp on the whole thing to lay out a detailed solution.  (Not a criticism of your write-up, which was FAR above average -- it's just that you have a complicated set of signal and timing interactions in a field and with equipment I'm not familiar with.)

 

1. You mentioned counter measurements that I didn't see in the code.  I was hoping to decipher some things by seeing what you were set up to monitor or measure.

 

2. I presume your 2-sample DO task is generating a short pulse for your microscope.  It isn't *wrong* to do that with a fast hardware sample clock, but it might be *wasteful* as it doesn't appear that you're trying to establish hardware-sync with any other DAQmx tasks.

 

3. Your X-series device supports hw-retriggerable finite AI acquisition which you'll probably end up wanting to use.

 

4.  There's overhead involved with starting and stopping the AI task inside your Timed Loop.  That's probably what's preventing you from keeping up with a 500 Hz pulse rate.

    You can reduce the overhead by "committing" the task before the loop (search "DAQmx task state model" for more info).  In the long run though, I suspect the Timed Loop won't be part of a good robust solution.  Handling the signal timing relationships in hardware, if possible, will be a better goal.

 

You'll obviously keep needing your AI task.  You *might* need 1 or 2 counter tasks as timing signal helpers, but I can't say for sure yet.  And you'll still need to generate a pulse for your microscope, which you could do more simply with a software-timed DO task or a counter output task.

 

 

- Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 7

Hi Kevin,

 

Thanks so much for the detailed reply. Yes, I'm sorry if any details are missing, but I'm honestly such a DAQmx noob that I'm not even sure what's important sometimes. I'll attempt to answer your points as best I can. If you skip to point 3, you'll see that your suggestion of finite retriggered analog input worked! (See also the attached VI snippet.) Is there a reason why this works inside a loop by itself, and yet without retriggering enabled I needed to start and stop the task inside the loop each time?

 

1) I've attached a prototype of the VI used to measure the microscope pulse timings. This is executed for 3 different input lines using separate counters. The microscope is triggered to start scanning as in the previous VI I posted, then the pulse frequency and duty cycle are measured for the frame clock, pixel clock and beam blanking pulse. These can all change depending on how the microscope software is configured to acquire an image. This process is not used again until the user changes the image capture settings on the microscope, and isn't involved in the VI I originally attached. I realise this may be a less-than-adequate way of doing things, but that part actually worked quite well.

 

The main reason for doing this is that I don't have any other way of having the LabVIEW GUI determine the image acquisition parameters from the microscope software, other than a best guess using the timing data from the external clock pulses it generates.

Some more background, in case it helps

 - In general, the image is made up of a number of lines (in Y) and each line contains X pixels. The resolution of the scanned images can be found from the timing clocks as follows:

  • counting the number of pixel clock pulses between successive line clock pulses and multiplying by the duty cycle of the beam blanking clock (~0.426 for a unidirectional scan; roughly doubled for a bidirectional scan) gives X
  • counting the number of line clock pulses occurring between successive frame clock pulses gives Y
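To make that arithmetic concrete, here is a sketch in plain Python (since a LabVIEW diagram can't be pasted as text); the specific counts and duty cycles used are illustrative assumptions, not measured values:

```python
# Plain-Python sketch of the resolution arithmetic described above.
# The example counts and duty cycles are illustrative, not measured.
def image_resolution(pixel_clocks_per_line, blank_duty_cycle,
                     line_clocks_per_frame):
    """Estimate (X, Y) image resolution from the measured clock timings.

    X: pixel clocks per line times the beam-blank duty cycle (~0.426 for a
       unidirectional scan, roughly doubled for bidirectional).
    Y: line clocks counted between successive frame clocks.
    """
    return round(pixel_clocks_per_line * blank_duty_cycle), line_clocks_per_frame

# e.g. ~1202 pixel clocks per line at duty cycle 0.426 -> a 512-pixel-wide line
x, y = image_resolution(1202, 0.426, 512)
```

With those illustrative numbers, `image_resolution(1202, 0.426, 512)` gives `(512, 512)`; at the doubled bidirectional duty cycle of ~0.852 the same line count gives a width of 1024.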

 

It's possible this could be done on the fly, but at the moment I just don't have the know-how to do it.

 

2) Yeah, I agree. I can try different things with this. I was just concerned during testing that some kind of delay between generating this signal and the resultant incoming microscope triggers might have been responsible for missing a pulse or pulses at the start of acquisition. I was originally using a flat sequence structure to ensure the trigger started before the loop (😱), but that was a terrible idea. I think enforcing data flow with the error wire seems to be OK. The LabVIEW example for generating a software digital trigger was a bit clunky, so if you have a particular code sequence in mind I'd love to see it.

 

3. Tried this and it seemed to solve the problem, although it still requires a loop (is there any way of avoiding that? I guess not). I need to do more testing, as my grasp of the DAQmx tools is woeful. I wish someone would write a thorough textbook-sized manuscript on this; I'd buy it! (Hint hint, Kevin, I've seen loads of your previous posts. You know your stuff, and I bet you'd be the perfect man for the job 😀)

 

4. I did originally include a commit (via the DAQmx Control Task VI) right before the DO trigger was sent to the microscope, but it didn't seem to make much difference. Would you recommend this in my current VI iteration as well?

 

Message 3 of 7

More thoughts and questions...

 

1. What is the timing of the "frame clock" pulse edges relative to the other timing signals?  And a "frame" is the 2D image created by X pixel clock cycles per line and Y line clock cycles per frame, right?

 

2. Is there any particular time gap between the end of one frame and the start of the next one?  Or do you only get 1 frame per image (putting the time gap into the human interaction realm of timing rather than milli- and micro-sec)?

 

3. As I tried to pose this next question, I think I started to get a better understanding.  Let me turn it into a set of statements that you can correct as needed.

- Your microscope has 2 modes for doing a single line scan, unidirectional and bidirectional.

- In bidirectional mode, the line scanner sweeps one way and then back in the reverse direction.  I assume the microscope averages the results before making the final image?  You end up with 2 pixel clock cycles for each image pixel, first generating pixel 1-->pixel X then reversing direction to generate pixel X-->pixel 1.

- In unidirectional mode, the line scanner sweeps one way only, pixel 1-->pixel X.

- You use this pixel clock to sample your AI data.  You want pixel-by-pixel correlation between your AI data and the microscope image.  Thus you need to know whether the set of pixel clock cycles occurring within a line clock cycle represents a unidirectional or bidirectional scan.  If bidirectional, you need to split your data in half, reverse the 2nd half, and then average across the two halves.

- Your method for determining uni- vs. bi- is from the duty cycle of the beam blanking clock.  The lower duty cycle for uni- means that the beam is "blanked" during the reverse sweep.

 

I'm beginning to work up some ideas for how to approach this via DAQmx and your device.  I'll make that a followup post and finish this one by answering some of your questions.

 

Is there a reason why this [RE: finite retriggered AI] works inside a loop by itself and yet without retriggering enabled, I needed to start and stop the task inside the loop each time?

Yep, it's kinda one of the main points for adding support for hardware-retriggerable tasks.  A regular finite task needs the software API call to be explicitly stopped before it can then be explicitly re-started.  This adds overhead that prevents apps like yours from responding to every 500 Hz trigger signal.  With hardware retriggering, I think the only constraint is that you need to start reading data out of the task buffer before the next trigger signal causes the driver to want to start delivering new data into the buffer.  (Possibly, but I don't know for sure, you might need to read *all* the data out.)   This probably occupies some microsec worth of time, but still lets you respond to triggers at (maybe) 10 kHz or more.
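A rough sketch of that configuration, translated into the nidaqmx Python API since LabVIEW code can't be shown inline. The device and terminal names ("Dev1", "/Dev1/PFI0") are placeholder assumptions, and the task obviously needs NI hardware to run:

```python
# Hypothetical nidaqmx-Python sketch of a hardware-retriggerable finite AI
# task. "Dev1"/"ai0"/"PFI0" are placeholder assumptions; nidaqmx is imported
# inside the function because nothing here runs without NI hardware.
def retriggerable_ai_task(rate_hz, samps_per_trigger,
                          channel="Dev1/ai0", trigger="/Dev1/PFI0"):
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    task = nidaqmx.Task()
    task.ai_channels.add_ai_voltage_chan(channel)
    task.timing.cfg_samp_clk_timing(rate_hz,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=samps_per_trigger)
    task.triggers.start_trigger.cfg_dig_edge_start_trig(trigger)
    # The key setting: the task rearms in hardware after each finite burst,
    # so no software stop/start is needed between 500 Hz triggers.
    task.triggers.start_trigger.retriggerable = True
    return task

def samples_per_line(pixel_clock_hz, active_line_time_s):
    """Pure arithmetic: how many samples one trigger's burst should cover."""
    return round(pixel_clock_hz * active_line_time_s)
```

Each read then pulls one burst's worth of samples; the only software-timing constraint left is draining the buffer before the next trigger's data arrives.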

 

I don't have any other way of having the Labview GUI determine the image acquisition parameters from the microscope software, other than a best guess using the timing data from the external clock pulses it generates

A clear timing diagram to demonstrate the relationship and dependencies of these clock pulse edges would be big step toward figuring this stuff out on the fly without a special pre-run step.  (Which is a decent workaround in the meantime, BTW.)

 

I think enforcing data flow with the error wire seems to be ok

That's pretty much my main go-to method when I need to enforce sequencing of DAQmx calls.

 

 I did originally include a commit... but it didnt seem to make too much difference. Would you recommend this in my current VI iteration as well?

A "commit" before starting minimizes the overhead involved in subsequent stop - restart cycles.  With hardware retriggering, you shouldn't need to go through stop - restart cycles any more, so there's probably no particular need for the "commit".

 

Meanwhile, before I write up an approach I have in mind, start reading up on change detection and particularly the DAQmx exportable signal known as the Change Detection Event.  Then hang on, it should be a fun ride!
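As a taste of what that reading-up leads to, here is a hedged sketch in the nidaqmx Python API: a DI task timed by change detection on the microscope timing lines, plus a counter that counts AI sample clocks but latches a count on the exported Change Detection Event. All device and terminal names are assumptions, and it needs NI hardware to actually run:

```python
# Hedged nidaqmx-Python sketch of change detection plus a buffered
# edge-counting counter. All device/terminal names are assumptions;
# nidaqmx is imported inside the function because this needs NI hardware.
def terminal(device, name):
    """Build a fully qualified terminal name, e.g. /Dev1/ChangeDetectionEvent."""
    return f"/{device}/{name}"

def change_detection_tasks(device="Dev1", lines="Dev1/port0/line0:2"):
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    di = nidaqmx.Task()
    di.di_channels.add_di_chan(lines)
    # Sample the port state whenever any of these lines changes (both edges).
    di.timing.cfg_change_detection_timing(
        rising_edge_chan=lines, falling_edge_chan=lines,
        sample_mode=AcquisitionType.CONTINUOUS)

    ctr = nidaqmx.Task()
    ch = ctr.ci_channels.add_ci_count_edges_chan(f"{device}/ctr0")
    # Count AI sample clocks (one per pixel clock) ...
    ch.ci_count_edges_term = terminal(device, "ai/SampleClock")
    # ... but latch a count into the buffer on every change-detection event.
    # The rate argument is only the maximum expected event rate here.
    ctr.timing.cfg_samp_clk_timing(
        10_000, source=terminal(device, "ChangeDetectionEvent"),
        sample_mode=AcquisitionType.CONTINUOUS)
    return di, ctr
```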

 

 

-Kevin P

 

Message 4 of 7

Hi again Kevin

 

Again, lots to think about - thanks for all of this.

 

I've attached a zip file with some images of the trigger pulses from the microscope. I just hooked them up to the analog inputs of the 6374 and set the sample rate to max (3.5 MS/s). I only have a 2-channel oscilloscope, so it was easier just to grab them all using the NI board; the rise time of the pulses will be slowed somewhat, but they're spatially distinct, so I don't think any info is lost.

 

All images use the following key:

  • White - Line clock
  • Green - Beam blanking pulse
  • Red - Frame clock

I didn't capture the pixel clock. It can run at somewhere between 600 kHz and 1.5 MHz depending on the image capture setup (note: this is really quite fast; normally the pixel clock wouldn't run faster than about 500 kHz, i.e. a 2 µs period).

The pixel clock is connected to the PMT (light detector) amplifier. PMTs often use a high-bandwidth amplifier to bring the PMT output up to a usable range, but depending on your electronics setup this means you can be throwing away a lot of the signal occurring in the inter-pixel-clock interval. The amplifier we're using is a 3-stage integrator, which allows the PMT signal to be integrated throughout the entire period between successive pixel clocks.

This PMT/amplifier combo is completely separate from the microscope. The microscope has detectors and amplifiers in it, but with our upgraded hardware/optical setup none of the light goes to them; it all goes to our detectors. The motivation for the project is to use this improved detection setup to enable more advanced imaging than we could perform previously. There are a number of reasons for not upgrading the microscope directly via the manufacturer that I won't go into in crazy detail, but suffice to say that due to budget (or lack of it) this is the path of least resistance for now.

 

Answers to specific questions below:

1) Yes, a frame is a 2D image made up of the raster scanned lines (Y), which in turn is made up of individual pixels (in X).

 

2) Yes (see the attached PNG of the inter-frame delay). Note the timebase is just sample number; it approximates to about 10,000 samples at 3.5 MS/s, or ~2.86 ms. This may not be fixed, as scan regions can be changed by the user in the microscope software GUI so that the frame is larger or smaller (and not necessarily square). The delay could represent the Y-axis scan mirror (usually the larger/slower of the two) taking a finite time to return the beam to the start position ready for the next scan. I would need to test this in more detail, and might do if I have time.

 

3)

- Your microscope has 2 modes for doing a single line scan, unidirectional and bidirectional.

Yes, effectively - bidirectional is rarely used for high-resolution imaging, though, as it introduces errors. My understanding is that for galvanometer scanners there is an added degree of complexity in getting the mirror to sweep at exactly the same rate backwards as forwards, so bidirectional scanning is sometimes implemented by third-party users of these scanning systems as an additional feature, with some correction then applied to the final image to compensate for the offset of adjacent scan lines. However, there is a lot of functional data to be gained from simply scanning in one axis (or scanning a repeating pattern that doesn't necessarily build an image of the sample you are scanning). In those cases higher speed is desirable, and bidirectional scanning offers the ability to "see" rapid events. This is more info than you need, but maybe you understand better now why I want to cover both unidirectional and bidirectional modes.

 

- In bidirectional mode, the line scanner sweeps one way and then back in the reverse direction. I assume the microscope averages the results before making the final image? You end up with 2 pixel clock cycles for each image pixel, first generating pixel 1-->pixel X then reversing direction to generate pixel X-->pixel 1.

That's almost it. The scanner sweeps in one direction, during which many pixel clocks will have occurred; each one makes up a pixel in one line of your image (imagine a row of pixels in a standard digital image). On the flyback, the Y axis has moved just enough that you're scanning a line just below (or above) where you scanned before. The result is X pixels per line over 2 lines (X can be 128, 256, 512, 1024 or 2048). You'll notice there is a time delay between the line clock (white in the images) and the beam blank (green). This is because the scanning mirrors only move at a constant rate for a fraction of their travel, and to get an image that isn't aberrated or smeared by this non-linear sweep of the laser across the sample, signal is acquired only during the high period of the beam blank. In bidirectional mode the beam-blank duty cycle is simply doubled, so the sweep back isn't corrected in the same way (see below).

 

- In unidirectional mode, the line scanner sweeps one way only, pixel 1-->pixel X.

See above

 

- You use this pixel clock to sample your AI data. You want to pixel-by-pixel correlation between your AI data and the microscope image. Thus you need to know whether the set of pixel clock cycles occuring within a line clock cycle represent a unidirectional or bidirectional scan. If bidirectional, you need to split your data in half, reverse the 2nd half, and then average across the two halves.

Bingo!

 

- Your method for determining uni- vs. bi- is from the duty cycle of the beam blanking clock. The lower duty cycle for uni- means that the beam is "blanked" during the reverse sweep.

Correct

 

I'd already had a look at change detection for the timed loop, but now I think I know where you may be going with this.

 

One issue I couldn't work out in my head is how to stop the while loop acquiring data without prior knowledge of the number of lines it's expecting. I was using a variable in a shift register to count the loop iterations; when it reached the correct value (Y resolution - 1), a new frame has begun and the variable goes back to 0. Without this check the AI operation times out and an error occurs. The frame clock occurs, rather inconveniently, a very short time AFTER the first line clock, but since the beam blank clock controls AI acquisition, do you think I could use the frame clock in a timed loop to implement multiple frame scans?
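To make that counting logic concrete, here is a plain-Python model of the shift-register counter (illustrative only; in the real VI the check lives inside the LabVIEW while loop):

```python
# Plain-Python model of the shift-register counting logic described above.
def line_counter_steps(y_resolution, n_line_clocks):
    """Count line clocks; each time the counter hits y_resolution - 1,
    flag a completed frame and wrap the counter back to 0."""
    count = 0
    frames_completed = 0
    for _ in range(n_line_clocks):
        if count == y_resolution - 1:
            frames_completed += 1
            count = 0
        else:
            count += 1
    return frames_completed, count
```

For example, 1536 line clocks at a Y resolution of 512 yields 3 completed frames with the counter back at 0.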

Message 5 of 7
Solution
Accepted by topic author Oldbhoy

That's a LOT of info, and I'm going to have to more fully absorb it after this initial reply.  Not enough time right now.

 

The most important new info I gleaned in this first pass is that an image isn't necessarily square, which I assume to mean that from one line to the next, you may have different #'s of pixels to scan and they might start at a different X offset position?

 

Fortunately, I don't think that'll greatly affect the scheme I'm thinking of.  There may be a few more details to manage precisely, but it all still seems doable.

 

Basic background: the entire plan rests on the notion that you should accumulate your AI data in something equivalent to a 1D array, and then figure out how to chop it up, reverse  some portions, and reassemble in the appropriate order afterwards.  I don't think there's any other way since you won't know the microscope's imaging and timing configuration in advance.

 

The concept is relatively straightforward-ish; the complete implementation is not exactly *hard*, but it does have some irreducible complexity and will require meticulous attention to detail.  OK, buckled up?   Let's go...

 

1. You will be setting up a *continuous* AI task that uses the pixel clock as a sample clock.

2. You will further use a producer / consumer approach to deliver that AI data to an independent loop for accumulation and processing.

3. You will also set up a DI task that uses "Change Detection" as its sample timing type.  The microscope's Frame, Line, and Blanking timing signals will be acquired in this task.  Be sure to wire them into port 0 (which supports hardware timing).

4. You could set it up to be sensitive to both rising and falling edges of all those timing signals.  On the other hand you might not need to.  There will be some interaction and dependency between which edges DI change detection is sensitive to and how you put together your post-processing algorithm for the AI data.  I'm not confident which approach will be simpler or more robust overall.

5. You will set up a nifty little counter task to help pull this all together.  It will be configured as a buffered edge-counting task. 

6. You'll need to use a DAQmx Channel property node to specify that the edges to count will be the AI task's sample clock.   It will be named something like "/Dev1/ai/SampleClock". 

7. When you call DAQmx Timing, you'll right-click the 'source' input and create a constant or control.  You'll then right-click *that*, choose "I/O Name Filtering..." and then Include Advanced Terminals.  Then you'll be able to pick the named signal similar to "/Dev1/ChangeDetectionEvent".

8. <pause and take a breath, we're getting there>

9. Here's what's gonna happen.  On every configured change detection edge from one of those microscope timing signals, your DI task will sample the digital state of those signals (the post-change state) and your CTR task will sample its own internal count register value.

10.  The count register value represents the cumulative # of AI sample clocks seen.

11.  This lets you correlate the *state* of all the timing signals at the instant any of them change with the specific AI sample #.  You can then run an algorithm to track the state of the timing signals, and figure out how to slice and dice your AI data into the appropriate image.  Algorithm left as an exercise for the reader.  🤔

12. You'll deal with your DI task and your CTR task together in one loop, but *not* the same loop as your AI task.  You'll again use some producer / consumer arrangement to move the data elsewhere for post-processing.

13. The CTR task should be started before DI.  And also before AI.  All of them should be started before you issue the "start" pulse to your microscope.  You can probably use a different counter to generate the start pulse.

14. You will need some method for your software to "notice" when a frame is complete.  That can initiate your post-processing algorithm that'll crunch your AI data into proper pixel-for-pixel correlation to the microscope image.

15. The algorithm.  Ah, the algorithm!   You'll be scanning through your DI data looking for the sample # where the timing signals are in the next target state.  Then you'll get the count value at the corresponding index from your CTR data.  That count value then tells you the corresponding sample # within the AI data.  And then the fun starts.

    [Thought experiment note: there's a little off-by-one thingy going on that may work in your favor.  Consider the very start.  The instant when the Blanking clock first goes high is slightly earlier than the 1st AI sample taken for that line.  So the count value at that instant will be 0 because there have been no pixel clocks / AI sample clocks yet.  But the very 1st AI sample taken *after* that instant will, in fact, reside at index #0 of the AI array.  So it looks like you're lucky and one of the many meticulous details already worked itself out for you.  Don't worry, there'll be plenty more.]
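Step 15's slicing pass might look something like the following plain-Python sketch, run on synthetic data. Here `event_counts` and `blank_states` stand in for the CTR and DI buffers; the real version has to handle all the meticulous details noted above:

```python
# Plain-Python sketch of the slicing algorithm (step 15), with synthetic data.
# event_counts/blank_states stand in for the CTR and DI task buffers.
def assemble_lines(ai_data, event_counts, blank_states, bidirectional=False):
    """Slice a 1D AI record into image lines using change-detection events.

    event_counts[i]: cumulative AI sample clocks latched at the i-th event
    blank_states[i]: beam-blank level just after that event (True = high)
    Samples taken while the blank signal is high form one line; in
    bidirectional mode every second line is reversed before assembly.
    """
    lines = []
    start = None
    for count, high in zip(event_counts, blank_states):
        if high:                      # blank went high: a line starts here
            start = count
        elif start is not None:       # blank went low: the line just ended
            lines.append(ai_data[start:count])
            start = None
    if bidirectional:
        lines = [ln if i % 2 == 0 else ln[::-1]
                 for i, ln in enumerate(lines)]
    return lines
```

For example, with `ai_data = list(range(12))`, events at counts `[0, 4, 6, 10]` and blank states `[True, False, True, False]`, it returns `[[0, 1, 2, 3], [6, 7, 8, 9]]`, with the second line reversed in bidirectional mode.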

 

 

-Kevin P

Message 6 of 7

Hi Kevin,

Sorry for the slow response.

 

That was a lot to work through but it really helped my understanding of simultaneously interacting with hardware counters and analog data acquisition.

 

Took me a while to get it right but it worked well, thanks so much!

 

I also accepted your solution of using hardware-retriggerable analog input as well, since I originally got everything working that way using a simpler VI. I've still to decide which solution is more robust, but I think your second one may work a bit better.

With hardware retriggering I was occasionally getting buffer overflow errors. I think this was a concurrency issue (too much other stuff going on in the background processing the resulting data). When I refactored the code to make fewer calls while this loop was running and manually increased the buffer size, this seemed to solve the issue. It was only ever a problem when I ran the image capture (i.e. scanning) in continuous mode; during serial measurements to acquire image stacks it's much less of an issue.
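One hedged way to pick that manual buffer size is simply several frames' worth of samples (plain-Python arithmetic; the nidaqmx-Python property name shown in the comment is an assumption relative to the DAQmx buffer property node used in LabVIEW):

```python
# Illustrative buffer-sizing helper. The nidaqmx property name in the
# comment below is an assumption (the LabVIEW equivalent is the DAQmx
# Buffer property node's Input Buffer Size).
def suggested_input_buffer(samples_per_line, lines_per_frame, frames_of_margin=4):
    """A generous DAQmx input buffer: several frames' worth of samples, so a
    busy consumer loop has slack before the buffer overflows."""
    return samples_per_line * lines_per_frame * frames_of_margin

# e.g. task.in_stream.input_buf_size = suggested_input_buffer(512, 512)
```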

 

Cheers!

Message 7 of 7