
CMOS sensor read-out

I'm having some more trouble with my current application involving triggered analog measurements with a PCI-MIO-16XE-10:

I originally posted here (Original problem with read-out schematics) and here (where I got some example code from NI - thanks, by the way).

The read-out scheme of the chip in question is shown below:


I have now advanced to the point that I no longer need the evaluation board supplied with the CMOS sensor.  I'm powering the chip from the +5V output of my card (not shown above) and supplying both CLK (100 Hz to 800 kHz supported by the chip - T1) and ST via Counter 0 and Counter 1 respectively, which also sets the overall integration time of the chip.  The gain option is currently not implemented.  I've verified the behaviour of both of these signals, and they seem correct.  The code shown below is used to perform this:
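(In case the screenshot doesn't come through, here's the same idea sketched in NI-DAQmx C syntax purely for readability - my actual program uses Traditional NI-DAQ counter VIs in LV 6.1, and the device name "Dev1" and the 400 kHz rate below are placeholders:)

/* Hypothetical sketch only: continuous CLK pulse train on counter 0.
   Error checking omitted for brevity. */
#include <stdio.h>
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle clk = 0;
    DAQmxCreateTask("", &clk);
    /* 50% duty-cycle pulse train inside the chip's 100 Hz - 800 kHz range */
    DAQmxCreateCOPulseChanFreq(clk, "Dev1/ctr0", "", DAQmx_Val_Hz,
                               DAQmx_Val_Low, 0.0, 400000.0, 0.5);
    DAQmxCfgImplicitTiming(clk, DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(clk);
    /* ST is generated the same way on Dev1/ctr1; its period sets the
       chip's integration time */
    getchar();              /* keep running until a key is pressed */
    DAQmxClearTask(clk);
    return 0;
}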



The chip outputs (Video, EOS, Trig) are being monitored.  The falling edge of the EOS signal is used as a STOP trigger, the rising edge as the START trigger.  The Trig output should start a single A/D conversion, so that all 1024 triggers result in 1024 data points being recorded.  To do this, I've connected it to PFI0, defined it as the "scan clock" and set it to trigger on a falling edge.  The code below is used to perform this:
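(Again as a hypothetical C-syntax sketch rather than my actual LV 6.1 diagram - the channel "Dev1/ai0", the +/-5 V range and the 800 kHz rate are placeholders:)

/* Sketch: AI channel clocked by the chip's TRIG output on PFI0. */
void acquire_line(void)
{
    TaskHandle ai = 0;
    float64 data[1024];
    int32 nRead = 0;

    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "Dev1/ai0", "", DAQmx_Val_RSE,
                             -5.0, 5.0, DAQmx_Val_Volts, NULL);
    /* TRIG on PFI0 is the real scan clock; 800 kHz is only the upper
       bound DAQmx uses for buffer sizing */
    DAQmxCfgSampClkTiming(ai, "/Dev1/PFI0", 800000.0, DAQmx_Val_Falling,
                          DAQmx_Val_FiniteSamps, 1024);
    DAQmxStartTask(ai);
    /* numSampsPerChan = -1: block until all 1024 samples are in */
    DAQmxReadAnalogF64(ai, -1, 10.0, DAQmx_Val_GroupByChannel,
                       data, 1024, &nRead, NULL);
    DAQmxClearTask(ai);
}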



This is the theory.

In practice, it almost works.  Using the code attached, I can get almost all the data points.  Sometimes I get the full 1026 (1024 plus the minimum 2 pre-trigger scans described in the code), whereas sometimes the count can go as low as 500; around 1000 data points is typical.  I've tried reducing the base clock of the chip to around 4 kHz, but the problem remains.  It seems that either the code or the card is missing some triggers...  The actual scan rate for the A/D conversion is set much faster (up to 8 times) than the rate at which the VIDEO and TRIG signals are being generated.

I've made a scan of the inputs (Start, EOS and Trigger) via AI measurements and everything SEEMS to be OK.  I've also verified multiple times that there ARE 1024 triggers being received, with the correct spacing, polarity and duration.  The signals going to and from the CMOS chip seem to be exactly as shown in the scheme above, but I can't get the software to return the correct number of samples.



One function I simply do not fully understand is the "AI Buffer Read" function.  I suspect there may be something wrong with how I'm using it, but every configuration other than the one the NI-supplied example uses works even worse.

I'm using LV 6.1 on Windows 2000 with NI-DAQ version 6.9.

If anyone has made it this far in my post and has experience with this kind of acquisition, please help......

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 1 of 17

Neat app and nice thorough walk-through.  I think I mostly understand what you're after, but I suppose we'll see...

1. Based on the timing diagram of the signals, I'm not seeing the advantage of using both Start AND Stop triggers.  Two alternate modifications that come to mind are:

A. Use falling edge of EOS as Stop trigger.  Don't use a Start trigger.  When calling AI Read, read with offset = -1024 and read spec = relative to trigger. 

B. Use rising edge of ST as a Start trigger.  Don't use a Stop trigger.  Perform a finite acquisition of 1024 samples.  (There's a rough sketch of this option after the list below.)

2. Not sure what to expect with your current use of AI Read.  Odds are good that the "no change" read spec will map back to a default of "current read mark" meaning the earliest acquired but unread data.  I don't know how that interacts with a stop-triggered acquisition, but it seems plausible that it may contribute to your problem.

3. Don't recall what the upper numeric output from AI Read is -- is it "# Read" or "# available"?  I'm assuming it's "# available" which would cause the loop to make more sense, but I'm used to seeing that output come out the bottom.  Maybe it comes out the bottom of the so-called advanced terminal, AI Buffer Read?  In any case, maybe you can make better sense of things if your loop always reads 0 samples (to give you visibility to the output values from AI Read), then terminate the loop based on those outputs.  After the loop, Read your whole chunk of data at once.
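In the same C-style sketch notation you used above, option B would look roughly like this (hypothetical DAQmx-flavoured pseudocode - I know you're on Traditional NI-DAQ, so treat it as the idea only; the PFI lines are guesses):

/* Option B sketch: ST rising edge arms a finite, externally clocked
   acquisition; no stop trigger needed.  Reuses ai/data/nRead from the
   earlier sketch. */
DAQmxCfgSampClkTiming(ai, "/Dev1/PFI0", 800000.0, DAQmx_Val_Falling,
                      DAQmx_Val_FiniteSamps, 1024);
DAQmxCfgDigEdgeStartTrig(ai, "/Dev1/PFI1", DAQmx_Val_Rising);
DAQmxStartTask(ai);
/* numSampsPerChan = -1: block until all 1024 samples have arrived */
DAQmxReadAnalogF64(ai, -1, 10.0, DAQmx_Val_GroupByChannel,
                   data, 1024, &nRead, NULL);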

Just a couple thoughts to get the ball rolling...

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 2 of 17
Kevin,

thanks for
  1. Reading the entire post
  2. Posting a constructive answer
I'll try to explain my thoughts.

Re start AND stop trigger....  The current chip outputs 1024 pixels, but there are other very similar chips which output 256, 512, 2048 or whatever.  I'd ideally like the program to be able to work automatically with ANY of these, hence the start and stop signals.

I don't know anything about AI read either.  It was in the example code posted for me by Andre Saller (much thanks by the way).
As far as I know, the output wired to the shift register is indeed the Backlog of scans in the buffer.  This way, we are only reading scans already in the buffer.  The first AI buffer read call reads zero scans, but it allows the current backlog to be fed into the next loop iteration and so on until the buffer has been read (1026 data points).  It seems that I'm missing something here though.  Unfortunately, I've been unable to make head or tail out of the help descriptions for this function.  I think if the buffer overruns, than the VI produces an error.  Funnily enough, slowing down the CLK signal to something pathetic like 4 kHz doesn't solve the problem, so software timing problems seem unlikely.

Thanks for the help

Shane
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 3 of 17
How about...
 
1. Rising edge of ST = Start trigger, Falling edge of EOS = Stop trigger  (It didn't look like the EOS rising edge would be a natural start trigger, though perhaps it'll work if you're restricted to using a single signal for both start and stop triggering.)
 
2. Configure buffer to be big enough to handle max # pixels, plus a little cushion.  Dunno for sure, but this "padding" may interfere with use of idea 5A below.
 
3. Keep "trigger" signal configured as external scan clock
 
4. Don't read data out of the buffer until after your acq task is "complete."  Hopefully, this can be determined reliably with available DAQ vi outputs.
 
5. Inside your loop, simply "query" the task by using the lower-level 'AI Buffer Read', and reading 0 samples.  There's an "acquisition state" output and a "scan backlog" output.
 
So you start the AI task, and then once there's a rising edge on ST, the chip starts producing analog Video and digital Trigger signals, and your data acq task starts buffering samples.  By using the start trigger, you don't buffer samples until after the ST rising edge which will be important later.
 
I would then expect that upon receipt of the Stop trigger (falling edge of EOS), your AI task should stop buffering samples.  Maybe there's some extra Trigger Config to do to enforce this?   Supposing it can be made to work that way, here are two things to look for as loop terminators:
A. check for "acquisition state" to mean "finished."  Haven't actually used this method, but it seems like it may work.
B. check for a constant and non-zero value on "scan backlog."  (It may start out at constant 0 for a while until the start trigger happens.)  I've used this method several times to react to external scan clocks that might stop after an unknown # samples.  Just throw a little 5-10 msec loop delay in there, and use a shift register to compare consecutive values.
 
6. On loop termination, the last value of "scan backlog" should represent the actual # samples between your Start & Stop triggers.  Read that # out of the buffer with "AI Read".  Since you haven't previously read any data out yet, the default values for "read spec" and "read offset" should work.
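And ideas 5B + 6 in the same hypothetical C-sketch style - Sleep() is from <windows.h>, and DAQmx's available-samples query stands in for the Traditional NI-DAQ "scan backlog" output, so again treat this as the idea rather than literal code for your setup:

/* Poll the backlog until it's non-zero and constant, then read it all.
   Assumes data[] can hold 'avail' samples. */
uInt32 prev = 0, avail = 0;
int stable = 0;
DAQmxStartTask(ai);
while (!stable) {
    Sleep(10);                               /* 5-10 msec loop delay */
    DAQmxGetReadAvailSampPerChan(ai, &avail);
    stable = (avail > 0 && avail == prev);   /* constant backlog => clock stopped */
    prev = avail;
}
DAQmxReadAnalogF64(ai, (int32)avail, 10.0, DAQmx_Val_GroupByChannel,
                   data, avail, &nRead, NULL);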
 
-Kevin P.
ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 4 of 17
Short update,

We upgraded to LV 8.20 and bought a PCI-6251 M-Series card.

Seems to work fine.

I need to speed up the code (DAQmx has a bit of overhead with stopping and starting tasks apparently) but it's working at least.

I'm going to go back to the 6.1 machine with the old card, as I seem to have hit on a small difference in the DAQmx version which may enable it to work properly on the old machine too.  Out of curiosity, I simply need to try it out.

Thanks for the help, by the way...

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 5 of 17
Since you're just getting into DAQmx, you might get some speedup by taking advantage of the DAQmx Task State Model when you need to stop and restart a task.  The more states you explicitly transition through prior to Start Task, the fewer that will be "unwound" by the call to Stop Task.  In particular, you can avoid the need to deallocate and reallocate buffer space...
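For instance (a hypothetical C-API sketch; in LabVIEW the equivalent is the DAQmx Control Task VI with a Commit action):

/* Commit once up front; Stop then only unwinds to the Committed state,
   so the buffer isn't deallocated and reallocated on every restart.
   nFrames is a placeholder. */
int nFrames = 100;
DAQmxTaskControl(ai, DAQmx_Val_Task_Commit);
for (int frame = 0; frame < nFrames; frame++) {
    DAQmxStartTask(ai);
    /* ... acquire and read one frame ... */
    DAQmxStopTask(ai);
}
DAQmxClearTask(ai);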
 
Out of curiosity, what did you end up doing to make it work?
 
-Kevin P.
ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 6 of 17
Thanks for the link to the info on the DAQmx tasks, I'll look that up tomorrow.

Basically, I changed LV version (6.1 to 8.20) and got a new card (an M-Series with a 1 MHz rate instead of an old E-Series with a 100 kHz rate).

Apart from that, the driver software is of course totally new.  I think the difference may in the end be that I slightly changed the number of pre-trigger and post-trigger samples I acquire.  I'll try it out on the "old" system and see whether it suddenly works.  Not having to access the buffers explicitly is nice, but I need to dig deeper, just to satisfy my curiosity...

Thanks again,

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 7 of 17

Hello Shane,

I am working on the same kind of application as you. I am using a CCD sensor that works exactly the same way (start, clk, eos, trigger pulses) as your CMOS sensor. The CCD sensor is made of 532*64 pixels. The only difference from your setup is that I can choose between 2 acquisition modes: line binning (the pixel signals are first summed by column before being transferred, so you get 532 video values) or area scanning (in that case you collect each individual pixel, here 532*64).

I am using a PCI-6281 (M-Series) DAQ card, LabVIEW 7.1 and NI-DAQmx.  I wrote a program that sends a clk signal and 2 consecutive start pulses (interval = integration time), and I trigger the acquisition on the 2nd start signal with the trigger signal as the sampling clock.

It works fine (at least in line binning mode) and I can collect the 532 pixels under the following conditions: clk frequency < 100 kHz and interval between start signals > 0.01 s. For my application I'd like a shorter integration time. Any idea why I am limited to 0.01 s? Is there a relation between the integration time and the clk frequency that I should consider?

For the acquisition, I used this example: http://digital.ni.com/public.nsf/websearch/878BD3188B1CD64686256F8C0060CCAB?OpenDocument

Also, I didn't manage to use the eos signal as a start or stop trigger (this signal has negative polarity and I don't know whether the digital lines detect such edges?).

In area scanning mode, using the same code does not yield the expected result. I get the 34043 pixel signals, but the integration time does not seem to be taken into account (i.e. increasing the interval between start signals does not necessarily increase the signal).

I would be happy to discuss all of this in more detail with you at some point.

Regards

Polak,

Message 8 of 17
Polak,

what's most likely limiting your integration time is the time it takes the chip to output the pixel data.  This is generally specified as X clock cycles, which means that if you double the base clock frequency, you can halve your minimum integration time.  If you find that you can't go above 100 kHz, then this will be your limitation.
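As a rough sanity check (assuming about one clock cycle per pixel, which you'd need to confirm against the datasheet): reading out 532 pixels at 100 kHz takes roughly 532 / 100,000 = 5.3 ms, which is the same order of magnitude as your observed 0.01 s floor.  At the chip's theoretical 1 MHz maximum, the same readout would drop to about 0.53 ms.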

Do you have a timing scheme for the chip?  If you can't get EOS to work, I find it unlikely you'll be able to get a proper 2D image from the chip, since you NEED a way to detect the end of each line to be able to generate a 2D image.

When using the lines as digital triggers, it's important to select a rising or a falling edge.  If your EOS has a "negative polarity", then you'll most likely want to select "Falling edge" as the trigger mode.

I have used both start and stop triggers with DAQmx (With traditional DAQ it didn't work so well).

I'll try to upload some images of my set-up from work next week.

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 9 of 17

Hello Shane,

Thanks for your reply.

I have to supply the following signals to the chip: "start", "clk", "eos" and "select". "select" is a digital signal that chooses between the line binning operation mode (all pixels in one column are added in the shift register before transfer) and the scanning operation mode (each pixel is individually collected). You can find the timing chart here: http://sales.hamamatsu.com/assets/pdf/parts_C/C7040_C7041.pdf. Concerning the maximum clk frequency, I can go up to 1 MHz (in theory).

When "select"=1, the mode is line binning. When "select"=0, the mode is area scanning. In area scanning mode, I know when 1 line has been scanned because there is a certain number of blank pixels between each lines.

At the moment, the program I wrote performs the following operation: it sends a continuous pulse train for the clk; when you push the "start" button, it sends 2 start pulses separated by the integration time. The acquisition is triggered on the 2nd start pulse; there is no stop trigger. Can I send the 2nd start signal while the chip is still reading out the signals generated by the 1st start signal? In other words, do I have to wait until the chip has been read out entirely from the 1st scan before launching the 2nd start pulse? In that case, as you explained, I would have to increase the clk frequency. Another question I am asking myself: what does the video signal resulting from the 1st start pulse mean (given that the interval between start pulses has not yet been taken into account)? I suppose in this case the sensor integrates charge until the eos signal. I also suppose this allows the sensor to be "cleaned" before the 1st acquisition.
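In the same hypothetical C-sketch style used earlier in the thread, my start-pulse generation amounts to something like this (the counter name and the 10 ms spacing are placeholders):

/* Two ST pulses; their spacing (1/freq) sets the integration time. */
TaskHandle st = 0;
DAQmxCreateTask("", &st);
DAQmxCreateCOPulseChanFreq(st, "Dev1/ctr1", "", DAQmx_Val_Hz,
                           DAQmx_Val_Low, 0.0,
                           100.0,   /* 100 Hz -> 10 ms between pulses */
                           0.05);   /* short high time for each ST pulse */
DAQmxCfgImplicitTiming(st, DAQmx_Val_FiniteSamps, 2);  /* exactly 2 pulses */
DAQmxStartTask(st);
DAQmxWaitUntilTaskDone(st, 10.0);
DAQmxClearTask(st);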

Can the signal lines detect "negative polarity" digital edges? Do I have to change something in the configuration of the digital line used to receive the eos signal, so that the edge is detected?

Thanks again for your help. It is much appreciated,

Polak

PS: did you solve your problems by upgrading to NI-DAQmx?

Message 10 of 17