
Image acquisition with FlexRIO 7966R & 1483 module

Hi,

 

I am using a FlexRIO 7966R, an NI 1483 Camera Link adapter module, and a Basler camera (ace 340km NIR) for centroid processing. I am using the example "1-Tap 8-Bit Camera with Centroid (FPGA).vi".

I want to run the acquisition loop at a custom, user-defined rate, but the block diagram states that the acquisition loop rate should equal the "Image data clock of IO module".

 

1. Can anyone tell me what the Image data clock of the IO module is, so that I can configure it? Is it the clock of the acquisition card I am using, or something else?

 

2. Basler provides the pylon software to configure camera settings over the Camera Link interface, but I want to configure the camera settings from LabVIEW using the FlexRIO and the 1483 Camera Link card. Please suggest how I can do this with the existing setup.

 

Thanks

Message 1 of 20

Hi!

 

There are two places I usually go when I have questions on FlexRIO adapter modules and the associated Component Level IP (CLIP) in LabVIEW FPGA. The first is the adapter module CLIP reference, which describes all of the signals we have access to, and the second is the adapter module manual and specifications document. I'd definitely take a look at the adapter module CLIP reference, which can be found by searching the LabVIEW Help.

 

You can check the Image Data Clock frequency by right-clicking the IO Module in the LabVIEW project and going to Preferences, then Clock Selections. It should list the current clock frequency in the far right column. This clock frequency should always remain faster than the pixel clock that is currently set; that's why 100 MHz is the default, given Camera Link's maximum supported clock frequency of 85 MHz. If you want to run your Single-Cycle Timed Loop at a slower rate for acquisition, you can use this preferences pane to tie the Image Data Clock to a lower-frequency clock, as long as that frequency is still faster than whatever pixel clock you have set.

 

As for configuring camera settings using the NI 1483R: I am not as familiar with the Camera Link standard, but it looks like we have four LVDS lines on the Camera Link connector that we can use to send control signals to adjust items such as exposure or frame rate. You will likely have to look at the manual for your ace-series camera to figure out exactly what those signals look like, but you would have to implement that communication protocol on the FPGA.
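For what it's worth, the Camera Link serial-control lines are an asynchronous serial (UART-style) link, and cameras commonly default to 9600 baud, 8N1. As a rough Python sketch of the timing arithmetic you would implement on the FPGA (the 100 MHz clock and 9600 baud figures here are assumptions; check them against your camera's manual):

```python
# Sketch: bit timing and 8N1 framing for a software UART on the FPGA,
# assuming the Camera Link serial lines carry standard async serial at
# 9600 baud (a common power-on default) and a 100 MHz FPGA clock.

FPGA_CLOCK_HZ = 100_000_000  # assumed SCTL clock
BAUD = 9600                  # assumed camera default baud rate

def ticks_per_bit(clock_hz=FPGA_CLOCK_HZ, baud=BAUD):
    """Number of FPGA clock cycles to hold each UART bit."""
    return round(clock_hz / baud)

def frame_8n1(byte):
    """Return the bit sequence (LSB first) for one 8N1 UART frame:
    start bit (0), 8 data bits, stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]

print(ticks_per_bit())   # 10417 cycles per bit at 100 MHz / 9600 baud
print(frame_8n1(0x55))   # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

On the FPGA side this would become a counter holding each bit for `ticks_per_bit()` cycles while shifting the framed bits out on the serial-to-camera line.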

 

I hope this helps!

Rob B
FlexRIO Product Manager
Message 2 of 20

 

Hi BauerPower,

Thanks, I will look into all this information.
Message 3 of 20

I have almost the same setup and have worked with the CLIP extensively.  The CLIP operates at 100 MHz for its core clock.  There's an encrypted NI1483Core netlist, created by NI, that performs the bulk of the interface processing.  The CLIP connects the IO of the NI 1483 to the NI1483Core netlist and also connects the LabVIEW FPGA VI side interfaces (that's why the pCL and cCL signals are all OUT ports).  The CLIP is thus the glue between the GPIO, the NI1483Core, and your application.

 

The core performs the synchronization from the Camera Link pixel clock to the 100 MHz clock.  NI doesn't provide details on exactly how that works, but it is probably done through a dual-clock FIFO or a similar mechanism.

 

The 1483 itself doesn't seem to do much other than to pass Camera Link signals to the FPGA core.

 

FVAL, LVAL, and DVAL are passed through the core (synchronized).

 

In the examples, there's a signal called "Enables valid".  This signal is used to clock pixels in the 100 MHz domain.  It's essentially a copy of DVAL, but it toggles so that every pixel is clocked into the FIFOs only once.  If you are running at 80 MHz on the Camera Link, "Enables valid" will go low once every 5 clocks to ensure data isn't double-clocked.
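A quick Python model of that behavior: treating the clock ratio as "pixels per 100 MHz cycle", the enable flag must drop exactly often enough that each pixel is clocked once. The accumulator approach below is just one way to illustrate it, not necessarily how the NI core implements it internally.

```python
# Sketch: why "Enables valid" drops low once every 5 clocks when the
# Camera Link pixel clock is 80 MHz and the acquisition loop runs at
# 100 MHz. New pixel data arrives on only 4 of every 5 loop cycles.
from fractions import Fraction

def enable_pattern(pclk_hz, loop_hz, cycles):
    """Return one enable flag per loop cycle: True when a new pixel
    word is available (accumulator model of the clock ratio)."""
    ratio = Fraction(pclk_hz, loop_hz)   # pixel words per loop cycle
    acc = Fraction(0)
    flags = []
    for _ in range(cycles):
        acc += ratio
        if acc >= 1:
            acc -= 1
            flags.append(True)   # clock this pixel word into the FIFOs
        else:
            flags.append(False)  # no new data this cycle: hold off
    return flags

pattern = enable_pattern(80_000_000, 100_000_000, 10)
print(pattern)  # low exactly once in every group of 5 cycles
```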

Message 4 of 20

I am using the example VIs "10-Tap 8-Bit Camera with DRAM" and "1-Tap 8-Bit Camera with Centroid" to read images from my Basler ace-340km NIR camera, but most of the time I am getting black images and timeout error -50400.

 

When I try to compile the FPGA code with the 100 MHz clock, it shows an error, so I am compiling it after selecting the 40 MHz clock in the timed loop.

 

Please suggest what I should do to get proper images from my existing setup.

Message 5 of 20

You have to compile at 100 MHz.  It's designed to accommodate all Camera Link clocks up to 82.5 MHz.

 

I assume you are configuring the camera appropriately for Basler 10-tap mode when using the 10-tap example VI?  I'm not sure the 1-tap example will even work with the ace camera (does your ace-340km support 1-tap Base operation?).  I know my aca-2040-180km cameras do not.  The minimum tap configuration is 2-tap, which caused me no end of headaches, since NI only provides examples for 1-tap and 10-tap operation.  To get 2-tap operation, you would have to heavily modify the 10-tap example or somewhat lightly modify something like the 1-tap with Bayer example (hint: remove the Bayer decoding and pack two 8-bit pixels at a time into a 16-bit DMA FIFO or similar).
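For illustration, the 2-tap packing hint could look like the following Python sketch. The byte order is an assumption on my part; verify it against how your host-side code unpacks the 16-bit words.

```python
# Sketch of the 2-tap packing hint above: each Camera Link clock
# delivers two 8-bit pixels, which can be packed into one 16-bit
# word for a 16-bit DMA FIFO. Byte order here is an assumption;
# check it against your host-side unpacking.

def pack_2tap(pixel_a, pixel_b):
    """Pack two 8-bit pixels into one 16-bit FIFO word (A in the low byte)."""
    return (pixel_b << 8) | pixel_a

def unpack_2tap(word):
    """Host-side inverse: recover the two pixels from a 16-bit word."""
    return word & 0xFF, (word >> 8) & 0xFF

word = pack_2tap(0x12, 0xAB)
print(hex(word))          # 0xab12
print(unpack_2tap(word))  # (18, 171), i.e. (0x12, 0xAB)
```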

 

The 1-tap with centroid example is hard to convert to multi-tap (I'd say impossible), since the Vision FPGA VIs are not designed for parallel processing of multiple adjacent pixels in each clock cycle.  Remember, multi-tap means you are clocking multiple pixels into the interface on each clock cycle.  None of the FPGA image processing algorithms NI has can deal with this.  It basically makes the FPGA Vision Assistant useless for NI 1483 use with multi-tap cameras.  You could split the centroid into a separate SCTL (100 MHz) and feed it with a much slower CL input, but that's a serious pain.

 

The timeout is most likely caused by a DMA timeout.  The examples are not tolerant of any variation in camera setup.  If you are operating in 10-tap mode, you are required to calculate the exact number of bytes (DMA words) a frame will generate in order to read the DMA completely.  If you don't read the DMA completely and do multiple frames of acquisition or continuous acquisition, you will overflow the DMA FIFO.
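The bookkeeping is simple but unforgiving. A Python sketch of the frame-size arithmetic, assuming 8-bit pixels and the 64-bit DMA words discussed in this thread:

```python
# Sketch: computing exactly how many 64-bit DMA words one frame
# produces in 10-tap 8-bit mode, so the host read neither starves
# nor leaves data behind in the FIFO. Assumes 1 byte per pixel and
# 64-bit (8-byte) DMA words.

def dma_words_per_frame(width, height, bytes_per_pixel=1, dma_word_bytes=8):
    total_bytes = width * height * bytes_per_pixel
    if total_bytes % dma_word_bytes:
        raise ValueError("frame does not fill an integer number of DMA words")
    return total_bytes // dma_word_bytes

print(dma_words_per_frame(2040, 1088))  # 277440 words per frame
```

The host then reads exactly that many words per frame from the DMA FIFO; reading fewer leaves stale data for the next frame, and reading more blocks until timeout.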

 

Also, in the 10-tap with DRAM example, there is an interplay between the DRAM FIFOs, the DMA FIFOs, and the packers which is quite complex.  At the end of a frame, you must ensure the number of pixels clocked out:

  • Allows the packers (specifically the 80-to-256 packer) to be empty

  • Allows an integer number of DRAM writes (256 bits)

  • Allows an integer number of DMA writes (this will always be true, since DMA writes are 64 bits and DRAM reads are 256 bits)

 

I'll leave the math for figuring out which X and Y dimensions you need to use up to you 😉  Hint: image heights for 256-bit packers at 10-tap must be multiples of 16 lines, OR image widths must be integer multiples of 160 8-bit pixels (16 pclk).
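If you want to check candidate dimensions without doing the math by hand, the rule works out to something like the following Python sketch. The helper and its exact conditions are my reading of the constraints above, not NI's code.

```python
# Sketch of the alignment rule for the 10-tap, 8-bit DRAM example:
# the frame must drain the 80-to-256 packer and finish on whole
# 256-bit DRAM words, which works out to a width that is a multiple
# of 160 pixels (16 pixel clocks x 10 taps) OR a height that is a
# multiple of 16 lines.

def frame_alignment_ok(width, height, taps=10):
    pixels_per_clock = taps                      # 8-bit pixels per pclk
    width_ok = width % (16 * pixels_per_clock) == 0
    height_ok = height % 16 == 0
    # Either condition must also leave the total frame bits divisible
    # into whole 256-bit DRAM words:
    total_bits_ok = (width * height * 8) % 256 == 0
    return (width_ok or height_ok) and total_bits_ok

print(frame_alignment_ok(2040, 1088))  # True: 1088 is 68 * 16 lines
print(frame_alignment_ok(2048, 1087))  # False: neither rule is met
```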

 

When using the 1483, everything is pretty much up to you to implement.  Vision FPGA doesn't do anything to help.

 

 

Message 6 of 20

Hi xl600,

 

I made a VI to verify the camera configuration. I found the following default configuration in the camera:

 

  1. Tap = 2 Tap
  2. Width = 2048
  3. Height = 1088
  4. Gain = 33
  5. Exposure Time = 2500 (0x00, 0x00, 0x09, 0xC4)
  6. CL Pixel Clock = 82 MHz

 

 

So I changed the camera's tap configuration to 10-tap to use the 10-Tap 8-Bit Camera with DRAM example code. I also changed the width to 1280 and the CL pixel clock to 82.5 MHz.

 

I also changed the FIFO sizes to transfer the image properly. I have attached images of the configuration for all three FIFOs used to transfer the image.

 

After that I was able to acquire data from the camera, but it was not giving a proper image. Whenever I changed the gain and exposure time of the camera, the effect was reflected in the acquired image data, but it was not able to grab a clear image. Looking at the images, it seems the image indicator was showing two different sets of image data. I have attached the two acquired images as well.

 

I have some doubts regarding the image acquisition.

 

  1. The software is not able to arrange the pixels properly.

  2. The FIFO is not able to transfer the complete image from the FPGA to the host computer. My hardware configuration consists of a Windows PC (host), an RT PXI, and the FPGA (FlexRIO). I am transferring the image data directly from the FlexRIO to the host. Please check whether the attached FIFO configuration is correct.

  3. I have a telescopic lens mounted on my camera to capture images of a far object. It is possible the camera is not able to focus, but when I take images using a simple PXI frame grabber it gives me proper images.

  4. Can you tell me the unit of exposure time used in the camera (µs or ms)?

 

Please share your views on the above points, so that I can grab proper images from the camera.

 

Thanks
Message 7 of 20

Try setting the camera to a test image pattern.  That will surely help, since it will make clear whether actual image data is coming through or just garbage.

 

In 10-tap (Basler format) mode, you cannot have 2048 pixels of width.  The camera will clock out only 2040 pixels on each line (204 pclk).  Also, if you are using the example 10-tap with DRAM code, your width must be a multiple of 160 pixels (16 pclk in 8 bpp mode) or the height a multiple of 16 lines.  1088 (the default) will do that for you, so the 10-tap defaults of 2040x1088 should be fine.

 

Does your DMA overflow indicator keep coming on?  I would think it would if you are expecting 2048x1088 pixels and acquiring continuously at a 2500 µs exposure.  Just in case, remember that the camera will always continue to capture frames when internally triggered or when the trigger is left at its default.

 

The only documentation available for the 1483 is the CLIP itself and the help files NI provides.  You shouldn't have to make any changes to the CLIP (it works fine) unless you want to instrument it to do something unique.  In mine, I've added a few loopback signals to the GPIO to help with debugging the serial port in my application.  I probe the internals of the 1483 to check timing using the GPIO.

 

The CLIP acts as the glue logic between the FPGA VI and the NI1483Core (.ngc) file.  The core is instantiated in the CLIP.  The CLIP glues the 7965R GPIO to the core and provides the VI-visible interfaces in both the pclk domain and the 100 MHz domain.

 

There are actually six FIFOs in the example:

 

  • Cam data 16: Takes the upper 16 bits from the CL Data to Pixels polymorphic VI, essentially the upper two pixels of each clock (e.g., on pclk 0, pixels 8 and 9).
  • Cam data 64: Takes the lower 64 bits from the CL Data to Pixels polymorphic VI: pixels 0-7 (8 bpp mode).
  • Pack80to256: This is the packer which reads 80 bits at a time from the two pixel FIFOs.  It's actually a small FIFO with different write and read widths.  Internally it's a bit of a mystery how it operates, since it's a pre-compiled netlist and NI considers it proprietary.  NI assures me that newer LabVIEW FPGA examples will eradicate these in favor of VI-based packers which users can actually see into.  Regardless, the number of writes to the packer must result in a sequence of reads which exactly empties the packer at the end of each frame.
  • DRAM: The DRAM is organized as a huge FIFO (both DRAMs, for 512 MB of 256-bit words).  The packer is read directly into the DRAM on each clock cycle.
  • Pack256to64: This is another pre-compiled packer FIFO.  It's much simpler, since it reads 256-bit words from DRAM and clocks out 64-bit words to the DMA FIFO.  It will always work, because any write always results in exactly four reads.
  • DMA FIFO: This is configured as a 1023-word FIFO of 64-bit words (standard for DMA).  The default uses FPGA BRAMs (hence the 1023-word size).

The remaining FIFO is, of course, the controller-side DMA FIFO, which can be any size available in controller memory.  Given that the pclk is 82.5 MHz (80 bits) but the 256-to-64 packer is read at 100 MHz, there's a slight mismatch between write and read speed.  You generally won't notice a clock mismatch issue (80 bits written at effectively 82.5 MHz, 64 bits read to DMA at 100 MHz).  The real issue is DMA servicing, and whether the controller can pull data fast enough from its DMA FIFO.  With the 512 MB 7965R-based FIFO, the DMA process can be very sporadic and still accommodate everything, as long as it averages more data flow than the camera is producing.  I'm planning to run both my cameras at 2040x2048 8 bpp at 180 fps for short bursts.
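To put numbers on that mismatch, here is a quick Python comparison of the instantaneous rates at each end of the chain. This is illustrative only: real sustained throughput also depends on line/frame blanking and on how quickly the host services the DMA.

```python
# Sketch: comparing the instantaneous data rates at each stage of
# the FIFO chain described above, to see why host-side DMA servicing
# (not the clock mismatch itself) is the thing to watch.

def rate_gbps(clock_hz, bits_per_clock):
    return clock_hz * bits_per_clock / 1e9

camera_in = rate_gbps(82_500_000, 80)   # 10 taps x 8 bits at max pclk
dma_out   = rate_gbps(100_000_000, 64)  # 64-bit DMA words at 100 MHz

print(f"camera write: {camera_in:.2f} Gb/s")  # 6.60 Gb/s
print(f"DMA read:     {dma_out:.2f} Gb/s")    # 6.40 Gb/s
```

The camera's peak rate slightly exceeds the DMA drain rate, which is exactly why the large DRAM FIFO matters: it absorbs bursts while blanking intervals and sporadic DMA servicing even out the average.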

 

Hope this helps!
Message 8 of 20

Hi xl600,

 

Thanks for your continuous support.

 

Today I was able to acquire images. As you suggested, I tried the test images, and they came through fine. When I diagnosed the setup, I found that my telescopic lens was the cause of the completely black images.

 

But while acquiring images, the first image came out fine, and from the second image onwards it read an overlapped image. I have attached the overlapped image. I found that some portion of the previous image was appearing at the top of the current image, and the size of that top portion was increasing with every subsequent acquisition.

 

I think it might be happening because some data from the previous image remained in the FIFO, and reading the next image then gives me an overlapped image.

Message 9 of 20

Yes, it looks like you are reading either insufficient data from the DMA FIFO or too much.  In continuous mode, you will probably get a DMA timeout if you are reading too little.

 

The FPGA should be reporting how many clocks and lines it received for each frame.  You will have to figure out why the camera is outputting those dimensions if they are not what you expect.

 

Multiply the clocks by the taps to find the number of pixels per line.  Then read exactly that many pixels times the number of lines for each frame.
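In Python terms, that bookkeeping is just the following sketch; the 204 pclk and 1088 line figures are the 10-tap numbers from earlier in this thread.

```python
# Sketch of the read-size calculation described above: the FPGA
# reports clocks and lines per frame; multiply clocks by taps to get
# pixels per line, then read exactly pixels x lines from the DMA FIFO.

def frame_pixels(clocks_per_line, lines, taps):
    pixels_per_line = clocks_per_line * taps
    return pixels_per_line * lines

# Example: 204 pixel clocks per line in 10-tap mode, 1088 lines.
print(frame_pixels(204, 1088, 10))  # 2219520 pixels (= bytes at 8 bpp)
```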

Message 10 of 20