Machine Vision

Switching from ChromasenseAPI to NI Stereo Vision?

Hello Community,

 

I'm using a stereoscopic camera, the Chromasense 3DPIXA, to acquire images and then calculate the depth image / 3D image.

 

The problem is that the calculation is nowhere near real-time.

 

Now, Chromasense has everything integrated neatly, with a LabView Toolkit.

What's more, the Chromasense API utilises the GPU (via Nvidia's CUDA).

 

My guess is that switching to LV's built-in Stereo Vision module would only increase the computation time and the complexity of the project, but as of yet I'm not sure.

 

Is there a feasible way to speed up the image computation within LV, other than reducing the resolution or buying better (and more) GPU hardware?

Message 1 of 10

How do you realize the communication with the camera?

Can you upload the minimized VI?

On which system are you executing the program?

 

 

Message 2 of 10

1. Between the "normal" PC and the camera, there's a frame grabber connected via two CameraLink cables.

 

2. The minimized VIs would be NI's Stereo Vision Example (link), NI's Compute Depth Image Example (in action on YouTube), or the Chromasense example ("CS3D simple 3D calculation") that comes with the Chromasense Toolkit, which is installable via the NI Package Manager.

 

3. The system is going to be a high-end PC, which will be adapted to the requirements (if possible), not vice versa. If it takes 4 GeForce GTX TITAN X, so be it.

 

My best guess is that utilizing GPU power from LabView at all requires a lot of work (feeding the CUDA VIs with data) and will not be significantly faster than the software Chromasense wrote for their camera.
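For illustration, here is a rough C++ sketch of the kind of glue that would be involved: a hypothetical DLL export that a Call Library Function Node could call, which just copies an image buffer to the GPU and back. The function name, signature, and the (omitted) processing step are assumptions for the sake of the example, not Chromasense's actual API.

```cpp
// Hypothetical DLL export that a LabVIEW Call Library Function Node could call.
// Assumes the CUDA runtime (cuda_runtime.h, cudart) is available; the function
// name and argument layout are illustrative only.
#include <cuda_runtime.h>
#include <cstdint>

extern "C" __declspec(dllexport)
int ProcessImageOnGpu(const uint8_t* src, uint8_t* dst, int width, int height)
{
    const size_t bytes = static_cast<size_t>(width) * height;

    uint8_t* dSrc = nullptr;
    uint8_t* dDst = nullptr;
    if (cudaMalloc(reinterpret_cast<void**>(&dSrc), bytes) != cudaSuccess)
        return -1;
    if (cudaMalloc(reinterpret_cast<void**>(&dDst), bytes) != cudaSuccess)
    {
        cudaFree(dSrc);
        return -1;
    }

    // Host -> device transfer: this per-frame copy is part of the overhead
    // mentioned above.
    cudaMemcpy(dSrc, src, bytes, cudaMemcpyHostToDevice);

    // ... launch the actual stereo/depth kernel here (omitted) ...
    cudaMemcpy(dDst, dSrc, bytes, cudaMemcpyDeviceToDevice);  // placeholder

    // Device -> host transfer back into the buffer LabView provided.
    cudaMemcpy(dst, dDst, bytes, cudaMemcpyDeviceToHost);

    cudaFree(dSrc);
    cudaFree(dDst);
    return 0;
}
```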

There are probably no exact answers possible, so I'd be glad for any opinions.

Message 3 of 10

Which OS do you have on the computer?

Message 4 of 10

Win 7 Enterprise x64.

Message 5 of 10

Please try to change the process priority of the LabView process and check whether the priority has any effect on the execution speed. Try High as well as Real Time priority.
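If you want to set the priority programmatically rather than via the Task Manager, a minimal Win32 sketch could look like the following. The process ID is a placeholder you would replace with the actual PID of LabVIEW.exe, and the program must run with sufficient rights.

```cpp
// Minimal Win32 sketch: raise the priority class of an already-running process.
// The PID is a placeholder; look up LabVIEW.exe via Task Manager or the
// ToolHelp API and run this elevated.
#include <windows.h>
#include <cstdio>

int main()
{
    const DWORD pid = 1234;  // placeholder: PID of LabVIEW.exe

    HANDLE process = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (process == NULL)
    {
        std::printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    // HIGH_PRIORITY_CLASS is usually safe; REALTIME_PRIORITY_CLASS can starve
    // the rest of the system, so use it only for a short measurement.
    if (!SetPriorityClass(process, HIGH_PRIORITY_CLASS))
    {
        std::printf("SetPriorityClass failed: %lu\n", GetLastError());
    }

    CloseHandle(process);
    return 0;
}
```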

Message 6 of 10

I brought the CPU to 100% usage by running the Prime95 'benchmark' at 'normal' priority.

 

Setting LabView.exe to either 'low' or 'high' priority (lower/higher than Prime95) brought no significant change in execution speed (1817 ms vs. 1873 ms, ~3% difference).

 

-> Does this mean that the LabView 'Compute Depth Image.vi' example already uses the GPU?

Message 7 of 10

I think the problem is not the usage of the GPU. It's more of a Windows thing. Did you also try the real-time priority?

Message 8 of 10

I was not able to test the performance under real-time priority.

 

I was, however, finally able to find a program that correctly shows the GPU load of our NVidia graphics card without requiring installation.

 

Now, by monitoring CPU and GPU usage while running the applications, I found:

 

- NI Stereo Vision does not utilize the GPU. The 10% load comes from continuously re-drawing the front panel.

 

- Chromasens software always utilizes the GPU, at 99% load while the applications are running. This is true both for their own viewer (CS-3D) and for their LabView toolkit, which seems to simply call the supplied DLLs (and therefore only runs in 64-bit LV).
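For reference, the GPU load can also be read programmatically through NVML (the same library nvidia-smi uses). A minimal sketch, assuming the nvml.h header and library from the NVIDIA driver / CUDA toolkit are available and the card is device index 0:

```cpp
// Minimal NVML sketch: print GPU and memory-controller utilization once.
// Assumes nvml.h / nvml.lib from the NVIDIA driver or CUDA toolkit; link nvml.
#include <nvml.h>
#include <cstdio>

int main()
{
    if (nvmlInit() != NVML_SUCCESS)
        return 1;

    nvmlDevice_t device;
    if (nvmlDeviceGetHandleByIndex(0, &device) == NVML_SUCCESS)
    {
        nvmlUtilization_t util;
        if (nvmlDeviceGetUtilizationRates(device, &util) == NVML_SUCCESS)
        {
            // util.gpu should sit near 99% while the Chromasens tools run,
            // and near 0-10% with the NI Stereo Vision example.
            std::printf("GPU: %u%%, memory: %u%%\n", util.gpu, util.memory);
        }
    }

    nvmlShutdown();
    return 0;
}
```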

 

Message 9 of 10

It is always difficult to realize a real-time app on Windows. Windows is not made for it, because its time management is based on time-division multiplexing across all processes.
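A quick way to see that time-slicing in action is to measure how long a nominal 1 ms sleep actually takes. A rough sketch; the observed delay depends on the system timer resolution (about 15.6 ms by default on Windows 7) and is often well above the requested 1 ms:

```cpp
// Rough sketch: measure the worst-case jitter of a nominal 1 ms Sleep on Windows.
// The scheduler, not the application, decides when the thread actually resumes,
// which is why hard real-time behaviour is not achievable on a stock desktop OS.
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);

    double worst_ms = 0.0;
    for (int i = 0; i < 100; ++i)
    {
        QueryPerformanceCounter(&start);
        Sleep(1);  // ask for 1 ms; actual wake-up time is up to the scheduler
        QueryPerformanceCounter(&end);

        double elapsed_ms =
            1000.0 * (end.QuadPart - start.QuadPart) / freq.QuadPart;
        if (elapsed_ms > worst_ms)
            worst_ms = elapsed_ms;
    }

    std::printf("Worst observed delay for Sleep(1): %.2f ms\n", worst_ms);
    return 0;
}
```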

Message 10 of 10