10-16-2015 08:56 AM
Hello Community,
I'm using a stereoscopic camera, the Chromasens 3DPIXA, to acquire images and then calculate the depth image / 3D image.
The problem is that the calculation is nowhere near real-time.
Now, Chromasens has everything integrated neatly in a LabVIEW toolkit.
What's more, the Chromasens API utilizes the GPU (via NVIDIA's CUDA).
I guess switching to LabVIEW's built-in Stereo Vision module would only increase the computation time and the complexity of the project, but as of yet I'm not sure.
Is there a feasible way to speed up the image computation within LabVIEW, other than reducing the resolution or buying better (and more) GPU hardware?
10-20-2015 08:54 AM
How do you communicate with the camera?
Can you upload a minimized VI?
On which system are you executing the program?
10-20-2015 09:21 AM
1. Between the "normal" PC and the camera, there's a frame grabber with two Camera Link cables.
2. The minimized VIs would be NI's Stereo Vision example (link), NI's Compute Depth Image example (in action on YouTube), or the Chromasens example ("CS3D simple 3D calculation") that comes with the Chromasens toolkit, which is installable via the NI Package Manager.
3. The system is going to be a high-end PC that will be adapted to the requirements (if possible), not vice versa. If it takes four GeForce GTX TITAN X cards, so be it.
My best guess is that utilizing GPU power with LabVIEW - at all - requires a lot of work (feeding the CUDA VIs with data) and will not be significantly faster than the software Chromasens wrote for their camera.
There are probably no exact answers possible, so I'd be glad for any opinion.
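To put a rough number on the "feeding the CUDA VIs with data" concern: a back-of-envelope sketch of the per-frame host-to-GPU transfer time. The frame size and bus bandwidth below are assumptions for illustration, not 3DPIXA specs.

```python
# Back-of-envelope: host -> GPU transfer cost per frame.
# All numbers are assumptions, not 3DPIXA specifications:
# a 7000 x 2000 px RGB frame, 8 bit per channel, moved over
# PCIe 3.0 x16 at an effective ~12 GB/s.
width_px = 7000
height_px = 2000
bytes_per_px = 3              # RGB, 8 bit per channel
pcie_bytes_per_s = 12e9       # assumed effective host->device bandwidth

frame_bytes = width_px * height_px * bytes_per_px
transfer_ms = frame_bytes / pcie_bytes_per_s * 1e3
print(f"{frame_bytes / 1e6:.0f} MB per frame, ~{transfer_ms:.1f} ms transfer")
# -> 42 MB per frame, ~3.5 ms transfer
```

Even under these assumptions the raw transfer is only a few milliseconds per frame, so the work of wiring the CUDA VIs is mostly in restructuring the code, not in the data movement itself.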
10-20-2015 09:36 AM
Which OS do you have on the computer?
10-20-2015 09:43 AM
Win 7 Enterprise x64.
10-20-2015 09:51 AM
Please try to change the priority of the LabVIEW process and check whether the priority has any effect on the execution speed. Try High as well as Realtime priority.
10-20-2015 10:31 AM
I brought the CPU to 100% usage - with Prime95 'benchmark' running at 'normal' priority.
Setting LabVIEW.exe to either 'low' or 'high' (higher/lower than Prime95) brought no significant change (1817 ms vs. 1873 ms, ~3% difference) in execution speed.
-> Does this mean that the LabVIEW 'Compute Depth Image.vi' example already uses the GPU?
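For reference, the before/after comparison above can be scripted with a small timing harness; a sketch in Python, where the workload is a hypothetical stand-in for the depth computation and the priority changes are still made externally (e.g. in Task Manager):

```python
import statistics
import time

def median_runtime_ms(workload, repeats=5):
    """Median wall-clock time of `workload` in milliseconds."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        workload()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

# Hypothetical stand-in for one depth-image computation.
workload = lambda: sum(i * i for i in range(100_000))

# Run once per priority setting, changing the process priority in between.
t_low = median_runtime_ms(workload)
t_high = median_runtime_ms(workload)
print(f"difference: {abs(t_high - t_low) / t_low * 100:.1f}%")
```

Taking the median over a few repeats filters out one-off scheduler hiccups, which is what made the 1817 ms vs. 1873 ms comparison meaningful in the first place.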
10-21-2015 03:20 AM
I think the problem is not the usage of the GPU. It's more of a Windows thing. Did you also try the Realtime priority?
10-22-2015 12:55 AM
I was not able to test the performance under real-time priority.
I was, however, finally able to find a program that correctly shows the GPU load of our NVIDIA graphics card without requiring installation.
Now, by monitoring CPU and GPU usage while running the applications, I found:
- NI Stereo Vision does not utilize the GPU. The 10% load comes from continuously re-drawing the front panel.
- Chromasens software always utilizes the GPU, at 99% load while the applications are running. This is true both for their own viewer (CS-3D) and for their LabVIEW toolkit, which seems to simply call the supplied DLLs (and therefore only runs in 64-bit LabVIEW).
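For anyone who wants to reproduce the measurement: the GPU load can also be read without installing a monitoring tool, assuming nvidia-smi is on the PATH (it ships with the NVIDIA driver). A minimal Python sketch:

```python
import subprocess

def parse_utilization(csv_text):
    """Parse nvidia-smi CSV output: one integer percentage per GPU line."""
    return [int(line.strip()) for line in csv_text.splitlines() if line.strip()]

def gpu_utilization():
    """Current GPU load in percent, one entry per installed GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_utilization(out)
```

parse_utilization("99\n") returns [99]; polling gpu_utilization() in a loop while the CS-3D viewer runs should show the sustained 99% described above.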
10-23-2015 07:52 AM
It is always difficult to realize a real-time app on Windows. Windows is not made for it, because its time management is based on time-division multiplexing across all processes.
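That time-slicing is easy to observe from user code. A small Python sketch (nothing LabVIEW-specific assumed) that measures how far past a requested 1 ms sleep the scheduler actually wakes the process; on a stock Windows timer with its ~15.6 ms quantum, the worst case is typically many milliseconds:

```python
import time

def worst_sleep_overshoot_ms(requested_ms=1.0, repeats=200):
    """Worst-case overshoot (ms) past a requested sleep across `repeats` runs."""
    worst = 0.0
    for _ in range(repeats):
        t0 = time.perf_counter()
        time.sleep(requested_ms / 1e3)
        elapsed_ms = (time.perf_counter() - t0) * 1e3
        worst = max(worst, elapsed_ms - requested_ms)
    return worst

print(f"worst overshoot over 200 sleeps: {worst_sleep_overshoot_ms():.2f} ms")
```

A hard real-time OS bounds that overshoot by design; a desktop Windows install does not, which is why the depth computation's latency can never be fully guaranteed there.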