Using Labview to process a video stream and detect a color range

My boss has challenged me to come up with a solution to an interesting problem. He wants to know if we could drive a vehicle around with a camera mounted on top, pointing at various angles from the horizon and up, and then process the video to determine how often we are seeing the sky and how often we are seeing a building or some other "blockage".

 

I currently have the Full Development version, and I told him we would probably need the NI Vision module, but maybe someone can suggest another tack, or at least tell me whether I'm headed in the right direction.

 

Don Lafferty

Message 1 of 13

The trick to building a successful video processing system is to keep the image processing to a minimum so that processing time stays low. Before fixing a value, do a test run that calculates only the histogram on the streaming video. See whether you can set a clean threshold that separates the intensity when a building is captured from when there is no building, and check whether you get a consistent value (some range).
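Since LabVIEW is graphical, a block diagram can't be shown here, but the histogram-threshold test above can be sketched in a few lines of Python. The threshold value and the frame format (an 8-bit grayscale NumPy array) are assumptions for illustration, not values from this thread:

```python
import numpy as np

# Hypothetical threshold separating bright sky pixels from darker
# building pixels; in practice it would come from the test-run histogram.
SKY_THRESHOLD = 180

def sky_fraction(frame, threshold=SKY_THRESHOLD):
    """Return the fraction of pixels brighter than the threshold.

    `frame` is an 8-bit grayscale image as a 2-D NumPy array.
    """
    return np.count_nonzero(frame > threshold) / frame.size

# Example: a frame whose top half is bright "sky" and bottom half is dark.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[:120, :] = 220          # bright upper half
print(sky_fraction(frame))    # → 0.5
```

Logging this fraction per frame over a drive would give the "how often do we see sky" statistic directly, provided the histogram test shows the two intensity ranges really are separable.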

 

 

One additional note: don't challenge your boss. Tell him you're not even sure it's doable and that you're just going to give it a try....

Message 2 of 13
Hi all, I have the same problem and a similar project to work on. NI Vision will surely help you with machine vision functions. My starting point is to capture a video stream with GStreamer. So far I haven't found any solution, and none of the TCP or UDP example VIs seem to work. Any help on how to capture video from a video stream? Thanks in advance.
Message 3 of 13

What kind of video do you want to capture?  From what you're saying, I'm assuming that your video is being hosted online somewhere.  What format is it in, and what do you want to do with it in LabVIEW?  Do you want to analyze it or simply display it?

Message 4 of 13
Hello ThOr, the idea is to capture video from a Linux server with GStreamer (http://gstreamer.freedesktop.org/). The command for sending video is the following:

gst-launch videotestsrc ! ffmpegcolorspace ! jpegenc ! multipartmux ! tcpserversink host=127.0.0.1 port=5000

To receive the video (or to do it in LabVIEW 🙂):

gst-launch tcpclientsrc host=127.0.0.1 port=5000 ! multipartdemux ! jpegdec ! autovideosink

But there are no GStreamer binaries for Windows, and using third-party builds is very hard, so I think it would be better to develop something in LabVIEW for displaying the video. I hope that answers your questions. Thanks.
Message 5 of 13

I'm not sure how Gstreamer works as I've never used it before.  What about using something like Windows Media Player with ActiveX controls to point to the URL of the video on the server? I'm thinking something along these lines: https://decibel.ni.com/content/docs/DOC-2207

 

Was this your thinking as well? Or did you want to use the Vision Acquisition Software and/or the Vision Development Module to acquire and process the image? The following link explains the difference between the two: http://digital.ni.com/public.nsf/allkb/F1699570F78FECBB86256B5200665134?OpenDocument

Message 6 of 13

Hello, I think my problem goes beyond that. It is not as simple as playing a file. I also tried opening sockets directly in Java and received the same kind of "string garbage" that I can't reassemble. Maybe the problem is the type of MPEG encoding of the video stream. There are people developing this in pure C and Java, so I'm afraid I'll have to do the same in LabVIEW.

 

See http://code.google.com/p/gstreamer-java/ for further information. After capturing the video, my next step is some machine vision for pattern recognition purposes.

Message 7 of 13

Hello friends, I ran some tests and found that GStreamer sends a flow of bytes which stand for pixel colors. That is, GStreamer sends the video byte by byte (1 byte = one color pixel). The resolution of the image is 320 x 240 pixels, so the size of the image would be 76,800 pixels. Do you think it would be possible to count these bytes (76,800), then save them to a file and display them? Maybe there is a better way to make a video player. It would be wonderful if somebody had LabVIEW code that could be tested. Thanks.

Message 8 of 13

I can't find any code that's readily available, but it seems like you could make a For loop that runs 76,800 times and saves these values into a byte array.  Once the for loop exits, you can save that array to a file and display it with the IMAQ ArrayToColorImage function.  This might be a good way to build up this video frame by frame.
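As a text stand-in for that loop (LabVIEW itself is graphical, so Python is used here): accumulate exactly 76,800 bytes per frame from the stream, then shape them into a 240 x 320 array. The stream object and the frame geometry are taken from the numbers quoted in this thread; everything else is illustrative:

```python
import io
import numpy as np

WIDTH, HEIGHT = 320, 240
FRAME_BYTES = WIDTH * HEIGHT   # 76,800 bytes, one byte per pixel

def read_frame(stream):
    """Read exactly one frame's worth of bytes from a file-like stream
    and return it as a HEIGHT x WIDTH uint8 array, or None if the
    stream ends early.  A TCP socket wrapped with makefile() would
    present the same read() interface."""
    data = bytearray()
    while len(data) < FRAME_BYTES:
        chunk = stream.read(FRAME_BYTES - len(data))
        if not chunk:
            return None
        data.extend(chunk)
    return np.frombuffer(bytes(data), dtype=np.uint8).reshape(HEIGHT, WIDTH)

# Simulate the TCP stream with an in-memory buffer of 76,800 bytes.
fake_stream = io.BytesIO(bytes(range(256)) * 300)
frame = read_frame(fake_stream)
print(frame.shape)  # (240, 320)
```

The key detail the loop handles is that a single read may return fewer bytes than requested, so it must keep reading until the full frame has arrived.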

Message 9 of 13

Hello friends, I can receive the video stream on my laptop, but all I can show is noise. Maybe the problem is the type of image or the order of the data (as far as I know it is JPEG format, 320x240 pixels, 24 bits).

 

I discovered that the images' size changes dynamically (they are not fixed at 320x240 = 76,800 pixels) due to the JPEG compression. But that is not a problem, because I basically assemble the data packets arriving over TCP/IP, identified by their headers ("Content-Type:" and "Content-Length:"). I discard the Content-Type and Content-Length headers, then convert the string into a byte array and, once I have the byte array, plug it into a picture indicator (but I only get white noise).
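That header-splitting step can be sketched as follows. This is a simplified Python illustration, not the poster's VI: it assumes each part carries Content-Type/Content-Length headers terminated by a blank line, as multipartmux emits, and it ignores any multipart boundary lines for brevity:

```python
def extract_parts(buffer):
    """Pull complete JPEG payloads out of a multipart byte buffer.

    Each part is assumed to look like:
        Content-Type: image/jpeg\r\n
        Content-Length: N\r\n
        \r\n
        <N bytes of JPEG data>
    Returns (list_of_payloads, leftover_bytes).
    """
    parts = []
    while True:
        header_end = buffer.find(b"\r\n\r\n")
        if header_end < 0:
            break                       # no complete header yet
        headers = buffer[:header_end].decode("latin-1", "replace")
        length = None
        for line in headers.split("\r\n"):
            if line.lower().startswith("content-length:"):
                length = int(line.split(":", 1)[1].strip())
        body_start = header_end + 4
        if length is None or len(buffer) < body_start + length:
            break                       # wait for more data to arrive
        parts.append(buffer[body_start:body_start + length])
        buffer = buffer[body_start + length:]
    return parts, buffer

# Two fake parts concatenated in one TCP read.
chunk = (b"Content-Type: image/jpeg\r\nContent-Length: 3\r\n\r\nabc"
         b"Content-Type: image/jpeg\r\nContent-Length: 2\r\n\r\nxy")
payloads, rest = extract_parts(chunk)
print(payloads)  # [b'abc', b'xy']
```

Using Content-Length to slice the payload, rather than scanning for the next header, matters because compressed JPEG data can contain byte sequences that look like header text.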

 

I didn't try the IMAQ ArrayToColorImage function because it needs a 2-D array at its input terminal (vertical and horizontal axes), but all I have available is a 1-D byte array from my input string. How can I convert it to 2-D? I don't know whether this is the correct direction or I'm way off.
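For the literal 1-D to 2-D question, a reshape is enough once you have raw decoded pixels; here is a Python sketch using the 320x240 figures from this thread. Note the caveat in the comments: the JPEG payloads coming off the wire must be decompressed first, which would explain the white noise when the raw bytes are fed straight to a picture indicator:

```python
import numpy as np

WIDTH, HEIGHT = 320, 240

def bytes_to_2d(pixel_bytes, width=WIDTH, height=HEIGHT):
    """Reshape a flat 1-D array of raw 8-bit pixel bytes into a
    height x width 2-D array suitable for an ArrayToColorImage-style
    display function.

    Caveat: this only works on *decoded* pixel data. JPEG payloads
    are compressed, so reshaping them directly produces noise; they
    must be decompressed into pixels before this step.
    """
    flat = np.frombuffer(pixel_bytes, dtype=np.uint8)
    if flat.size != width * height:
        raise ValueError("unexpected frame size: %d bytes" % flat.size)
    return flat.reshape(height, width)

raw = bytes(range(240)) * 320      # 76,800 fake decoded pixel bytes
image = bytes_to_2d(raw)
print(image.shape)  # (240, 320)
```

The reshape itself is just bookkeeping (row-major order, height rows of width pixels); the real work in this pipeline is the JPEG decode that has to happen before it.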

 

   I hope I can attach the VI this time.

Thanks.

 

 

Message 10 of 13