05-17-2010 07:22 PM
My boss has challenged me to come up with a solution to an interesting problem. He wants to know if we could drive a vehicle around with a camera mounted on top pointing at various angles from the horizon and up and then process the video to determine how often we are seeing the sky and how often we are seeing a building or some other "blockage".
I currently have the Full Development version, and I told him we would probably need the NI Vision module, but maybe someone can suggest another tack, or at least tell me whether I'm on the right track.
Don Lafferty
05-17-2010 11:56 PM
The trick to building a successful video processing system is to keep the image processing to a minimum, so the processing time stays as low as possible. Before settling on a value, do a test run that calculates only the histogram on the streaming video. See if you can set a clean threshold that separates the intensity when a building is in frame from when there is none, and check whether you get a consistent value (some range).
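To illustrate the histogram-threshold idea (in Python/NumPy rather than LabVIEW, since G code can't be shown inline), here is a minimal sketch. The threshold value and the synthetic frame are placeholders; a real threshold has to be tuned against actual footage, as suggested above.

```python
import numpy as np

def sky_fraction(gray_frame, threshold=200):
    """Estimate the fraction of pixels brighter than `threshold`.

    Sky pixels in a daytime frame are usually much brighter than
    building facades, so a simple intensity threshold on the grayscale
    histogram can separate the two. The value 200 (of 255) is an
    assumption for illustration, not a tuned number.
    """
    hist, _ = np.histogram(gray_frame, bins=256, range=(0, 256))
    bright = hist[threshold:].sum()   # pixels classified as "sky"
    return bright / gray_frame.size

# Synthetic 240x320 frame: top half bright "sky", bottom half dark "building".
frame = np.empty((240, 320), dtype=np.uint8)
frame[:120, :] = 230   # sky
frame[120:, :] = 60    # building
print(sky_fraction(frame))   # -> 0.5
```

If the per-frame fraction is logged over the drive, the "how often do we see sky" question reduces to averaging this number over time.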
One additional note: don't treat it as a challenge from your boss. Tell him if even you feel it isn't doable, and that you're just going to give it a try...
06-16-2011 08:14 AM
06-20-2011 04:10 PM - edited 06-20-2011 04:12 PM
What kind of video do you want to capture? From what you're saying, I'm assuming that your video is being hosted online somewhere. What format is it in, and what do you want to do with it in LabVIEW? Do you want to analyze it or simply display it?
06-21-2011 03:49 PM
06-22-2011 07:42 PM
I'm not sure how Gstreamer works as I've never used it before. What about using something like Windows Media Player with ActiveX controls to point to the URL of the video on the server? I'm thinking something along these lines: https://decibel.ni.com/content/docs/DOC-2207
Was this your thinking as well? Or did you want to use the Vision Acquistion Software and/or Vision Development Module to acquire and process the image? The following link explains the difference between the two: http://digital.ni.com/public.nsf/allkb/F1699570F78FECBB86256B5200665134?OpenDocument
06-27-2011 01:20 PM
Hello, I think my problem goes beyond that. It is not as simple as playing a file. I also tried opening sockets directly with Java and received the same kind of garbage string that I can't reassemble. Maybe the problem is the type of MPEG encoding and video streaming. There are people developing this in pure C and Java code, so I'm afraid I'll have to do the same under LabVIEW.
See http://code.google.com/p/gstreamer-java/ for further information. After video capture, my next step is to do some machine vision for pattern recognition purposes.
06-28-2011 12:06 PM
Hello friends, I ran some tests and found that gstreamer sends a flow of bytes which stand for pixel colors. That is, gstreamer sends the video byte by byte (1 byte = one pixel's color). The resolution of the image is 320 x 240 pixels, so the size of the image would be 76,800 pixels. Do you think it would be possible to count these bytes (76,800), then save them to a file and display them? Maybe there is a better way to make a video player. It would be wonderful if somebody had LabVIEW code that could be tested. Thanks.
06-28-2011 06:22 PM
I can't find any code that's readily available, but it seems like you could make a For Loop that runs 76,800 times and saves these values into a byte array. Once the loop exits, you can save that array to a file and display it with the IMAQ ArrayToColorImage function. This might be a good way to build up the video frame by frame.
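The loop described above can be sketched like this (again in Python as a stand-in for the LabVIEW block diagram). One caveat worth noting: a TCP read can return fewer bytes than requested, so the loop should accumulate until the full 76,800 bytes of a frame have arrived rather than assuming one read per frame.

```python
import io
import numpy as np

WIDTH, HEIGHT = 320, 240
FRAME_BYTES = WIDTH * HEIGHT   # 76,800 bytes, one byte per pixel

def read_frame(stream):
    """Collect exactly one uncompressed frame's worth of bytes.

    Mirrors the suggestion above: loop until 76,800 bytes have
    arrived, gather them into an array, then hand the array to the
    display step (IMAQ ArrayToColorImage in LabVIEW). Reads are
    repeated because a network read may come back short.
    """
    buf = bytearray()
    while len(buf) < FRAME_BYTES:
        chunk = stream.read(FRAME_BYTES - len(buf))
        if not chunk:
            raise EOFError("stream ended mid-frame")
        buf.extend(chunk)
    return np.frombuffer(bytes(buf), dtype=np.uint8).reshape(HEIGHT, WIDTH)

# Simulate a network stream carrying one flat gray frame.
fake_stream = io.BytesIO(bytes([128]) * FRAME_BYTES)
frame = read_frame(fake_stream)
print(frame.shape)   # -> (240, 320)
```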
06-30-2011 04:03 AM
Hello friends, I can receive the video stream on my laptop, but I can only display noise. Maybe the problem is the type of image or the order of the data (as far as I know it is JPEG format, 320x240 pixels, 24 bits).
I discovered that the images' size changes dynamically (they are not fixed to 320x240 = 76,800 pixels) due to the JPEG compression. But that is not a problem, because I basically assemble the data packets arriving over TCP/IP, identified by headers ("Content-Type:" and "Content-Length:"). I discard the Content-Type and Content-Length headers. After this
I convert the string into a byte array, and once I have the byte array, I plug it into a picture indicator (but I see only white noise).
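Since the frames vary in size, splitting on the Content-Length value is more reliable than counting a fixed 76,800 bytes. A rough Python sketch of that parsing step (the exact header layout is an assumption based on common MJPEG-over-HTTP camera streams; individual firmware differs):

```python
def extract_jpeg_frames(data):
    """Split an MJPEG-over-HTTP byte stream into individual JPEG frames.

    Each multipart section carries `Content-Type:` and `Content-Length:`
    headers followed by a raw JPEG payload. Because JPEG frames vary in
    size, we read the Content-Length value to know where each payload
    ends, instead of assuming a fixed frame size.
    """
    frames = []
    pos = 0
    while True:
        marker = data.find(b"Content-Length:", pos)
        if marker == -1:
            break
        line_end = data.find(b"\r\n", marker)
        length = int(data[marker + len(b"Content-Length:"):line_end].strip())
        # Payload starts after the blank line that ends the headers.
        body = data.find(b"\r\n\r\n", line_end) + 4
        frames.append(data[body:body + length])
        pos = body + length
    return frames

# Two fake "JPEG" payloads wrapped in minimal multipart headers.
part = b"--boundary\r\nContent-Type: image/jpeg\r\nContent-Length: %d\r\n\r\n%s\r\n"
stream = (part % (4, b"AAAA")) + (part % (6, b"BBBBBB"))
print([len(f) for f in extract_jpeg_frames(stream)])   # -> [4, 6]
```

Each extracted payload is still compressed JPEG data, which is likely why the picture indicator shows noise: it has to be decoded to raw pixels before it can be displayed.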
I didn't try the IMAQ ArrayToColorImage function because it needs a 2D array at its input terminal (vertical and horizontal axes), but I only have a 1D byte array from my input string. How can I convert it to 2D? I don't know whether this is the right direction or I'm way off.
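On the 1D-to-2D question: once you have decoded pixel data (not the raw JPEG bytes), the conversion is a plain row-major reshape, height first. A small NumPy sketch of the shapes involved (LabVIEW's Reshape Array function does the same job on the block diagram):

```python
import numpy as np

WIDTH, HEIGHT = 320, 240

# Stand-in for the received bytes: 8-bit case, one byte per pixel.
flat = np.zeros(WIDTH * HEIGHT, dtype=np.uint8)
image2d = flat.reshape(HEIGHT, WIDTH)      # 240 rows of 320 pixels
print(image2d.shape)   # -> (240, 320)

# For 24-bit RGB the flat buffer holds 3 bytes per pixel, so the
# shape becomes (height, width, 3); for IMAQ ArrayToColorImage the
# three bytes of each pixel are typically packed into a single U32.
rgb_flat = np.zeros(WIDTH * HEIGHT * 3, dtype=np.uint8)
rgb = rgb_flat.reshape(HEIGHT, WIDTH, 3)
print(rgb.shape)       # -> (240, 320, 3)
```

Note this only works on decompressed pixels; reshaping the JPEG byte string itself would still display as noise.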
I hope I can attach the VI this time.
Thanks.