12-15-2010 09:14 PM
The Boolean local variables you use for controlling the start-up/shut-down of loops are probably OK. Some will argue that queues are a better approach for that sort of communication, but that might be overkill in this scenario. All other uses of local variables can be dropped in favor of wires or wire branches. Boolean controls that are only read from a single location, like your SendMsg Boolean, can use a latched mechanical action instead of a switched action to accomplish the same thing.
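If you do want to try the queue approach, here is a minimal text-language sketch of the idea (Python, since a LabVIEW diagram can't be pasted as text; the names are purely illustrative): the producer enqueues a single stop command and the consumer loop exits when it sees it, instead of both loops polling a shared Boolean.

```python
import queue
import threading
import time

def worker(cmd_queue: "queue.Queue[str]") -> None:
    """Consumer loop that runs until it receives a 'stop' command."""
    while True:
        try:
            # Poll for a command without blocking the loop's real work.
            cmd = cmd_queue.get(timeout=0.1)
        except queue.Empty:
            cmd = None
        if cmd == "stop":
            break
        # ... do one iteration of the loop's real work here ...

# Producer side: enqueue a single 'stop' message instead of flipping a shared flag.
commands: "queue.Queue[str]" = queue.Queue()
t = threading.Thread(target=worker, args=(commands,))
t.start()
time.sleep(1.0)          # let the worker run for a while
commands.put("stop")     # equivalent of writing True to the shutdown Boolean
t.join()
```

In LabVIEW the same pattern would be an Enqueue Element in the producer loop and a Dequeue Element with a timeout in the consumer loop.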
For information on how to transfer images through Variables, see the following link.
The Sound Input Read VI in your program has an unwired number of samples/ch input. The default value for this input when unwired is 10,000 samples. To see this, just open the front panel of the VI.
Regarding the buffering, I was trying to say that your update rates and buffer sizes (in terms of elements) should be the same for both the vision and audio streams. If you're updating your images 30 times a second, then you should also read and send sound data 30 times a second. In this case that would be reading 15,000 / 30 = 500 samples' worth of data each loop iteration. This should help get rid of your initial phase delay.
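As a quick sanity check on the numbers (the sample rate and frame rate are the ones from this thread):

```python
# Matching the audio read size to the video update rate.
sample_rate = 15_000      # audio samples per second (sound card)
update_rate = 30          # video frames per second, and loop iterations per second

samples_per_iteration = sample_rate // update_rate    # 500 samples per read
loop_period_ms = 1000 / update_rate                   # ~33.3 ms per iteration

print(samples_per_iteration, round(loop_period_ms, 1))   # 500 33.3
```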
However, you should also make sure the buffer sizes are set equivalently or you also risk getting out of phase if at any time you ever lose data in the stream. For instance, let's assume you've disabled buffering on the vision stream and are buffering 10 elements on the audio stream where each audio element is a waveform representing 1/30th of a second of audio data. As long as you don't lose any data elements in either stream, things will stay in sync.
However, let's assume the receiving client application temporarily loses connection with the server, or there is a spike in congestion such that the server can't send out the data before receiving new data from the sending client application. In this case, the server will be able to maintain up to a 1 second backlog or history of the audio stream, but there will only be a 1/30th of a second history for the video. Once the receiving client application starts receiving data again, the audio stream will appear to lag the video stream by ~1 second.
Since the element type for your audio stream is already an array of waveforms, you're effectively still providing some amount of buffering even if you disable buffering at the number-of-arrays-of-waveforms level. It could be that a single waveform element is sufficient for buffering your audio stream the same way a single picture image is sufficient for your vision stream.
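A minimal sketch of the arithmetic, assuming the backlog a stream can hold is simply its buffer depth multiplied by the time each element covers (the depths are the ones from the example above):

```python
# Back-of-envelope look at why unequal buffer depths let the streams drift apart
# after a dropout.
element_duration_s = 1 / 30        # each element covers one loop iteration
audio_buffer_elements = 10         # buffered arrays-of-waveforms on the audio stream
video_buffer_elements = 1          # effectively unbuffered video stream

audio_backlog_s = audio_buffer_elements * element_duration_s   # audio history that survives
video_backlog_s = video_buffer_elements * element_duration_s   # video history that survives

# Worst case after a temporary disconnect: the audio that survived in the buffer
# plays back while the corresponding video frames were already discarded.
worst_case_lag_s = audio_backlog_s - video_backlog_s
print(f"audio can lag video by up to {worst_case_lag_s:.2f} s after a dropout")
```

The point is that whichever stream can hold more history will appear to lag by roughly that difference once data starts flowing again.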
The last point I was trying to make is that you're attempting to synchronize and phase align two different data streams over time that are running off of different time sources and start events or triggers. If this were an analog input/output data acquisition problem using one or more of our DAQ devices, I would tell you to configure the acquisition such that you're exporting and sharing a common sample clock and start trigger signal from one of the devices. This ensures the data streams start at the same instant in time (eliminating any initial phase offset) and don't drift apart from each other over time due to the accumulation of timing error resulting from inaccuracies in the clock used by each device.
In your case, you have a sound card sampling at 15,000 samples/s and a USB camera sampling at 30 frames/s. The clocks used by each device won't be perfect, and they certainly won't measure the passage of time at exactly the same rate. This creates a problem of drift over time. Also, you're relying on software start calls to start both the vision and audio streams at the same time. On Windows, this will probably result in a best-case initial phase offset of several milliseconds and a worst case of tens to hundreds of milliseconds. These times may vary a little or a lot depending on the hardware/drivers used for the devices, so your mileage may vary. It's completely possible that these offsets and inaccuracies are small enough not to be noticeable in your application, but you'll have to do your own testing to determine that. It's also possible that the drivers are doing something clever to phase lock the acquisition to the system clock in the PC, but I sort of doubt it.
Unfortunately, if this does become a problem, I don't have many suggestions for you as I don't really have any practical experience working with such devices at that level. I'm just trying to point out possible causes for how the data streams could get out of sync so you don't spend all your time debugging Variable buffering only to find the problem lies somewhere else. It might be useful to write a test harness that allows you to swap out the audio and video sources with predefined data from a file. This will allow you to test without hardware so you can better isolate networking issues from hardware/driver issues as you develop your application.
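To get a feel for the drift part, here is a rough sketch; the parts-per-million clock tolerances below are assumed purely for illustration and do not come from any datasheet:

```python
# How two independent device clocks accumulate timing error over a session.
sound_card_ppm = 50     # assumed sound card oscillator error, parts per million
camera_ppm = -100       # assumed camera oscillator error, parts per million

def drift_seconds(elapsed_s: float) -> float:
    """Worst-case divergence between the two streams after elapsed_s of real time."""
    return abs(sound_card_ppm - camera_ppm) * 1e-6 * elapsed_s

for minutes in (1, 10, 60):
    print(f"{minutes:3d} min -> {drift_seconds(minutes * 60) * 1000:.1f} ms of drift")
```

With a 150 ppm difference, the streams would diverge by roughly half a second over an hour, which is why a shared clock and trigger matter for long-running acquisitions.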
03-04-2011 01:37 PM - edited 03-04-2011 01:40 PM
Hi!
Thanks for the detailed replies, and sorry for the delayed acknowledgement. First, it took me some time to digest all the explanations, and second, it took some more time to upgrade my systems and then adapt the programs to the new version (as I had feared, the old programs stopped working in the new installation). I wanted to reply to you only after doing enough tests and checking the options. Also, due to other work at the university, I could not focus well on this task. Now back to it.
OK, I have some questions and some new situations, and I am attaching the new versions (either in this post or in the post in the other thread) so you can check what is wrong.
I changed the SendMsg Boolean control back to the "latched" mechanical action. Originally I had it as "switched" because the latched functionality was creating two "Value Change" events (first as "true" when clicked and then an automatic "false"), but I think the case structure inside the event resolved the situation.
That link about transferring images through variables using the IMAQ data type was a good one. I have implemented it, but with some issues.
Ok, now about the detailed explanation of synchronizing sound and image data.
If you're updating your images 30 times a second, then you should also read and send sound data 30 times a second. In this case that would be reading 15,000 / 30 = 500 samples' worth of data each loop iteration. This should help get rid of your initial phase delay.
And how do I do that? By putting a "Wait Until Next ms Multiple" node in the loop for, say, 1000/30 milliseconds (1/30th of a second), so that the loop runs 30 times in 1 second? And do this in both the audio and video loops? Or some other way?
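For my own understanding, this is roughly what I mean, written as a text-language sketch (Python; the function name wait_until_next_ms_multiple is just my own rough analogue of the LabVIEW node, not a real API):

```python
import time

period_ms = 1000 / 30   # ~33.3 ms: one video frame / one audio chunk per iteration

def wait_until_next_ms_multiple(period_ms: float) -> None:
    """Sleep until the monotonic millisecond counter reaches the next multiple
    of period_ms, roughly like LabVIEW's 'Wait Until Next ms Multiple' node."""
    now_ms = time.monotonic() * 1000.0
    next_tick = (now_ms // period_ms + 1) * period_ms
    time.sleep((next_tick - now_ms) / 1000.0)

for _ in range(30):                 # roughly one second's worth of iterations
    # ... read/send one chunk of audio or one video frame here ...
    wait_until_next_ms_multiple(period_ms)
```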
However, you should also make sure the buffer sizes are set equivalently or you also risk getting out of phase if at any time you ever lose data in the stream.
Yes, this is critical; thanks for pointing it out. Actually, I thought that if one of the streams ran out of capacity, it would simply lose some data and then get back in line, but perhaps it won't happen like that; the gap would normally grow.
In the case of the Waveform data type, I don't understand a few things. I tried to read some math documents to understand the meaning of Array of Double Waveform but didn't get much help, so basically I don't know the real meaning of each of the buffer fields for this type of data. That makes it a bit tricky to control things; I will have to shoot blindly. Besides this, I had some difficulty following your explanation as well.
For example,
"...and are buffering 10 elements on the audio stream where each audio element is a waveform representing 1/30th of a second of audio data."
What does "10 elements on the audio stream" mean? Do you mean the data of one iteration (considering each loop runs 30 times per second)? And how do I count that in terms of the buffer?
OK, skipping the buffer calculation, the above phrase means, I suppose, that I am buffering data worth 1/3rd of a second (10 * 1/30).
"...In this case, the server will be able to maintain up to a 1 second backlog or history of the audio stream..."
How come 1 second? Shouldn't it be 1/3rd of a second backlog?
When you say a single waveform element, I don't understand how to interpret that in terms of the waveform data fields. What do I need for my audio data, where I have:
Sample rate = 15000
Number of channels = 1
Bits per sample = 8
Number of samples / channel = 25000
How does this translate into Number of arrays, Number of waveforms, and Points per waveform?
Is Number of arrays equal to the number of channels, or to the number of waveforms? On a waveform graph, the points I see are the points per waveform and the lines I see are the waveforms, so what is the number of arrays?
Sorry, but I need a simple (maybe short) explanation in layman's terms.
I tried to work on the suggestion of starting the capture on both the audio and video devices at the same time and came up with a simple-looking solution: a "Flat Sequence" that goes to the next frame once both devices have started. But I could not do the other testing you suggested about sending a file to the other computer.
OK, I will post the programs with their explanations in the other thread, along with questions related to the explanation there.
I'm looking forward to your comments/replies on the above discussion, and I appreciate anyone's effort to help me in this situation.
Thanks again.
03-07-2011 04:57 PM
Hi Vaibhav,
Looks like this thread has been going on for quite some time. Is it possible you could provide a brief summary reminding everyone exactly what you are trying to do and where the holdups are? There is a lot of information here, and having it summarized would be very helpful rather than having to read through the entire thread.
Thanks!
Jon S
03-08-2011 07:01 AM
Hi Jonathan,
Thanks for the comment and showing interest.
To summarize the posts:
The thread creator, "deskpilot", was looking for a buffering method for audio data (Array of Double Waveform) and for clarification of some doubts about the functionality of Data Socket and Shared Variables.
I am running into the same situation.
I was using DS widely and SV in only a few cases. When a problem occurred while simultaneously accessing a DS item from two functions, an error occurred, so I had to start working with SV, and it seemed to me that I landed in the exact same situation as deskpilot.
In this particular thread, continuing the original discussion, I started looking for a solution regarding buffering of Array of Double Waveform (ADW) type shared variables.
In another, simultaneous thread, I started looking for a solution for changing the buffer allocation of ADW-type SVs (and creating them programmatically), because in LV 8.6 the online creation of SVs in processes did not offer the ADW type option, only the offline method did. Now I am using LV 2010, which has this possibility, but my requirements have grown too (I need to create an SV with the Image data type). 🙂 The discussion regarding this issue is in that thread.
reddog has been helping me in both of these threads, and after he gave some tips last time, I wanted to implement them and then ask about any remaining doubts.
OK, regarding the situation in this thread: I am trying to understand the format of the ADW data type and its corresponding values in the buffer allocation.
After the discussion, reddog explained the relation between the buffer allocation fields in the project's SV properties and the programmatic property fields.
But I still need to understand the meanings of No. of Arrays, No. of Waveforms, and Points per Waveform in terms of an audio data type with 1 channel, 8 bits, and 15000 samples/second. How do I organize/calculate the buffer for these fields?
What should No. of Arrays (Network.BuffSize), No. of Waveforms (Network.ElemSize), and Points per Waveform (Network.PointsPerWaveform) be?
I understand that these values are case dependent, but as I wrote above, I just need to understand their interpretation in terms of the fields of the ADW data type.
Also, after reddog's suggestion about synchronization, I made some changes to the program, and I now have two versions of the programs, one with synchronization and another without. I tried some simple synchronization techniques. The programs are in that other thread, with their explanations in my post.
Regarding my queries on how to create these variables, and how to deploy and access them (using the Shared Variable API), I will write a summary in that thread.
Thank you, and I hope this summarizes it well. The detailed questions in my previous post in this thread are for reddog's comments from his last post; to answer them, it would be necessary to read their context.
03-08-2011 07:26 AM