10-25-2010 03:36 PM
OK, I finally got to the computer that had the code I had been working on, so here it is.
I would like to be able to define the sampling rate of the incoming and outgoing signal if possible.
Again, this code works, but the two outputs do not update when the input signal changes. I tried the most recent code you posted, but I feel like it is missing pieces. For example, how will it know which inputs to read from and which outputs to write to?
Thanks again for the help!
-Jason
10-25-2010 04:01 PM - edited 10-25-2010 04:02 PM
The missing pieces are the task configuration VIs, but there isn't really any point in setting the rates. This code is software timed. It should be okay for maybe 100 Hz signals, but this is not really the right hardware for doing this. As written, you are reading a single point from the analog input and outputting a single point on each analog output. The signals should track together (a slight delay between input and output, but nothing severe), but you will have horribly aliased signals if you input a high-frequency signal. I would test with DC signals if you really want to see it work.
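To be concrete about what those missing pieces do, the loop boils down to something like the rough Python/nidaqmx sketch below (the LabVIEW DAQmx VIs wrap the same calls; "Dev1" and the channel names are placeholders, not your actual wiring):

```python
import nidaqmx

# Software-timed, point-by-point pass-through. The channel setup is the
# "missing piece" that tells DAQmx which input to read and which outputs
# to write. No sample clock is configured, so the loop (and Windows)
# sets the effective sampling rate.
with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")    # placeholder input channel
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0:1")  # placeholder pair of outputs

    while True:                        # stop however you like
        sample = ai.read()             # one point, on demand
        ao.write([sample, sample])     # same point to both outputs
```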
Now, if you were using an FPGA, you could get really fast single-point reads and writes. Even an RT system would perform much better. A PC is really not good at this. If you are alright with a significant (but constant) delay between the input and the output, you could do this on a PC: just read in a multipoint buffer and write out a buffer inside the loop. The appropriate examples to look at would be continuous acquisition and arbitrary waveform generation.
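If you go the buffered route, the shape of it would be something like the following (again just a rough Python/nidaqmx sketch with placeholder names and rates; in LabVIEW it's the continuous acquisition and continuous generation examples wired together):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

RATE = 10_000    # samples/s, placeholder
CHUNK = 1_000    # samples moved per iteration

with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS,
                                  samps_per_chan=10 * CHUNK)

    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS,
                                  samps_per_chan=10 * CHUNK)
    # Don't let the card replay old samples; we keep feeding the buffer instead.
    ao.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION

    ai.start()
    # Prime the output buffer with a couple of chunks, then keep moving chunks
    # from the input buffer to the output buffer. The amount primed sets the
    # constant input-to-output delay.
    ao.write(ai.read(number_of_samples_per_channel=2 * CHUNK), auto_start=True)
    while True:
        ao.write(ai.read(number_of_samples_per_channel=CHUNK))
```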
What are your real requirements?
Chris
10-25-2010 04:36 PM
Hey Chris, my requirements are reading in up to a 3 kHz signal and outputting it on two AOs in real time. The system as a whole is a DSP-based signal routing system. So I can't simulate this with a PC with a PCI E4 card? The whole system is dependent on the signals flowing in near real time. Thanks, Jason
10-25-2010 09:58 PM
Why are you creating the channels inside the loop? That should be done only once, before the loop starts, like Tbob showed in message 6. I think you'll create a memory leak the way you are doing it now, as you are continually allocating resources to the task that never get cleaned up.
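In other words, the shape should be: create the task and channels once, loop over just the read (or write), then clear the task once at the end. In rough Python/nidaqmx terms (placeholder names):

```python
import nidaqmx

task = nidaqmx.Task()                              # create the task once
task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # add channels once, before the loop
try:
    while True:
        value = task.read()    # only the read belongs inside the loop
        # ... use value ...
finally:
    task.close()               # release the task's resources once, at the end
```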
10-25-2010 10:17 PM
10-26-2010 08:04 AM
With this card you can acquire reasonably fast and generate signals reasonably fast, but you can't do those tasks point-by-point, which is really what you need for your application.

Typically, to continuously acquire data into the computer, you set up an acquisition that is hardware timed. That is, the clock on the card times the samples and puts them into a common buffer that is accessible to the computer. From the computer, you pull data out of that buffer to display or save. It is not efficient at all to try to get each sample as it appears in the buffer; the way to do it is to take all of the samples out of the buffer at a reasonable time interval (I usually use 0.1 second). So, for an acquisition rate of 100 kS/s, you are pulling 10k samples from the buffer 10 times a second.

A PC running Windows is not "real-time". What this means is that there is no guarantee that any particular task will occur in a defined period of time. Sometimes it will take Windows longer than 0.1 seconds to get the data because there are other tasks going on that take precedence over what you are asking the computer to do. In my example of 100 kS/s with reads occurring every 0.1 second, I would configure the buffer to be about 100k samples in size. That means I've got up to a second to get samples out of the buffer before the buffer is overrun and an error is produced.
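Sketched out (Python/nidaqmx again, placeholder channel name, using the same numbers as above), continuous hardware-timed acquisition looks roughly like this:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 100_000          # 100 kS/s
READ_SIZE = 10_000      # 0.1 s worth of samples per read
BUFFER_SIZE = 100_000   # ~1 s of headroom before the buffer overruns

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # placeholder channel
    # Hardware timed: the card's sample clock fills the buffer on its own.
    task.timing.cfg_samp_clk_timing(RATE,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=BUFFER_SIZE)
    task.start()
    for _ in range(100):   # ~10 s of acquisition
        # Pull 0.1 s of data out of the buffer at a time. If Windows stalls,
        # the ~1 s buffer gives it time to catch up before an overflow error.
        data = task.read(number_of_samples_per_channel=READ_SIZE)
        # ... display or save 'data' ...
```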
Analog outputs work the same way. You set up the generation to be hardware timed, and you define a buffer where you start dropping in samples. The card pulls data from the buffer as it needs it, and you just need to make sure you are putting data into the buffer faster than it is taking it out. Again, you can't do this efficiently one point at a time.
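The output side, in the same rough terms (placeholder channel, and a placeholder 1 kHz sine standing in for the data being dropped into the buffer):

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

RATE = 100_000
CHUNK = 10_000   # write 0.1 s of samples at a time

t = np.arange(CHUNK) / RATE
chunk = np.sin(2 * np.pi * 1_000 * t)   # placeholder 1 kHz sine

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")   # placeholder channel
    task.timing.cfg_samp_clk_timing(RATE,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=10 * CHUNK)   # output buffer size
    # Don't let the card replay old data; we promise to keep the buffer fed.
    task.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION

    task.write(chunk, auto_start=True)   # prime the buffer and start the clocked generation
    for _ in range(100):                 # ~10 s of output
        task.write(chunk)                # blocks until there's room, i.e. paced by the card
```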
For your application, you want to minimize the lag between when samples are read from the acquisition buffer and when samples are put into the analog output buffer. Ideally, each point would be read and transferred to the output buffer immediately. You can do this on an FPGA (easily at 100 kS/s rates) or on a real-time PC (probably at tens of kS/s, maybe 100k), but on a non-real-time PC you are pushing it trying to route 1 kS/s this way. Maybe it's a little better than that, but it will not be real-time (i.e., deterministic), so if someone wiggles the mouse, your sample rates will be fried. This is so slow because the data has to come in through the acquisition card, be transferred to the PC, wait on your software-timed code to be put back down to the card, and then activate the generation. On an FPGA, this can all be routed within the hardware, so you are dealing with hardware timing the whole way.
It's a little confusing, but if you experiment with it, you will see what I mean. Take a look at how quickly you can generate a signal when you only provide one point at a time, as opposed to streaming with a buffer.
Chris
10-26-2010 09:11 AM
Hey Chris, thank you so much for the explanation! So what I'm hearing is that the system we're using is not designed for DSP-type applications. It's more for data acquisition and control, right?
So what basic hardware would we need to do this at sampling rates of around 10 kS/s or higher with multiple inputs and outputs?
Also, is programming these RT and FPGA devices in LabVIEW similar at all to what I have been doing? i.e., is there a big learning curve going to these devices?
Thanks so much for your help!
-Jason
10-26-2010 09:54 AM
Your assessment is essentially right. I would say your hardware is best for acquisition. Control leads exactly where you are going and pretty much demands an RT solution. Your desired rates aren't crazy. You might be able to get away with installing ETS (NI's RT OS for desktops) on the desktop you are using, along with the card you already have. I'm not sure if that card is supported on RT targets (it's pretty old). Also, you'll have to see if the desktop you are using is compatible with NI's RT option; there is a tool somewhere on the site that lets you check your hardware. As for programming, LabVIEW for RT targets is very similar to LabVIEW for Windows, but you do need access to the LabVIEW RT module. Your local NI field rep should be able to help you with all of that.
If you wanted to do a lot of work like this, it may be a good idea to invest in an R Series card (the 7833R is one we use). You'd need the FPGA module to program it, but you'll get the best performance this way. Programming is similar, but not exactly the same.
Chris
10-26-2010 11:53 AM
Sounds good Chris.
OK, I have been looking around for LabVIEW-programmable DSP solutions, because the end goal would be to have this system run standalone on a board of some sort.
This company "Sheldon Instruments" seems to have what I need: a LabVIEW-programmable DSP carrier with your choice of IO module to attach to it. It says it can handle either blocks of data or a point-to-point configuration. A basic setup for non-standalone testing looks to be around $1700 (a PCI carrier and a 16AI/8AO IO module).
From what I can tell, this is exactly what I need: a high number of analog IO and the ability to handle all of it in real time. The IO module appears to mate to the PCI DSP carrier card, and you have a 100-pin cable to a breakout box for the IO connections.
The reason I need several analog IO is that what I originally described is only an eighth of the total system. It will be used for 4-wire, full-duplex, redundant communication paths.
Can you back up my assumptions here? Do you think the hardware from Sheldon should do the job?
Thanks,
Jason
10-26-2010 12:22 PM
Based on your description, it sounds reasonable. I don't have any experience with this sort of DSP solution though. I would run through your application with the vendor and see what they say. It's definitely competitive cost-wise.
Chris