The whole application is a real-time system that processes about a thousand signals and reduces them to a handful of numbers indicating the physical position of one or more objects. The position needs to be updated at the sampling frequency.
At each sampling interval, an array of data is read from the DAQ; this array contains one sample from each signal. (Our own circuitry scans the signals to get them into the DAQ at the appropriate times.) The array then needs to be pushed through a pipeline of processing stages for filtering, averaging, and other processing, all while the DAQ accumulates the array for the next sampling interval. We will apply as many processors as needed to the stages of the pipeline to get the required performance.
Most processing stages need to retain state from a handful of previous samples in order to filter or otherwise process the current sample. Since the samples are organized into arrays, this means retaining the arrays from previous sampling intervals.
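To make the state-retention idea concrete, here is a minimal sketch of one such stage in Python (purely illustrative, not our actual code; the window length and the averaging operation are placeholders for whatever a real stage does):

    from collections import deque
    import numpy as np

    class MovingAverageStage:
        # Hypothetical pipeline stage: averages each signal over the last
        # `window` sampling intervals.  The retained state is simply the
        # deque of previous sample arrays.
        def __init__(self, window=4):
            self.history = deque(maxlen=window)

        def process(self, sample_array):
            self.history.append(sample_array)
            # Element-wise mean across the retained arrays: one value per signal.
            return np.mean(np.stack(list(self.history)), axis=0)

The point is only that each stage carries its own small history of previous arrays, rather than that history living in one big outer loop.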
In my first attempt to structure this application, I found myself embedding the DAQ Read and all of the processing stages in a humongous loop, with lots and lots of shift registers to carry the retained state from one iteration to the next. This got very unwieldy, and I had gotten only about a third of the processing stages working before realizing that the architecture needed to be rethought.
At that point, I re-educated myself about queues, and I am now trying to design an appropriate architecture for this problem. I am leaning toward partitioning it at the boundaries where concurrent processing makes sense: each group of related pipeline stages would receive its input from the previous group via one queue and pass its output to the next group via another, roughly as in the sketch below.
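Here is the shape of what I have in mind, again as an illustrative Python sketch rather than the real implementation (the number of groups, the lambda placeholder stages, and the final consumer are all assumptions for the example):

    import queue
    import threading

    def stage_group_worker(in_q, out_q, stages):
        # Consume one sample array at a time from in_q, run it through this
        # group's stages in order, and hand the result to the next group via out_q.
        while True:
            sample_array = in_q.get()      # blocks until the upstream group produces
            if sample_array is None:       # sentinel: shut down and propagate downstream
                if out_q is not None:
                    out_q.put(None)
                break
            for stage in stages:
                sample_array = stage(sample_array)
            if out_q is not None:
                out_q.put(sample_array)

    # Wiring: acquisition loop -> group 1 -> group 2 -> final consumer,
    # with one queue at each boundary.
    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    threading.Thread(target=stage_group_worker, args=(q1, q2, [lambda a: a]), daemon=True).start()
    threading.Thread(target=stage_group_worker, args=(q2, q3, [lambda a: a]), daemon=True).start()
    # The acquisition loop would call q1.put(sample_array) once per sampling
    # interval, and the final consumer would call q3.get() to obtain the
    # reduced position values.

Each queue boundary is where I expect to gain concurrency: the DAQ read for interval N+1 can proceed while the groups are still working on interval N.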
Hence my question: what are the issues and limitations surrounding multiple producers and consumers?
Regards,
Hugh L.