
Program architecture, where to process data

Ok, so I keep racking my brain trying to figure out the best way to implement the data processing in my software. I keep thinking of different ways to do the same thing, but I can't settle on any particular approach. I'm also looking for general suggestions for the overall architecture.

 

Basically, I have a connection to a database open at all times that logs status information and data from each module.

 

data_logger.PNG

 

I then have another parallel loop that sends commands to the modules based on user input. I am using the event messaging framework from here:

http://forums.ni.com/ni/board/message?board.id=170&message.id=394109&query.id=312230#M394109

 

module_controller.PNG
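In text form (a rough Python sketch, just to illustrate what the controller loop does; the real code is a LabVIEW event/messaging loop, and the queue, the `modules` dictionary, and the `handle` method are all made-up names):

```python
import queue

# Sketch only: a queue stands in for the event/messaging framework, and
# 'modules' is a hypothetical dict of module references keyed by name.
command_queue = queue.Queue()

def controller_loop(modules):
    """Forward user-input commands to the matching module."""
    while True:
        msg = command_queue.get()            # blocks until a command arrives
        if msg is None:                      # sentinel used to stop the loop
            break
        target, command, payload = msg
        modules[target].handle(command, payload)   # hypothetical handler
```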

 

Then I have each of the modules doing their thing here:

 

modules.PNG

 

 

Each of the run modules just connects to its devices and spits out data into the data reference; the database logger catches the change and logs it to the database.
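To make that flow concrete, here is a minimal sketch (Python, purely illustrative since the actual code is LabVIEW; the queue, the device API, and the `readings` table are placeholder names I'm inventing here, not the real implementation):

```python
import queue
import sqlite3   # stands in for whatever database connection is actually used

data_queue = queue.Queue()   # plays the role of the shared data reference

def run_module(device):
    """Acquire values from a (hypothetical) device and publish them."""
    while device.is_running():                # device API is made up
        data_queue.put((device.name, device.read()))

def database_logger(db_path):
    """Catch each new value and log it to the database."""
    conn = sqlite3.connect(db_path)
    while True:
        name, value = data_queue.get()        # blocks until a module publishes
        conn.execute(
            "INSERT INTO readings (module, value) VALUES (?, ?)", (name, value)
        )
        conn.commit()
```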

 

My main question is where should I process this data?

 

For example, I need to average 10 minutes of 1-second values that originate from the TWSTT module. The data is all stored in the database, so I could poll the database at 10-minute intervals and average the values from the previous 10 minutes.
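As a sketch of that polling option (Python again, assuming a simple `readings` table with a timestamp column; the table and column names are invented for illustration):

```python
import sqlite3
import time

def average_last_10_min(conn, module="TWSTT"):
    """Average the 1-second values the module wrote in the last 10 minutes."""
    cutoff = time.time() - 600
    row = conn.execute(
        "SELECT AVG(value) FROM readings WHERE module = ? AND timestamp >= ?",
        (module, cutoff),
    ).fetchone()
    return row[0]   # None if no rows matched

def averaging_loop(db_path):
    """A separate connection that polls every 10 minutes."""
    conn = sqlite3.connect(db_path)
    while True:
        avg = average_last_10_min(conn)
        print("10-minute TWSTT average:", avg)
        time.sleep(600)
```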

 

Should I keep all the data processing outside of the modules and have one connection writing to the database (as above) and another connection reading/processing data from the database?

Or

Should I connect to the database in the TWSTT module and process the data there?

 

I would need to do similar processing for other modules, and I will need configuration/calibration information from each module to process the data. That would be easy enough to pass through the framework, though I would need to create a 'data processing' module to handle all of it.

 

After the data is processed it will be used in other modules for certain actions. I could either send this data via the messaging framework, or I could have each module poll the database for new processed data.
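The first delivery style might look roughly like this (hypothetical sketch; `message_bus.publish` is just a stand-in for whatever send call the messaging framework provides):

```python
# Hypothetical: after processing, broadcast a lightweight notification so
# interested modules can react without polling the database themselves.
def publish_processed(message_bus, data_id, value):
    message_bus.publish({        # stand-in for the framework's send call
        "type": "processed_data",
        "data_id": data_id,      # consumers could also re-read the row by ID
        "value": value,
    })
```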

 

So I keep coming up with different ways to do things, and I'm not sure which I want to do.  My mind is all over the place right now so if anything isn't clear let me know and I'll try to elaborate.

 

Thanks for any input!

 

Jonathan

-----
LV 8.2/8.5/8.6 - WinXP
Message 1 of 4

Hi Jonathan,

 

Typically, for code that has a core control loop and a data-gathering loop that also needs to process data in parallel, it is recommended to implement a Producer/Consumer architecture (http://zone.ni.com/devzone/cda/tut/p/id/3023). This lets you gather and process data only when it is available, so that you make the best use of your computer's resources. You can then grow this for the various processes you want to incorporate into your program.
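A minimal Producer/Consumer sketch, with Python threads and a queue standing in for LabVIEW's parallel loops and queue functions (the acquisition step is simulated here):

```python
import queue
import threading
import time

data_queue = queue.Queue()

def producer():
    """Acquisition loop: enqueue data only as it becomes available."""
    for i in range(10):
        data_queue.put(i)        # would be a device read in the real program
        time.sleep(1.0)
    data_queue.put(None)         # sentinel tells the consumer to stop

def consumer():
    """Processing loop: sleeps on the queue, so no CPU is wasted polling."""
    while True:
        item = data_queue.get()
        if item is None:
            break
        print("processed", item)

threading.Thread(target=producer).start()
threading.Thread(target=consumer).start()
```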

Will
CLA, CLED, CTD, CPI
LabVIEW Champion
Choose Movement Consulting
choose-mc.com
Message 2 of 4

Two solutions come to mind:

Process the data in place, where it is acquired. This can be done nicely with the producer/consumer design pattern suggested by NI support.

Have one or more modules/objects that process the data. As I'm a bit of a fan of asynchronous processing, I would trigger them with a message over the bus (maybe including some data ID), but keep the data itself in the database and not on the bus.
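A rough sketch of that second option (Python, with a made-up message format and `readings` table): the trigger message carries only a data ID, and the processing module fetches the actual values from the database.

```python
import sqlite3

def on_process_request(msg, db_path):
    """Handle a 'process' message: only the data ID travels on the bus."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT value FROM readings WHERE data_id = ?", (msg["data_id"],)
    ).fetchall()
    values = [r[0] for r in rows]
    return sum(values) / len(values) if values else None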

 

It depends a bit on what kind of processing the data needs, and what is done with the processed data afterwards.

If you have no need for the raw data anywhere else, then I would just process it in place.

If all that data should end up in the database anyhow, it seems a bit safer to write it as soon as it is available.

 

Felix 

Message 3 of 4

Thanks for the replies. I took a step back, re-evaluated what I was trying to do, and here is what I came up with.

 

At first I was leaning towards the producer/consumer design (like Will mentioned) and having a separate module that would asynchronously process the data, but then I realized I would have to send configuration information (calibration values, device IDs, data IDs, etc.) to that module.  That wouldn't be so bad, but the modules themselves already contain that information, so I saw no reason to duplicate it.  I also looked at the acquisition time and processing time and determined that processing the data during acquisition, in the same thread, wouldn't hinder the data collection.

 

Based on those observations I added a database class to the base module class, so now all modules have a connection to the database that is initialized on startup.  The data from each module is saved directly to the database, and a timer triggers a processing state at defined intervals.  In my case I have one module triggering every 10 minutes and processing about 600 items in the database, and another module triggering at 1-hour intervals.  The raw data needs to be kept for archiving purposes, so it is written as soon as it is available to ensure it is saved, as Felix mentioned.
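In text form, the idea looks roughly like this (a Python sketch only; the class, method, and table names are invented here, since the real code is LabVIEW classes):

```python
import sqlite3
import time

class BaseModule:
    """Every module gets its own database connection, opened at startup."""
    def __init__(self, name, db_path, process_interval_s):
        self.name = name
        self.conn = sqlite3.connect(db_path)        # initialized on startup
        self.process_interval_s = process_interval_s
        self._next_process = time.time() + process_interval_s

    def save_raw(self, value):
        """Raw data is written as soon as it is available (for archiving)."""
        self.conn.execute(
            "INSERT INTO readings (module, value) VALUES (?, ?)",
            (self.name, value),
        )
        self.conn.commit()

    def maybe_process(self):
        """Called from the module's loop; the timer that triggers processing."""
        if time.time() >= self._next_process:
            self.process()
            self._next_process += self.process_interval_s

    def process(self):
        raise NotImplementedError

class TwsttModule(BaseModule):
    def __init__(self, db_path):
        super().__init__("TWSTT", db_path, process_interval_s=600)   # 10 min

    def process(self):
        # Average the most recent ~600 one-second readings for this module.
        row = self.conn.execute(
            "SELECT AVG(value) FROM (SELECT value FROM readings "
            "WHERE module = ? ORDER BY rowid DESC LIMIT 600)",
            (self.name,),
        ).fetchone()
        return row[0]
```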

 

I removed most of the 'database logger' section of the code and am now using that section more as a program/error logging routine.  I'm pretty satisfied with the way things are coming together.

 

Thanks again for the responses,

Jonathan

-----
LV 8.2/8.5/8.6 - WinXP
Message 4 of 4