DQMH Consortium Toolkits Discussions


Tips for Logging Data from Multiple Sources

Hello All,

 

I'm looking to bounce around some ideas for logging from multiple data sources. Almost all of my LabVIEW development so far has been with DAQmx. It's been fairly straightforward, but now there's a push for us to start using MODBUS over serial to log data that we currently acquire as analog input with a cDAQ. My DQMH app uses the recommended structure: a Data Acquisition module that acquires using DAQmx and requests the Log Data module to, well, log data to a CSV file. Pretty straightforward, since the data currently comes from a single source.

 

I'm working on a Serial/MODBUS Data Acquisition module that will read from the input registers of our instrument. The end goal is to log the cDAQ data together with the data from the input registers to a single CSV file. The two acquisitions would run at the same sample rate.

 

Two ideas that I have had while planning this out are:

 

  1. Have the cDAQ acquisition drive the serial acquisition.
    • Instead of requesting to log data after the cDAQ acquisition, request the serial acquisition, which in turn requests to log data after appending the serial data to the cDAQ data.
    • I see many future problems with this, especially the dependencies it creates.
    • It would let me set the sample rate of both acquisitions with only the DAQmx timing.
  2. Have a new module, say a "Data Manager" module, that the DAQmx module and the serial module would both send their data to.
    • The Data Manager module would receive data from the two modules, store it in its local cluster, perform the appending, and call the Log Data request (see the rough sketch after this list).
    • One question I have about this solution is where the Log Data request should be called from. Do I call it from one of the MHL cases that the data is received in, or from a separate case? A separate case would run the risk of missing data, since we cannot guarantee another data message won't be enqueued before it. So from which receiving MHL case do I call the Log Data request? Maybe it doesn't matter.
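A rough text-only sketch of idea 2, using Python in place of a LabVIEW block diagram (all names are illustrative, not actual DQMH API): each receiving MHL case stores the newest data from its source, and the Log Data request is sent from the case of the second-arriving source once both have reported.

```
latest = {"cdaq": None, "serial": None}   # stand-in for the module's local cluster

def on_cdaq_data(samples):
    """MHL case: 'cDAQ Data Received'."""
    latest["cdaq"] = samples

def on_serial_data(registers):
    """MHL case: 'Serial Data Received'."""
    latest["serial"] = registers
    # Calling the log request from the second-arriving source's case yields
    # one combined row per acquisition cycle.
    if latest["cdaq"] is not None:
        request_log_data(list(latest["cdaq"]) + list(latest["serial"]))

def request_log_data(row):
    """Stand-in for the DQMH 'Log Data' request to the logging module."""
    print(",".join(str(v) for v in row))
```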

 

I appreciate any advice or tips and am open to ideas. Searching the forums, I saw that there are people acquiring from multiple sources, but not much detail on how they handle the data.

 

Edit: If you think both of my approaches would be a poor way to handle this, please let me know.

Message 1 of 9
(4,199 Views)

Hello Ryan,

 

please take a look at the diagram I drew. This is the way I handle the "multiple sources to one log file" problem.

I save the newest data from every source to the Message Handling Loop shift register, and when the helper loop sends the command to log to file, I log the newest data from every source to the file.

The helper loop is necessary so that the new-data cases in the Message Handling Loop can run and don't pile up.
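For readers without the attached diagram, here is a rough text-only sketch of the pattern (Python stand-in, since a LabVIEW diagram can't be pasted as text; all names are made up): the MHL keeps only the newest data per source, and a helper loop periodically enqueues the "log to file" command.

```
import queue
import threading
import time

messages = queue.Queue()   # stand-in for the module's message queue
newest = {}                # stand-in for the MHL shift-register cluster

def helper_loop(period_s=1.0):
    """Owns the timing: periodically tells the MHL to log the newest data."""
    while True:
        time.sleep(period_s)
        messages.put(("log to file", None))

def message_handling_loop(path="log.csv"):
    threading.Thread(target=helper_loop, daemon=True).start()
    with open(path, "a") as f:
        while True:
            msg, data = messages.get()
            if msg == "source A data":
                newest["A"] = data                 # overwrite: keep newest only
            elif msg == "source B data":
                newest["B"] = data
            elif msg == "log to file":
                f.write(",".join(str(newest.get(k)) for k in ("A", "B")) + "\n")
            elif msg == "exit":
                break
```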

 

I am also thankful for criticism of my approach.

Message 2 of 9
(4,157 Views)

I like your approach, Marco. Just to spell out the obvious: the helper loop takes care of timing, right?

 

As to Ryan's question, I think there are many things to take into account. One single solution might not be able to cater to everybody's needs. Is it ok to miss out on samples (as only the newest ones are being logged)? What if sources have different sample counts (rates)? What if they need to be aligned by timestamp? How to deal with errors, gaps, delays, timeouts? One combined buffer or single buffers for each source?

 

I've encountered various scenarios, so I'd also be very much interested in others' opinions!




DSH Pragmatic Software Development Workshops (Fab, Steve, Brian and me)
Release Automation Tools for LabVIEW (CI/CD integration with LabVIEW)
HSE Discord Server (Discuss our free and commercial tools and services)
DQMH® (The Future of Team-Based LabVIEW Development)


Message 3 of 9
(4,152 Views)

As Joerg says, there are a lot of variables, but I tend to do what Lichtenheld suggested. It seems pretty multipurpose and covers a lot of cases.

 

Sometimes I use an aggregator that collects data from multiple places and then coordinates and timestamps it before sending it on to be logged.
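A minimal illustrative sketch of that aggregator idea (Python stand-in, hypothetical names): collect the latest reading from each source, stamp the combined record once, then hand it on to be logged.

```
from datetime import datetime

def aggregate(latest_by_source):
    """latest_by_source: dict of source name -> most recent reading."""
    row = {"timestamp": datetime.now().isoformat()}
    row.update(latest_by_source)
    return row

# e.g. aggregate({"cdaq_ai0": 1.23, "modbus_reg_40001": 57}) produces one
# timestamped record that a single Log Data request can write as a CSV line.
```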

 

Sam Taggart
CLA, CPI, CTD, LabVIEW Champion
DQMH Trusted Advisor
Read about my thoughts on Software Development at sasworkshops.com/blog
GCentral
Message 4 of 9
(4,140 Views)

Lichtenheld,

 

Appreciate the response. I like the idea of adding the helper loop to my logging module.

 

Some thoughts I had about it are:

  1. It will most likely cause my app to "miss" data whenever the Save Data request is called two or more times before the Log Data request is handled. For my application this is not a concern. Do you make the Log Data message a priority message?
  2. It allows me to separate the acquisition rate from the logging rate (and the display rate). I could see some problems if I were to call the Save Data request at a rate much higher than the Log Data request.

Edit: I took a second look at the diagram. I see you have a request to log data, which would route us through the EHL. Would a better solution be to share the queue ref with the helper loop and send the message straight to the MHL?

Once I plan out the module some more, I will report back on how this solution works. I think it should work great for my application, though.

Message 5 of 9
(4,132 Views)
Hello Ryan,

you are right. "Missing" data is intended behaviour, to separate the acquisition rate from the logging rate. But, as Sam mentioned, there are certainly ways not to miss data.

In my case the Log Data message is a priority message. At this point I don't really know why I made it a priority message, but I don't think it is necessary when you use a helper loop (and you must use a helper loop!).
I also use the queue ref in the helper loop. That's why the arrow goes straight from the helper loop to the Message Handling Loop. The "Request" label on it is misleading.

As a tip: I ran into several memory issues with this structure (see my other post in the DQMH forum). In the end it was due to a failure in the "Stop Module" event of the helper loop, but it was hell to find that failure.
While you develop, it is also a good idea to use a counter in the EHL and MHL.
The combined number of broadcasts from all sources shouldn't be higher than around 300 per second (less is better, of course).
Message 6 of 9
(4,114 Views)

I believe all the proposed solutions are good; I didn't see anything bad.

 

I would throw in yet another option: have the helper loop do the save to file directly. This ensures that the logging happens as soon as the timeout case executes, and you don't have to worry about missing data. It means you need to figure out how to share the data to be logged between the MHL and the helper loop. One option would be to use the Current Value Table toolkit or a similar approach, where all the data from the different sources is saved in an FGV table (this can be an array of clusters or a variant look-up table), and the helper loop just logs to file the current value of every variable the logger has received.
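A hedged sketch of that option in Python terms (the class below is a stand-in for an FGV / Current Value Table, not the actual toolkit API): the MHL data cases write into a lock-protected table, and the helper loop's timeout case reads the current values and writes them to file directly.

```
import threading

class CurrentValueTable:
    """Lock-protected look-up table shared by the MHL and the helper loop."""
    def __init__(self):
        self._lock = threading.Lock()
        self._values = {}

    def write(self, name, value):      # called from the MHL data cases
        with self._lock:
            self._values[name] = value

    def snapshot(self):                # called from the helper loop
        with self._lock:
            return dict(self._values)

cvt = CurrentValueTable()

def helper_loop_timeout(csv_file):
    """Runs in the helper loop's timeout case: write directly, no extra message."""
    values = cvt.snapshot()
    csv_file.write(",".join(str(values[k]) for k in sorted(values)) + "\n")
```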

 

Other considerations for Logging modules:

  • Make sure that your Stop Module event is not a priority event and that your Exit message is not a priority message (this is the default for all DQMH modules). You want to give the DQMH module a chance to finish logging any pending data before exiting. That said, you might want an abort-logging option that does abort the logging in case you need to during troubleshooting.
  • A logging module does not need to write to file at the same rate as it logs data. You might want to read that sentence again; it sounds like a riddle. You can buffer the different data to be logged, including their timestamps, and then write to file when the buffer gets full. So you could, for example, log data to the buffer every second but write to file every 30 seconds; in this case your logging buffer would be an array of 30 values (see the sketch after this list).
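A minimal sketch of the buffering idea (illustrative Python, assumed names): each Log Data call appends a timestamped row to a buffer, and the file is only written once the buffer reaches, say, 30 rows.

```
from datetime import datetime

BUFFER_SIZE = 30        # e.g. 30 one-second rows per disk write
_buffer = []

def log_data(values, path="log.csv"):
    """Buffer a timestamped row; only touch the disk when the buffer is full."""
    _buffer.append([datetime.now().isoformat()] + [str(v) for v in values])
    if len(_buffer) >= BUFFER_SIZE:
        with open(path, "a") as f:
            f.writelines(",".join(row) + "\n" for row in _buffer)
        _buffer.clear()
```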

I hope that helps, regards,

Fab

For an opportunity to learn from experienced developers / entrepreneurs (Steve, Joerg, and Brian amongst them):
Check out DSH Pragmatic Software Development Workshop!

DQMH Lead Architect * DQMH Trusted Advisor * Certified LabVIEW Architect * Certified LabVIEW Embedded Developer * Certified Professional Instructor * LabVIEW Champion * Code Janitor

Have you been nice to future you?
Message 7 of 9
(4,107 Views)

Instead of a Helper Loop for timing, one can also use one of the data sources to trigger the logging.  I often find I have one data source that is the "primary measurement", with other sources being "ancillary".   Every primary measurement is saved, with the latest available ancillary data.
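A small illustrative sketch of that pattern (Python stand-in, hypothetical names): ancillary sources just update a latest-value store, and every primary sample immediately writes a row that includes those latest ancillary values.

```
latest_ancillary = {}   # updated whenever an ancillary source reports

def on_ancillary_data(name, value):
    latest_ancillary[name] = value

def on_primary_sample(timestamp, value, writer):
    """Every primary sample is logged, paired with the latest ancillary data."""
    row = [timestamp, value] + [latest_ancillary[k] for k in sorted(latest_ancillary)]
    writer.writerow(row)   # e.g. a csv.writer on the open log file
```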

Message 8 of 9
(4,104 Views)

@M.Lichtenheld wrote:
The combined number of broadcasts from all sources shouldn't be higher than around 300 per second (less is better, of course).


I don't think I will see anywhere near 300 broadcasts per second, but I will make sure I keep an eye on it.

Message 9 of 9
(4,101 Views)