We are using a real-time stimulus profile to execute our test cases, which are written in CSV format. We log aliases for post-processing, using the "Start Logging Configuration" step to log them. When we run a test that contains multiple stimulus profiles, the TDMS file sometimes stops logging at random: data/values are missing after some point. For example, if the stimulus profile runs for 2 minutes, it stops logging after 1 minute. Our logging rate is 1000 and the target rate in the SDF is 1000.
But when we run these stimulus profiles individually, there is no issue. The problem is observed randomly across various stimulus profiles. I am attaching one of the stimulus profiles. Please help us!
Based on what you've attached, it looks like you're starting logging, running your RT sequence, and ending logging. There's nothing in the stimulus profile that I can see that would cause you to not log information acquired within the RT sequence.
Could you check to make sure that your RT sequence is acquiring all of the information you expect? You could try logging information on the target side as well within the RT sequence by enabling the Embedded Data Logger and comparing the data sets to see if they conflict. If they don't conflict, then the problem probably lies within the RT sequence.
If they do conflict, you might try logging physical channels as opposed to aliases to see if it makes any difference.
Thanks, Andy, for the reply. The stimulus profile logging step seems to be the one having the issue: it stops at some point in time, and there is no data after that. When we started using the NI TestStand logging step, it seems to work. This is FYI.
Does there seem to be a pattern to when the stop happens? Will you lose data every time after 1 min if you set the logging rate to 1000 and the target rate in the SDF to 1000, or is it only sometimes? If it does happen every time, does changing the logging rate or the target rate affect when the losses occur?
Did you ever get a resolution to this issue? I just had something similar happen. We are using VS 2015 on a PXIe chassis running at 1000 Hz that is connected directly to the host PC. We ran a stimulus profile that starts a logging session at 100 Hz and then keeps a counter of when a certain condition is met. Recording is stopped after the sequence is stopped. The counter continued to run, but the logging stopped after approximately 5 hours. The test (counter) continued to run for 20 more hours, so the stimulus profile was still running and should have been recording. We have had larger TDMS files than this one with the same test/stimulus profile, and there was plenty of room on the PC hard drive. Has anyone seen this happen, or does anyone have a resolution?
It looks like the original poster got everything working with the VeriStand Steps for TestStand. Do you have access to TestStand to call your RT Sequence?
Additionally, 20+ hours is a long time for a single test. In your sequence, would it be possible to split up TDMS logging across multiple steps to break up your file into smaller pieces?
Chris D. | Applications Engineer | National Instruments
Thanks for getting back to me. We don't use TestStand or have access to it on this setup. We were able to record 10 hrs of data (620 MB) the day before without any issues. I have set things up to split the file when the data reaches 100 MB, so hopefully that will help. I have never had data stop logging in a stimulus profile before and wanted to see if anyone else has had this issue and, if so, what the solution might be. Having a TestStand sequence call the VS stimulus profile doesn't seem like a practical solution unless TestStand was being used to begin with; a VeriStand stimulus profile and its logging should stand on their own. Having the logging feature in VeriStand is a big selling point for us, since we don't have to install a separate recorder along with VeriStand control. But if VeriStand is not reliable in its logging, we are going to be in trouble with our customers.
We still have issues with stimulus profile logging. We tried the TestStand logging step, and it also randomly shows the same issue, though I need to gather more data on this. What we understand from debugging is that the VeriStand engine is, I believe, losing the TCP/IP connection; this can be observed using the TCP Overflow Count system channel. We are still working on the issue.
Sorry for the delayed response; my notifications were turned off. Thanks.
The TCP Overflow Count channel keeps track of how many times streaming data has been overwritten. This could indicate data loss, but I would be surprised if it would cause logging to stop altogether. Is this the only reason you believe it is a TCP/IP connection issue? Are any errors being reported?
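One cheap way to check whether overflows line up with the dropouts is to poll the TCP Overflow Count channel periodically and note when the counter increases. This is only a sketch: `read_tcp_overflow_count()` is a hypothetical stub standing in for however you read the channel in your setup (e.g., through the VeriStand gateway API or a workspace tool); only the delta-detection logic is real.

```python
import time

def read_tcp_overflow_count():
    """Hypothetical stub: replace with your actual read of the
    'TCP Overflow Count' system channel (e.g., via the VeriStand
    gateway API)."""
    raise NotImplementedError

def detect_overflow_deltas(samples):
    """Given successive readings of TCP Overflow Count, return the
    poll indices at which the counter increased, i.e. where streaming
    data was overwritten between polls."""
    deltas = []
    for i in range(1, len(samples)):
        if samples[i] > samples[i - 1]:
            deltas.append(i)
    return deltas

def poll_overflow(period_s=1.0, n_polls=60, reader=read_tcp_overflow_count):
    """Collect n_polls readings, one per period, and report where the
    counter jumped. Timestamping each jump lets you correlate it with
    the point where the TDMS file stops."""
    samples = []
    for _ in range(n_polls):
        samples.append(reader())
        time.sleep(period_s)
    return detect_overflow_deltas(samples)
```

If the indices returned line up with the time your log file goes quiet, that strengthens the TCP/IP hypothesis; if the counter never moves while the log still dies, the cause is probably elsewhere.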
What you're seeing isn't totally unreasonable (or unexpected, in certain test configurations). Host-side logging does rely on the TCP connection to the VeriStand engine that is instantiated upon deployment. While not inherently lossy, the Communication Send Loop within the VeriStand Engine is a timed loop with low-priority relative to other loops in the engine. VeriStand will deprioritize that loop if it has to allocate processor resources to other higher-priority processes. Here's a more verbose breakdown of the loops within the engine and their priorities. It's admittedly a design tradeoff; however, most people would rather have their PCL, Custom Devices, and models run on time to maintain a safe test than have verbose logging if worst comes to worst.
To abstract out the possibility of data loss over the gateway, I'd recommend logging on the RT target and periodically transferring that file over to your host PC. You can use the Embedded Datalogger custom device that ships with VeriStand and replace the "Start Logging" and "Stop Logging" steps in your Stimulus Profile with a step that sets the value of the trigger channel within Embedded Datalogger to 1 (which enables logging) and then sets that value back to 0 when you wish to stop logging. That'll ensure that you maintain your data integrity since no network communication is involved. Since your tests are running for an extended period of time, I'd also suggest configuring the custom device to segment the files once they reach a certain file size.
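The trigger-channel pattern above can be sketched as a small wrapper: set the trigger to 1, run the test body, and always set it back to 0, even if the body fails, so the logger isn't left running. This is a sketch, not the actual VeriStand API: `set_channel_value` is a hypothetical stub for whatever mechanism writes a channel in your configuration, and the trigger path shown is an assumed example, not a real path from the poster's system.

```python
def set_channel_value(channel_path, value):
    """Hypothetical stub: replace with your actual channel write
    (e.g., through the VeriStand gateway API)."""
    raise NotImplementedError

# Assumed example path to the Embedded Data Logger trigger channel.
TRIGGER = "Targets/Controller/Custom Devices/Embedded Data Logger/Trigger"

def run_logged_section(body, set_value=set_channel_value, trigger=TRIGGER):
    """Enable target-side logging (trigger = 1), run the test body,
    and always disable logging (trigger = 0) afterward, even on error."""
    set_value(trigger, 1)
    try:
        body()
    finally:
        set_value(trigger, 0)
```

The `try/finally` is the important part: it mirrors replacing the "Start Logging"/"Stop Logging" steps with trigger writes while guaranteeing the stop write happens no matter how the sequence exits.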
Since you're using TestStand to call these sequences, you could throw in an ActionVI that periodically polls the RT system for a new log file (indicating you've reached your specified file size) and FTPs/WebDAVs that file over to the host when a new file is available. From there, you can aggregate the segmented files and post-process as you would otherwise.
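The polling-and-transfer step could look something like the sketch below, assuming plain FTP access to the RT target. Host name, directories, and the anonymous login are placeholders for your own setup; the only part exercised here is the "which segments are new since the last poll" logic.

```python
import ftplib
import os

def new_segments(previous, current_listing, suffix=".tdms"):
    """Return segment files that appeared since the last poll,
    sorted by name (segmented files usually sort by index)."""
    return sorted(f for f in current_listing
                  if f.endswith(suffix) and f not in previous)

def poll_and_fetch(host, remote_dir, local_dir, seen):
    """Connect to the RT target over FTP, download any .tdms segments
    not yet in `seen`, and record them. Call this periodically from
    your ActionVI-equivalent step. Host/paths are example values."""
    with ftplib.FTP(host) as ftp:
        ftp.login()  # anonymous; use ftp.login(user, passwd) if secured
        ftp.cwd(remote_dir)
        for name in new_segments(seen, ftp.nlst()):
            with open(os.path.join(local_dir, name), "wb") as f:
                ftp.retrbinary("RETR " + name, f.write)
            seen.add(name)
```

Once the segments are on the host, you can concatenate or post-process them in name order just as you would a single log file.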
Hopefully that's a workable solution. I would suggest trying to correlate the CPU usage within individual cores of your RT system to the times you're losing data, but that would probably only serve to corroborate the above hypothesis and isn't really a solution. If you have some extra time and do test this, I'm confident you'll see a spike in CPU usage correlated to where your host-side log file starts to lose data in the current configuration.