BreakPoint


Discussion: Automated Test Script Architecture for HIL Systems

Hello everyone,

 

I'm posting this because I'd really like some in-depth discussion and opinions from other professionals about how they have implemented automated testing, not as it relates directly to National Instruments hardware/software, but testing as a whole.  There seems to be a lot of information on HIL system architecture and real-time testing in general, but I rarely see topics on architectures for developing automated test scripts and fitting them all together to completely test a piece of software.  Maybe I just haven't found the right place to look (if that's the case, please post a link!).

 

My Background

I have been doing HIL testing in one form or another for the past several years.  I've used and seen many different combinations of hardware and software, including some proprietary systems, to automate testing.  I've used a number of hardware and software tools such as those listed here:

Hardware: National Instruments, dSPACE, Opal RT

Software: LabVIEW, VeriStand, CVI, ControlDesk/AutomationDesk, Matlab/Simulink, Python

I've also used other "random" pieces of hardware, such as USB-controlled relays, and some proprietary test systems.

The goal, for me, has always been to automate the entire test so that someone can click "run" and walk away.  These tests have typically been designed as regression-style tests that attempt to black-box test the control software, diagnostics and fault reactions, and corner cases that a typical user may never encounter.

 

In my positions, there has been little opportunity for me to get a good feel for how other testers in industry go about implementing and designing their test setups (especially on the automated-script side).

 

So that's where I come from.  Now on to... The Topic.

 

The Topic

The question really is,

How do you implement your automated testing?

and

How "in depth" does your testing go?

 

How do you implement your automated testing?

There are many ways to write and run automated testing and I'm just going to list several that I have thought of or used as seeds for discussion.

 

To alleviate any potential for miscommunication, assume a HIL set up like this:

 _____________        __________        ________________
|Host Computer| <--> |HIL System| <--> |ECU/Hardware/DUT|
    (host)              (HIL)

 

I will refer to them as host and HIL from here on.

 

Scripts can be written in some language (C, Python, LabVIEW, etc.).  They may be run to deploy and start up the HIL, but they ultimately run on the host and control the HIL through [typically] a TCP/IP connection.  Since the connection is usually TCP/IP, there are potential problems with determinism, but maybe it's "good enough" depending on what is being tested.
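As a concrete illustration, a host-side script along these lines might wrap that TCP/IP link in a small client.  The command framing (`SET`/`GET` text lines), channel names, and address below are purely hypothetical; a real vendor API would look different.

```python
import socket

class HilClient:
    """Minimal host-side client for a HIL over TCP/IP.
    The SET/GET line protocol here is hypothetical, for illustration only."""

    def __init__(self, host="192.168.0.10", port=5005, timeout=2.0):
        # One persistent connection for the whole test run.
        self.sock = socket.create_connection((host, port), timeout=timeout)

    def write(self, channel, value):
        # Send a write command and wait for the HIL's acknowledgement.
        self.sock.sendall(f"SET {channel} {value}\n".encode())
        return self.sock.recv(64).decode().strip()

    def read(self, channel):
        # Ask the HIL for the current value of a channel.
        self.sock.sendall(f"GET {channel}\n".encode())
        return float(self.sock.recv(64).decode().strip())
```

Everything above runs on the host, so each call carries the non-deterministic network round trip mentioned above; it is only as accurate as the OS and network stack allow.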

 

Scripts can be implemented directly in a real-time model or within something like Stateflow.  This gets deployed to the HIL and runs in real time.  The problem I see is that if you are running days (literally days) worth of regression tests, as I have done, you can't "fit" all your tests in a model without running out of memory or making things non-deterministic (perhaps I'm just not clever enough to have come up with a good solution yet).

 

I always imagined the best approach being a combination of the above: run most of your scripts from the host, but when you get to the determinism-critical portions of the test, set a "flag" to start the deployed portion of the test.
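A rough sketch of that handshake, with an in-memory stand-in for the HIL interface (the channel names `run_deployed_test` and `deployed_test_done` are made up for illustration):

```python
import time

class FakeHil:
    """Stand-in for the real HIL I/O layer, for illustration only."""
    def __init__(self):
        self.channels = {"run_deployed_test": 0, "deployed_test_done": 0}
    def write(self, ch, val):
        self.channels[ch] = val
        if ch == "run_deployed_test" and val == 1:
            # A real HIL would run the deployed sequence deterministically
            # here and raise the done flag when it finishes.
            self.channels["deployed_test_done"] = 1
    def read(self, ch):
        return self.channels[ch]

def run_hybrid_test(hil, timeout_s=10.0):
    # ... host-side (non-deterministic) steps run first ...
    hil.write("run_deployed_test", 1)   # flag starts the deployed portion
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if hil.read("deployed_test_done"):
            return True                 # deterministic portion finished
        time.sleep(0.05)                # host polls; timing here isn't critical
    return False
```

The point is that the host only needs loose timing to raise the flag and poll for completion; the tight timing lives entirely in the deployed sequence.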

 

How "in depth" does your testing go?

Where do you draw the line and say this test is good enough and will confirm the software has "no" bugs?  This question is impossible to really answer, as it depends on what you are doing.  If you are testing software for medical devices, I hope there is a higher standard than for the TV you bought.

 

I guess what I am trying to get at with the words "in depth" is: what types of test cases do you implement?  Maybe the best way is to give an example of what I mean.  Let's say I have an ECU that requires diagnostic testing of a driver pin, and this driver pin has diagnostics for a short to battery, a short to ground, or an open circuit.

 

Simple case: we set a fault, check the fault is present, wait some time, and check that the ECU recognizes the fault.  Repeat for each type of fault. Done?

 

Maybe.  What about the fault debounce time?  Is there a different debounce depending on fault type? Or depending on whether you go from fault to fault, or from faulted to not faulted?

 

If those things matter, we have just created the 12 test cases that you see below.

 

okay to faulted

okay -> STB

okay -> STG

okay -> OC

 

fault to fault

STB -> STG

STB -> OC

STG -> STB

STG -> OC

OC -> STB

OC -> STG

 

fault to okay

STB -> okay

STG -> okay

OC -> okay

 

One more thing: do we want to make sure the diagnostic doesn't show up before the debounce time?  If so, we have just doubled our test cases, since those 12 were all "more than debounce" tests.  So one pin could potentially lead to 24 different test cases, and this can take a lot of time to run when you have a whole ECU's worth of diagnostics to test.  And this doesn't even take into account any testing of the control strategy; it's just diagnostics testing.
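For what it's worth, that case explosion is easy to enumerate mechanically, which at least keeps the script honest about coverage.  A quick sketch in Python, using the state names from the list above:

```python
from itertools import permutations

FAULTS = ["STB", "STG", "OC"]
STATES = ["okay"] + FAULTS

# Every ordered pair of distinct states covers okay -> fault,
# fault -> fault, and fault -> okay in one shot.
transitions = list(permutations(STATES, 2))        # 12 transitions

# Checking both "before debounce" (fault must NOT be reported yet) and
# "after debounce" (fault MUST be reported) doubles the count.
cases = [(src, dst, phase)
         for src, dst in transitions
         for phase in ("before_debounce", "after_debounce")]   # 24 cases
```

Feeding a list like `cases` into a parameterized test runner is one way to make sure none of the 24 combinations silently falls through the cracks.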

 

So when are we, as scripters and testers, done?  How many of you go into that much detail, or better, should be going into that much detail?

 

 

The Conclusion [ish]

I hope that we can generate a meaningful discussion, as I want to learn more about what others out there are doing in this area.  There are so many ways and levels of testing that could be done and automated.  Where and how do you draw the line?  What do you see day to day that you like or dislike about the way testing is being executed?  Is there a "best practices for automated test scripts" article somewhere?  What lessons have you learned, and what have you discovered?  How much effort is it worth?

 

Is it similar to what I have seen and done?

Are there better ways to do it? What are the best practices for writing HIL test scripts?

 

 

Message 1 of 4

Good topic, I hope some good discussions come out of it.  My history with HILs has been pretty minimal: a custom LabVIEW RT PXI controller running a model and script downloaded from the host, and VeriStand.  Both methods had relatively short sequences.  Sure, tests could run for days, but if they did, they usually wouldn't have many steps to run through; it was primarily something like changing from a discharging battery state to a regen battery state once an hour.  So the number of steps was always manageable.  But even so, I always imagined that if I had a huge sequence, I could still put it all down on the HIL.

 


@joedowdle wrote:

 

Scripts can be implemented directly in a real-time model or within something like Stateflow.  This gets deployed to the HIL and runs in real time.  The problem I see is that if you are running days (literally days) worth of regression tests, as I have done, you can't "fit" all your tests in a model without running out of memory or making things non-deterministic (perhaps I'm just not clever enough to have come up with a good solution yet).


In my mind this is the best option.  Being non-deterministic by sending commands from the host seems like it just can't work for any real testing.  Windows being the way it is, I can't get better than, say, 50 ms of accuracy between commands, and that is just too slow for anything I've seen that requires a HIL.

 

So for me the only real solution is to download the whole sequence of steps to the HIL; the HIL is then told to start, and it maintains timing and performs the steps as commanded.

 

But how do we get around the issue of huge sequences that run for days, with tons of steps that won't fit into memory all at once?  Why can't we make our sequence a file that gets FTP'd over to the HIL, and then have the HIL read the next 100 steps at a time; if we have fewer than 50 steps in memory, go read another 100, or something like that.  Then at most about 150 steps are in memory at one time.  I think this can work as long as we can read the sequences from disk faster than we can execute them, and to be honest I think that is reasonable, even if you are logging to disk at the same time.  In that example, how long does it take to go through 100 steps?  If we go by the VeriStand timing of 10 ms per loop, we are at 1000 ms.  As long as the amount of data is reasonable, I think you can get away with reading 100 steps in 1000 ms.
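The refill scheme described above (read 100, top up whenever fewer than 50 remain) can be sketched in a few lines; here it is in Python with the thresholds from that example, and `step_reader`/`execute` as stand-in names for the file-reading and step-execution layers:

```python
from collections import deque

CHUNK = 100        # steps read from disk per refill
LOW_WATER = 50     # refill when fewer than this many steps remain in memory

def run_sequence(step_reader, execute):
    """Stream a long test sequence so that at most roughly
    CHUNK + LOW_WATER steps are ever held in memory at once."""
    buffer = deque(step_reader(CHUNK))      # prime with the first chunk
    while buffer:
        execute(buffer.popleft())           # run the next step in order
        if len(buffer) < LOW_WATER:
            # step_reader returns an empty list at end of file,
            # so this loop drains naturally when the sequence ends.
            buffer.extend(step_reader(CHUNK))
```

The disk read happens while well over 50 steps' worth of execution time remains buffered, which is the slack that makes "read faster than you execute" achievable.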

 

To help with this I might make the sequence a TDMS file, because you can keep the file reference open and use the length and offset controls to get the subset of data you want.  A custom-written binary file could do something similar, but I see TDMS as being pretty fast as long as you've defragmented the file before attempting to open it.

 

Keep in mind I've never actually done this, but it is always how I imagined I would if I ever had a requirement for this type of custom real-time sequencing.

Message 2 of 4

I don't think it is impossible to have some level of determinism using Windows, but you are right, there are latency issues.  We would see, if my memory serves me, < 15-20 ms response to writes and < 30 ms response for reads on one of the systems we used.  That accuracy was adequate for the majority of our tests, where it was not critical that we test diagnostic debounces within 5 ms, but our tests would pass with 10-20 ms tolerances.

 

We were also able to run our scripts from the host PC over an optical cable to a dSPACE unit while simultaneously running a real-time plant model on the host PC with a separate TCP/IP connection.  This led me down the path that not everything needs to run on the HIL system itself if you can break things apart appropriately.  I think it really depends on how fast you need to loop deterministically.  The plant model needs to be deterministic, but your inputs don't necessarily have to start *exactly* when you say go, as long as you know how that will affect your results.

 

I also employed queues and buffers to store data during times when I knew the system would generate data faster than the host PC could read it.  This worked really well, and I used it extensively for high-speed communications protocols.
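In Python terms, that pattern is a bounded producer/consumer queue; the sample format and sizes below are illustrative:

```python
import queue
import threading
import time

# Bounded, so a stalled host fails loudly instead of eating all memory.
buf = queue.Queue(maxsize=10_000)

def hil_producer(n_samples):
    """Stands in for the real-time side pushing timestamped samples."""
    for i in range(n_samples):
        # Timestamping at the source lets the host post-process correctly
        # later, even when it falls behind in the moment.
        buf.put((time.monotonic(), i))

def host_consumer(n_samples):
    """Host drains the queue at whatever pace it can manage."""
    return [buf.get() for _ in range(n_samples)]

producer = threading.Thread(target=hil_producer, args=(1000,))
producer.start()
data = host_consumer(1000)   # samples arrive in order, none lost
producer.join()
```

On a real system the producer side would live on the HIL (or in a driver-level FIFO), but the decoupling idea is the same: the bursty fast side never waits on the slow side until the buffer is actually full.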

 

I do really like the idea of using something like FTP to load the next series of steps to be taken. Rather than steps, though, I think I would try to design my tests so that I could upload an entire "feature test."  I'd feel more comfortable knowing I'd be able to get at least an entire feature tested before potentially introducing some other issue into the mix and having it crash.  The worst is when you kick off a test before going home, only to come back and see it made it just 5 minutes into the script.

 

Even though I think this is a great idea, I still would argue it's not entirely necessary.  Using buffers and queues can save the information until you've had time to grab it with the host, and as long as you timestamp the information you can do all the post-processing and analysis that you want.  One big downside is that you may be unable, or severely limited in your ability, to do any real-time analysis, so everything you report back on the host will be delayed (which really isn't anything new in the land of real-time).  And if your test can diverge based on a result, it could have issues if things don't run "real-time enough."
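As a small example of that timestamp-driven post-processing, here is a sketch that recovers a measured debounce time from a logged record stream after the fact (the `(time, channel, value)` log format is an assumption):

```python
def measured_debounce(log, fault_set_t, fault_channel):
    """Given a timestamped log [(t, channel, value), ...] and the time the
    fault was injected, return how long the ECU took to report it, or
    None if it never did.  Illustrative log format, not any vendor's."""
    for t, channel, value in log:
        # First nonzero reading on the diagnostic channel at or after
        # injection marks the end of the debounce interval.
        if channel == fault_channel and value and t >= fault_set_t:
            return t - fault_set_t
    return None
```

Because the analysis runs offline, host-side jitter no longer matters; only the accuracy of the timestamps taken at the source does.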

 

I really don't think there is one "right" answer here but I definitely enjoy hearing other perspectives on the matter.

Message 3 of 4

One thing I'd really like to try in the future is to apply this concept of a downloadable script from host to HIL using a myRIO.  There is a board-only version that would probably be sufficient for the type of testing I'm thinking of doing.  As you mentioned, the test can then be logged on the HIL (a myRIO in this case) and post-processed later, probably by sending summary and detailed results back to the host.  During testing I'd plan on sending data back at whatever rate is possible for updating the UI.  I could really stretch the hardware by having different FPGA bitfiles to download if resourcing and performance become an issue.  With this in mind I could possibly come up with a HIL with programmable scripting capability, a real-time FPGA, and maybe modeling, for under $2000 in hardware.  Still, it's a lot of work, and who knows if I'll ever get around to it, but if I do I'll be sure to post back my results.

 

As for Windows timing, I just see it as too much risk in most places where I would want a HIL.  It isn't the response time that concerns me but the jitter.  What if Windows goes out to lunch and locks up for a few seconds? What in the world would my script do then?  What about blue screens, or various other Windows crashes?  It can be done, and I've seen it done and work, but again, in my world I don't think I'd want to rely on it.

Message 4 of 4