
Random Ramblings on LabVIEW Design


Re: Debug Driven Design/Development - What?

swatts
Active Participant

Hello Lovelies,

Hope you are all doing OK.

I'm a bit broken, so I'm bored; on the upside, this gives me the energy to write another article....

Sam Taggart and Joerg Hampel have both talked about this subject; I recommend you go and check out their material too.

 

https://www.sasworkshops.com/debug-driven-development/

 

I'd also like to thank Brian Powell for excellent analogies and Fab for all her input.

Preparation

Continuing the kitchen analogy, I wonder if there's a connection to "mise en place": getting everything organized and in place before you start to cook.

 

Error Handling

For me the point of agreement is with my error handling stuff, this being one of the fundamental things in our template.

SSDC Error Handling Example

SSDC Error reporting as an important template feature

We can see in the areas marked red (A, B, C, D) that we're logging errors at the end of the event, state, and UI queue loops. We do this in multiple-loop programs because knowing where an error occurs gives much better feedback. We also have a loop dedicated to reporting errors (E).

 

That covers reporting and logging errors; we handle errors in the state machine with dedicated "error" states, because the error state is an important state to model in your systems.
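Since LabVIEW diagrams don't translate to text, here is a rough Python sketch of the same pattern - per-loop error logging plus a dedicated reporting loop. All names here are illustrative, not from the SSDC template.

```python
import queue

# Queue feeding the dedicated error-reporting loop (area E)
error_queue = queue.Queue()

def handle_ui_message(msg):
    """Hypothetical message handler - real work would go here."""
    print("handling", msg)

def log_and_forward(loop_name, exc):
    """Log at the end of the loop that saw the error (areas A-D),
    so the report says *where* it happened."""
    error_queue.put((loop_name, exc))

def ui_queue_loop(ui_queue):
    """One of several parallel loops; each catches its own errors."""
    while True:
        msg = ui_queue.get()
        if msg == "shutdown":
            break
        try:
            handle_ui_message(msg)
        except Exception as exc:
            log_and_forward("UI Queue Loop", exc)

def error_reporting_loop():
    """The single loop dedicated to reporting errors (area E)."""
    while True:
        loop_name, exc = error_queue.get()
        print(f"[{loop_name}] error: {exc}")  # real code: file/dialog/log
```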

 

HSE Logger

HampelSoft (Hampel Software Engineering) have tooling that they use in the development process purely to help with debugging. Credit to Manu for doing the hard work!

 

https://dokuwiki.hampel-soft.com/code/open-source/hse-logger

 

Joerg says that having HSE Logger in place first thing, before even starting the implementation, makes the actual development process much simpler.
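HSE Logger itself is a LabVIEW tool (see the link above), so the snippet below is NOT its API - just a minimal, hypothetical text-form illustration of the "logger first" idea: set up logging before writing any feature code, then have everything report through it.

```python
import logging

# Do this before any feature code exists, so every module that follows
# reports through the same facility from day one.
logging.basicConfig(
    filename="application.log",
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)

log = logging.getLogger("MyModule")   # one named logger per module
log.debug("Module starting")          # visible in the log from the start
```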

 

HSE Logger Helper VI

 

Understanding

Do your own work

Absolutely the best advice I can give is to always start your debugging session by understanding the issue: do the hard work, and don't just believe what you are told. Resist the temptation to change anything until you fully understand the issue at hand. In my many years of fault-finding, the single biggest mistake I have seen (and made) is changing more than one thing at a time. Slow and methodical will win the day.

 

Organization

Organized / Everything in the right place

The key to lightening the load on the debugger's brain is organization and discipline.

 

Continuing with the kitchen analogy, a well-organized kitchen has things where you would expect them to be, e.g., based on how you use them, how often you use them, or what you use together. People who are good at debugging often have a sense of where to look faster than others, based on their mental model of the system.

 

One of the common issues I've found with systems that have got a bit out of control is that stuff is being done everywhere and anywhere. Stuff could be happening in the event structure, or in a queued message, or in some dynamic process. Having some rules about what happens where makes that code much easier to debug.

 

State Machine Example
I like state machines, so let's consider the organization of a state machine. One of the things I like best about state machines is the language.
 
StateMachine.png
Waiting for Token State
For example, with the state "Waiting for Token" we're being encouraged to put all the program logic required to detect a token in this part of the state machine. We could have a user event or a separate service that detects coins, but it's much easier to have the discipline of just waiting for the coin in the Waiting for Token state. If there's an issue with token detection, there's only one possible place to look.
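A minimal text sketch of that discipline (Python standing in for the LabVIEW diagram; state names and the token-detection stub are illustrative):

```python
from enum import Enum, auto

class State(Enum):
    WAITING_FOR_TOKEN = auto()
    DISPENSING = auto()
    DONE = auto()

def token_inserted():
    """Hypothetical hardware poll."""
    return True

state = State.WAITING_FOR_TOKEN
while state is not State.DONE:
    if state is State.WAITING_FOR_TOKEN:
        # The ONLY place in the program that detects tokens - if token
        # detection misbehaves, this is the one place to look.
        if token_inserted():
            state = State.DISPENSING
    elif state is State.DISPENSING:
        print("dispensing...")
        state = State.DONE
```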

 

Cohesion

I've talked about cohesion a lot in this blog, and the advantage of highly cohesive designs comes back to everything being in its expected place. Got a database issue? Check your database component/module. Something iffy about a reading? Check your DMM module. If you are looking in lots of places to fix an issue, it's a code smell for poor cohesion in your design.

 

It's actually a pretty nice exercise... Got a problem with XYZ? I should look in the XYZ module. Make sure XYZ is a tangible thing and you're onto a design winner.

 

Visualisation

One essential element of debugging is being able to visualise how your code is working.

 

Iteration Indicators

This is a very simple thing to do and I've found it really useful.

Vis1.png

Using iteration indicators to help visualise the system

Do this for every loop in the block diagram, group the indicators together, and you get a nice indication of loop times and of when a loop is complete.
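In text form the trick looks something like this (a Python sketch; the status dictionary stands in for the grouped front-panel indicators, and writing -1 on completion makes a stalled counter ("hung") distinguishable from a finished loop):

```python
import itertools

# Stands in for the grouped front-panel iteration indicators
loop_status = {"acquisition": 0, "logging": 0}

def acquisition_loop(stop_after=100):
    for i in itertools.count():
        loop_status["acquisition"] = i   # live indicator: still looping
        if i >= stop_after:
            break
    loop_status["acquisition"] = -1      # sentinel: loop finished cleanly

acquisition_loop()
# A stalled positive number means "hung"; -1 means "completed".
print(loop_status)
```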

 

Logging

It's really useful to log events, states, messages and communications in your system. Especially in distributed systems!

 

Logging.png

Simple State Logging
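A minimal sketch of that kind of state logging (Python; the file name and format are illustrative): timestamp every state so the log can reconstruct the system's history after the fact.

```python
import time

def log_state(state_name, logfile="states.log"):
    """Append a timestamped state entry - one line per state."""
    with open(logfile, "a") as f:
        f.write(f"{time.time():.3f}\t{state_name}\n")

log_state("Waiting for Token")
log_state("Dispensing")
```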

 

We often have an ugly debug tab hidden away; this screen can be made visible from configuration.

 

Debug Screen.png

Configurable Debug screen.

 

Block Diagram Documentation

Check out my article here for more info - Commenting I like

 

My experience with all documentation is that it needs to be easy to get to, easy to update, and locatable from the block diagram (if not on the block diagram). If all of those boxes are not ticked, documentation will eventually fall into disrepair.

 

Ease of Navigation

I've talked a lot about this in my various presentations on Immediacy.

Speed is essential here, IMO: the faster you can build a picture of what is happening, the more you can fit in your brain. I view the brain as a leaky bucket. Certainly my brain is....

 

Depth of Abstraction

Every extra layer of abstraction is a barrier to ease of debugging, so while abstraction is a very necessary design technique, I would think about refactoring if you are finding it difficult to locate the running VI.

Abstraction.png

Scope Hardware Abstraction Layer
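As a rough text analogue of such a layer (illustrative class names, not the actual SSDC code): the abstraction buys you swappable hardware, but each hop like acquire() -> Scope -> concrete class is one more place to look when debugging.

```python
from abc import ABC, abstractmethod

class Scope(ABC):
    """Abstract scope - callers see only this layer."""
    @abstractmethod
    def read_waveform(self) -> list[float]: ...

class SimulatedScope(Scope):
    def read_waveform(self) -> list[float]:
        return [0.0, 0.5, 1.0]   # canned data, handy for debugging

class BenchScope(Scope):
    def read_waveform(self) -> list[float]:
        raise NotImplementedError("real driver call goes here")

def acquire(scope: Scope) -> list[float]:
    # One extra hop: which read_waveform actually ran?
    return scope.read_waveform()

print(acquire(SimulatedScope()))
```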

 

Use terminals to help navigation

This is another simple thing that can really help your quality of life. Simply place the terminal of the control where it is doing the most work. Often my technique for going to an area of the block diagram is to right-click on the control on the front panel and select Find Terminal.

Nav1.png

Using terminals to help navigation

 

This is not an exhaustive list of things that we do purely to help with debugging; if I remember more I'll amend the article. I'm sure the comments will have other things that you lot use.

 

The next article will show our debugging process in all its glory.

 

All The Best

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Comments
Jacobson-ni
NI Employee (retired)

I've never thought to set the iteration count to -1 when the loop ends, but I'm going to have to start using that. When the iteration stops increasing I'll note that it's either hanging or the loop ended, but I never thought to make that easy to distinguish. You do lose the final iteration count, but I'm not sure that matters too often.

 

For logging the state, one thing I'll do when debugging FPGA code is just log the state transitions. Normally you only care about those anyway, and not that it may have sat in some idle state for 1,000 iterations. On a normal OS you can just use "Is Value Changed.vim". On the FPGA I'll also bundle some key state information as well, so I can get a sense of what was happening over time (so, for example, at iteration 1,000 we transitioned from Idle to Wait on Trigger, and the state information shows 1024 elements to read and the trigger condition set to rising edge).
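Roughly, in Python (a sketch of the idea, not FPGA code - "Is Value Changed.vim" has no text equivalent, so a module-level variable stands in):

```python
_last_state = None

def log_transition(iteration, state, info):
    """Log only when the state changes, bundling key state information."""
    global _last_state
    if state != _last_state:          # the Is Value Changed equivalent
        print(f"iteration {iteration}: -> {state} | {info}")
        _last_state = state

log_transition(999, "Idle", {})
log_transition(1000, "Wait on Trigger",
               {"elements_to_read": 1024, "trigger": "rising edge"})
# Only the transition at iteration 1,000 is logged.
```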

Matt J | National Instruments | CLA
swatts
Active Participant

Oh-Ho! I completely missed RT and FPGA....

Great advice, thanks Matt.

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Taggart
Trusted Enthusiast

One thing I always add right at the beginning, along with logging and error handling, is a manual hardware control screen. This allows the customer to toggle any output and read any input. This helps immensely in debugging wiring and hardware issues.
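As a bare-bones text sketch of such a screen (Python; the channel names and I/O stubs are made up - real code would drive the actual hardware):

```python
def read_input(name):
    """Stub for real input hardware."""
    return {"door_closed": True}.get(name, False)

outputs = {"valve_1": False, "lamp": False}   # stand-ins for real outputs

while True:
    cmd = input("toggle <output> | read <input> | quit: ").split()
    if not cmd or cmd[0] == "quit":
        break
    if len(cmd) == 2 and cmd[0] == "toggle" and cmd[1] in outputs:
        outputs[cmd[1]] = not outputs[cmd[1]]  # would drive the real output
        print(cmd[1], "->", outputs[cmd[1]])
    elif len(cmd) == 2 and cmd[0] == "read":
        print(cmd[1], "=", read_input(cmd[1]))
```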

Sam Taggart
CLA, CPI, CTD, LabVIEW Champion
DQMH Trusted Advisor
Read about my thoughts on Software Development at sasworkshops.com/blog
GCentral
Dhakkan
Member

The manual hardware control screen Sam referred to has been immensely valuable to my coworker and me for our typically medium-sized projects. Even if the customer does not require it, we embed a 'diagnostics' UI that is invoked through 'secret' buttons on the main UI, followed by a password challenge. This diagnostics view for each hardware component itself evolves from our initial scratchpad interactions.

 

Of late, I've been creating a public, overrideable 'testPanel' method VI for the hardware classes - abstract as well as concrete. The testPanel for an abstract class only implements interaction with the public/protected methods. The testPanel for a concrete class may have more controls/indicators that interact directly with that class's private VIs, say for raw byte communication.
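In class-hierarchy terms the idea looks roughly like this (a Python sketch with hypothetical names; in LabVIEW the testPanel would be a method VI with a front panel):

```python
from abc import ABC

class Instrument(ABC):
    def measure(self) -> float:       # public API
        return 0.0

    def test_panel(self):
        """Abstract-level panel: exercises the public methods only."""
        print("measure() ->", self.measure())

class SerialDMM(Instrument):
    def _raw_write(self, data: bytes):
        print("would write raw bytes:", data)   # private, device-specific

    def test_panel(self):
        super().test_panel()          # keep the public-API checks
        self._raw_write(b"*IDN?\n")   # plus concrete-only raw access

SerialDMM().test_panel()
```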

swatts
Active Participant

It's a great idea; we do it for some of our projects and it helps in loads of ways.

1) the customer can test some of their assumptions

2) and their wiring; this keeps the hardware development in step with the software development, rather than saving all the nasty surprises for the end.

3) it's a nice testbed for your hardware drivers.

 

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Jacobson-ni
NI Employee (retired)

It's also super helpful after deployment. It can be much easier to figure out why your tests have suddenly started failing after a year if you can just look at the signal going into the test system.

Matt J | National Instruments | CLA
Intaris
Proven Zealot

On FPGA, I have developed a list of "habits" that I try to keep to 100% because not doing so will pretty much always cost me way more time and energy than doing so.

 

1) Never ever use project-defined FIFOs, Registers, DMAs or the like DIRECTLY (i.e. named nodes) anywhere in my code, ever. Always create the "references" on the top-level diagram and pass them into code. It works the same, but allows for way more flexibility when testing. For example: a target-scoped FIFO "reference" with only a Write port is interchangeable with a target-to-host DMA channel "reference". This helps a LOT when debugging. The top-level VI decides, the sub-VI just does what it's told (see the sketch after this list). I've been told there's a politically incorrect term for this.

2) Where possible, use VI instantiated resources. Keep the definitions in code. I just do not like defining these things in the Project. DMA channels unfortunately cannot be instantiated, but practically everything else can.

3) Add "debug" pathways - ways to see what's going on inside the code while it's running - which, when not activated, will simply be optimised away.

4) I have a lot of data storage on FPGA, scalar and array, implemented as LVOOP. This way I can leave the abstract parent class in there and, without anything to read or write, large portions of code will be optimised away. This is a great "lazy" way of essentially switching off code portions which may be hindering compilation when optimising. I include Xilinx IP cores and I/O nodes in this. Practically everything which is hardware-specific is abstracted and can be switched out / disabled from the top-level diagram. This aids a lot in creating debug versions of code. The alternative (using conditional disables) is clunky and causes the source code to be in a perpetual state of change, which makes my beloved "interactive mode" debug option completely impossible. Rarely do I see "source code does not match the Bitfile" any more. LVOOP kind of side-steps this. Feels kind of hacky, but it works.

5) Modularity is key. Being able to run code in simulation mode is fantastic, but the scope of the code needs to be small. Making as many of my code portions on FPGA as possible modular (using the above tips) makes running portions of code at a time in simulation mode feasible.

6) Use the FLEX-RIO "Assert" VIs where appropriate. Asserting constant values where constants are required (code should be constant folded), or forcing certain conditions (value must be positive) are helpful. Catch these issues BEFORE the Xilinx compiler gets involved. It saves a lot of time. They apparently operate on the DFIR level. Considerably beyond my payscale, but would love to delve deeper. The things I could do.....
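A text analogue of habit 1, as promised above (Python; in effect it's dependency injection - all names are illustrative): the top level creates the resource and hands it in, so swapping a target-scoped FIFO for a DMA channel touches one place only.

```python
import queue

def processing_code(write_port):
    """Sub-VI analogue: writes to whatever 'reference' it was handed."""
    for sample in range(3):
        write_port.put(sample)

# The top-level "diagram" decides which resource backs the write port:
debug_fifo = queue.Queue()      # stand-in for a target-scoped FIFO
processing_code(debug_fifo)     # ...or pass a DMA-channel wrapper instead
print(list(debug_fifo.queue))   # inspect what was written while debugging
```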

 

To this day, I still find myself being tempted to "take shortcuts" when writing new FPGA code. And every single time, my "shortcut" ends up costing me even more time in debugging and searching for errors. Adding debugging code (in whatever form) from the start IS the shortcut. Nearly all of the habits I have developed on FPGA are related to keeping debugging as accessible as possible. Sure, it sometimes requires starting up a parallel compilation, and I don't have the resources free to keep all the debug pathways in code, but it does not require conditional disables, it allows for a module-by-module check AND it makes running in simulation mode (even in LV 2015 SP1) feasible. And when compile times are measured in hours, this is a real boon.

swatts
Active Participant

Oooh I love this!!!!!

"Adding debugging code (in whatever form) from the start IS the shortcut."

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

richardscheff
Member

I am just curious, is Debug Driven Design different than Test Driven Design?

And in what ways is one method better or worse than the other?

 

Most of us start out with a simple, not-fully-thought-out request. We build it, we show it off, and then additional use cases (changes) come. Does debug-driven help with that?

I write my software to help me find errors during execution, which I do not think is what you mean by debug driven design.

The challenge is always that a new error comes up, and you can't think of them all. I have an application I am developing right now, and because many tools go out for calibration we decided to autodetect the instruments connected. This worked fine, up until a broken wire prevented communication. I didn't have a check for 'no instrument': according to the front panel there is no instrument, but obviously there is an instrument. So I had to prove to myself that it was not a software bug first, and then check whether a different instrument or a different cable would work.

Or maybe in debug-driven design this would be a non-issue.

 

joerg.hampel
Active Participant

> is Debug Driven Design different than Test Driven Design?

> And in what ways is one method better or worse than the other?

 

Only speaking for myself, I think that DDD and TDD are slightly different approaches towards the same endgame ("no worries"). Neither one is better than the other, they are just two tools in your toolbox.

 

Many of our projects are rather short (i.e. weeks or a few months), with a high probability of not touching the code again once it's been commissioned. Getting a new piece of software up and running ASAP is more important to us than catering to potential future changes, and we usually work with limited resources, so we designed our libraries and templates in a way that makes it easy for us to pin down the source of a problem fast.

 

Two examples that come to mind:

- Our HSE-Logger is an integral part of all this; we make sure that it's the very first thing that is set up and working in a new project, and that all pieces of the code make use of it.

- Modular code also helps with isolating potential problem sources, so we make heavy use of DQMH (one of many ways to create modular code).




DSH Pragmatic Software Development Workshops (Fab, Steve, Brian and me)
Release Automation Tools for LabVIEW (CI/CD integration with LabVIEW)
HSE Discord Server (Discuss our free and commercial tools and services)
DQMH® (The Future of Team-Based LabVIEW Development)


swatts
Active Participant

Rather unsurprisingly, I agree with Joerg.

For my type of brain, the type of projects I work on, and the customers I have...

 

So generally DDD works better for me, but I can certainly see use cases and a good ROI for TDD. One of the somewhat hidden points of the article is that your design approach should be driven by what you deem important.

 

The one thing that TDD gives you is a record of working tests, which for some types of projects can be quite valuable. This takes more time and also takes maintenance, because an incorrect test is actually worse than no test at all (similar to documentation in that regard). For some types of projects and teams this record is not only advisable, it's essential. For quite a lot of bespoke systems it's not so important, and the time would be better spent functionally testing the system with the user.

 

In some ways they also complement each other; a good TDD system would probably lend itself to DDD, and vice versa.

 

Which is a very long-winded way to stick caveats all over the question "which one is best".

 

"It Depends...." is our mantra.

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Dhakkan
Member

I was re-reading the two articles related to this topic and thought of another habit I follow. Felt I'd share. (Yes, inspiration from Intaris' post above.)

 

I use two monitors - the primary one is my workspace for LabVIEW code development. The secondary one makes the debug-driven development aspect useful for me through the wonders of human peripheral vision.

 

I keep two windows open in this secondary monitor: LabVIEW's error list window and my git tool. As I'm coding, I perceive the error list window refreshing. At some point I expect no broken VIs in my project, so a grey blob in the general direction of this error window makes me realize something is off. Similarly, the git tool displays files that have been modified in my project. Here I have to focus on the window, albeit periodically, to ensure that the changes it detects are as expected.

 

swatts
Active Participant

Makes me think of one of the original computers: they tuned a radio to a certain frequency and could hear when the computer went wrong.

I also remember helium leak checkers used to have an audio output that went higher the smaller the leak....

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Taggart
Trusted Enthusiast

@swatts wrote:
Because an incorrect test is actually worse than no test at all (similar to documentation in that regard). 

What do you mean by incorrect test?  Do you mean a false positive or negative? 

Sam Taggart
CLA, CPI, CTD, LabVIEW Champion
DQMH Trusted Advisor
Read about my thoughts on Software Development at sasworkshops.com/blog
GCentral
swatts
Active Participant

Anything that convinces stakeholders that a module is correct without said stakeholders fully understanding the test parameters, coverage etc.

I think false confidence might be the biggest danger: when code is handed over to other people, all they see is PASS and that's the job done....

Which is a long way of saying false positive.

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Taggart
Trusted Enthusiast

I think it is rare for a set of tests to give you 100% confidence. (There is such a thing as provably correct code, but I don't think most of us are writing that: https://www.infoq.com/news/2015/05/provably-correct-software/)

 

What it should give you is a warm fuzzy feeling that the code appears to work. If I run the tests, I know that at least for all the conditions I am testing, the code behaves appropriately. Now if it is an inherited code base and I didn't write the tests, I am trusting that the tests actually test something useful. It should also give you regression testing: make a change and run them again and it's highly likely you didn't break anything.

 

Obviously, you can have tests that don't really test anything and always pass, but what is the alternative in that case? How do you tell if the tests are "correct"? Measuring code coverage is nice, but doesn't say anything too meaningful. Neither does the number of tests.

For documentation, to avoid stale documentation you can auto-generate documentation a la Antidoc. Is there something similar for testing?

 

Maybe something for tests that autogenerates a set of input vectors? Or that checks your test to make sure you are accounting for them? I.e. it sees you have a double input, so it says check 0, NaN, +/-Inf, etc. Of course, it couldn't account for every important input condition, but maybe it could give you a baseline.

As I'm writing this, mutation testing comes to mind. You take something that is supposedly reasonably covered by unit tests. You start randomly injecting faults (changing constants or flipping logic around: swapping select statements, flipping ANDs with ORs, inverting boolean signals, swapping cases in case structures, etc.) one at a time. For each fault, you run the tests, and some test, somewhere, should fail. If not, then you are not testing that path.

It seems like mutation testing would be a way to have good confidence in your tests, no? More useful than code coverage.
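A toy illustration of the idea (Python, everything hypothetical; real mutation-testing tools automate the fault injection):

```python
def clamp(x, lo, hi):
    """Unit under test."""
    return max(lo, min(x, hi))

def mutant_clamp(x, lo, hi):
    return max(lo, max(x, hi))    # injected fault: min flipped to max

def suite(fn):
    """The unit tests, parameterised over the implementation."""
    return fn(5, 0, 10) == 5 and fn(-1, 0, 10) == 0 and fn(99, 0, 10) == 10

assert suite(clamp)               # the original passes
assert not suite(mutant_clamp)    # a good suite "kills" the mutant
```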

Sam Taggart
CLA, CPI, CTD, LabVIEW Champion
DQMH Trusted Advisor
Read about my thoughts on Software Development at sasworkshops.com/blog
GCentral
Taggart
Trusted Enthusiast

https://www.techtarget.com/searchitoperations/definition/mutation-testing#:~:text=Mutation%20testing....

Sam Taggart
CLA, CPI, CTD, LabVIEW Champion
DQMH Trusted Advisor
Read about my thoughts on Software Development at sasworkshops.com/blog
GCentral
swatts
Active Participant

The project I am working on now has grown beyond the scope of a bespoke project into a large-scale system. I think it would now benefit from a greatly improved testing regime. It would definitely benefit from unit tests, mainly for regression testing. It would also benefit from more extensive functional testing.

The other aspect of TDD I like is that the tests can provide usage documentation as a benefit. For things like drivers and APIs this is really useful.

 

My plan is to write a set of functional tests as a test script, and then some unit testing for some of the APIs (hardware, databases, reporting and communications, perhaps).

 

But I find that some of the more rigorous methods of writing software end up being self-defeating tho' (i.e. they make the process of writing software so insufferable that it only attracts people of a certain brain type).

 

My brain is much more attuned to debugging than it is to testing. It would be excellent if I could find someone who was more attuned to testing so they could write my tests for me!

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Taggart
Trusted Enthusiast

@swatts wrote:

The other aspect of TDD I like is that the tests can provide usage documentation as a benefit. For things like drivers and APIs this is really useful.
I think this is the part where you were talking about tests being potentially misleading, like documentation. I actually find tests better than typical documentation in that the tests can't lie.

If I run the test and it passes, then whatever the test says it does is exactly what it does. If you change the code but forget to update the test, the test will fail (assuming you didn't write a test that just always passes). All I have to do is run the test to verify that yes, that is exactly what the code does.

Documentation is not so truthful. Change the code and forget to change the docs and, unless they are autogenerated, they now lie.

Sam Taggart
CLA, CPI, CTD, LabVIEW Champion
DQMH Trusted Advisor
Read about my thoughts on Software Development at sasworkshops.com/blog
GCentral
James_McN
Active Participant

@swatts wrote:

My brain is much more attuned to debugging than it is to testing.

 


I'll be interested to hear about your experience if you do start trying some unit tests on that project, as I would likely have said the same when I started. Given my time in tech support, I became wired that way.

 

I think you may find that unit testing supports debugging nicely. For a given problem captured by a test, you can debug in an incredibly focused manner within the scope of that test. You can rerun or create new scenarios for that piece of code in seconds. In fact, it can feel like one big debugging session.

 

I liken the feeling to the immediacy that you describe. I really think of TDD as being what many already do in LabVIEW - running a VI from the front panel - just recording those results so they can be run automatically. (Attempting to make this workflow easier was about the only thing NI's UTF got right, IMHO.)

 

It also means that if you're debugging an issue not captured by the tests, you can be fairly sure the problem is that the code is not being called the way it is in the tests - which greatly speeds up debugging in my experience.

James Mc
========
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Dhakkan
Member

If I have understood Test Driven Development correctly, the general order of development is:

  1. Prepare the interface(s) for the module under test (VI, LV Class, etc.); i.e. inputs and outputs.
  2. Prepare the test harnesses and suite. (Stubs, input values and matching expected output values, etc.)
  3. Loop until all tests in suite pass
    1. Run test suite.
    2. Write code within unit under test.

Therefore, regardless of whether one is building a new code module or enhancing an existing one, the step of preparing the test harness and suite occurs before the code within the module under test is itself touched. Hence the 'Driven' part of TDD. Herein lies the challenge for the TDD practitioner: what stubs, inputs and expected values are necessary and sufficient for the required degree of confidence; and what is the necessary and sufficient degree of confidence required for a given module?
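For illustration, one minimal round of that loop in text form (Python, with a made-up unit under test): the test exists before the body is written, and the body is written until the test passes.

```python
def test_celsius_to_fahrenheit():
    # Step 2: written BEFORE the implementation below
    assert celsius_to_fahrenheit(0) == 32
    assert celsius_to_fahrenheit(100) == 212

def celsius_to_fahrenheit(c):
    # Step 3.2: written (and rewritten) until the test passes
    return c * 9 / 5 + 32

test_celsius_to_fahrenheit()      # step 3.1: run the suite
```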

 

I would argue that a well-thought-out and well-executed TDD approach will preclude false positives. (But hey, humans are not infallible.)