
Random Ramblings on LabVIEW Design


Re: Rapid Modification and Debugging

swatts
Active Participant

If you saw me giving Fab's presentation "How to Polish Your Software and Development Process to Wow Your End Users", I'd like to apologise for making you uncomfortable. Ah, who am I kidding, I found it very funny indeed! To explain: I stopped for 30 seconds just before talking about splash screens, causing lots of uncomfortable seat shuffling and worried-looking attendees. I had a very good day; thanks to all who worked so hard.

In the last blog I laid out that you will likely be hit with changes at the most inconvenient time in a project, i.e. at the end. This got me thinking about one of the key benefits of visual programming: Rapid Modification. Some call this debugging, and Glass rather directly (and accurately) calls the process error removal. The trouble with both of these terms is that they carry the taint of failure. So if you are debugging, you must be an idiot for putting bugs in in the first place!! Error removal? WHAT, I DON'T PAY $xxx/hour FOR YOU TO PUT ERRORS IN! I think you know where I'm coming from.

With a software project the standard view is that maintenance is a significant part of the cost (40%-80% in various studies, with 60% a typical figure), and the majority of this cost is not error removal but adding new capabilities. Adding new capabilities through rapid modification is a positive thing: successful software gets modified.

So a more grown-up view is that modification at the end of the project is normal, and if, like us, you classify maintenance as the phase that begins the moment you deliver the software to the customer, that 60% doesn't sound so extreme. In short, this is a very, very important part of the software process, and making it easy for ourselves will pay big dividends.

What is the debugging process?


  1. Reproduce... observe the process, understand the problem
  2. Diagnose... visualise the process
  3. Fix... introduce a change
  4. Reflect... test the change

As I mentioned earlier LabVIEW has some help to give us and this is one of its fundamental advantages.

The block diagram is an enormously powerful visualisation tool, cherish it please.

Probes, breakpoints, and execution tracing are all helpful.

Searching (please improve it, NI); even with its limitations it's sooo useful.

We can also help ourselves; here are some techniques to use...

Reading Before Writing...

Harlan Mills had it right: software should primarily be designed to be read. The decision to sacrifice readability is an expensive one (not necessarily for the original developer, but definitely for anyone else involved in the process).

Labeling Loops

Labeling Cases

Bookmarks

Type Defs

Enumerated Types

Putting the control in the event case rather than leaving it floating on the diagram; that way the event is only a click away.

"Comprehension is the most important factor in doing maintenance"

— Ned Chapin, 1983

Design with maintenance in mind....

Part of the design process is to think about how our design affects the various aspects of the project lifecycle.

  • Scalable
  • Modular
  • Reusable
  • Extensible
  • Simple

The SMoRE acronym gets used quite a lot; the trick is to concentrate your design efforts where you get maximum benefit. I have removed Extensibility from my projects because it was pushing the hierarchy too deep, and this was hurting Simplicity and Reusability.

Cohesion

Encapsulation

Keeping hierarchy as shallow as possible, you don't want to be trekking through layers and layers of abstraction to find the logical decision that tackles the problem. At my age I often walk into a room to do something and then completely forget what it is. Now, if Watts Towers had 25 rooms that I had to walk through first, it is very likely I would be in this situation more often! For all my facetiousness, this is a very important point: I want to get rapidly to the point of logical decision, and this process has to flow. Anything that inhibits this fluidity makes visualisation harder and me grumpier.

Another important facet of visualisation is getting the solution to represent the problem; this relatedness is very important to simplifying your code. This is why I'm wary of all this talk of patterns. I fear the day when I come to sort out a job and I'm presented with Factory Patterns, Singletons, Facades, etc., when all I'm looking for is the bit telling me what the software is doing and when. All these things have their place, but if they hinder debugging they are costly indeed.

Use searchable structures like Controls and subVIs. The Constant VI is a very good example of this technique.

[Image: ConstantVI.png]
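For readers more at home in text languages, the Constant VI idea is roughly analogous to wrapping a value in a function rather than exposing a global. This is a hedged sketch in Python with hypothetical names, not anything from the original post:

```python
# Constant "VI" as a function: every use is a searchable call site,
# and the body can later change without touching any caller.
def sample_rate_hz() -> float:
    """Return the fixed acquisition sample rate."""
    return 1000.0

# Contrast with a module-level global, which any caller could overwrite:
SAMPLE_RATE_HZ = 1000.0  # nothing stops someone writing SAMPLE_RATE_HZ = 0
```

The function form gives you exactly what Steve describes: a single, findable place where the value lives.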

Exercise some restraint when using dynamic loading and dynamic dispatch, especially where the dispatched VIs change the functional decisions of the software. In my experience dynamic dispatch can simplify a block diagram if used correctly, but if the method changes the function of the software it can act as a block to visualisation.

I might do some funky graphics stuff in my next article.

Hugs, Kisses and Psychedelic Shirts to you all

Steve



Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Comments
FabiolaDelaCueva
Active Participant

Steve,

Great article as usual, and I am sure you did a great job with the presentation at NI Days.

One thing I would add to the step "Reflect ... Test the change" would be to save the test when possible. I use unit testing, either with the National Instruments Unit Test Framework (included with Developer Suite) or JKI VI Tester (a free add-on you can download via VIPM).

When a modification is made late in the project, I can run my battery of tests to verify that a change in corner A does not impact any other part of the program. I have seen that even in applications that are not tightly coupled, a change late in the game can from time to time have consequences in corners of the code that were long since finalized and closed. We all make small VIs to test our VIs, or try a couple of input values and verify that the output is what we expected; the trick is to not throw those away, but instead create a unit test and save it for later. When a bug is found that the unit tests did not catch, the tests are strengthened.
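Fab's habit of keeping throwaway checks as permanent tests translates directly to text languages. A minimal sketch using Python's unittest (the VI under test and its values are hypothetical; the original refers to the NI Unit Test Framework or JKI VI Tester):

```python
import unittest

def scale_reading(raw: float, gain: float = 2.0, offset: float = 0.5) -> float:
    """Stand-in for a VI under test: scale a raw sensor reading."""
    return raw * gain + offset

class TestScaleReading(unittest.TestCase):
    # The ad-hoc checks we would normally try once and discard,
    # saved so the whole battery can rerun after any late change.
    def test_zero_input(self):
        self.assertEqual(scale_reading(0.0), 0.5)

    def test_unit_input(self):
        self.assertEqual(scale_reading(1.0), 2.5)
```

Run with `python -m unittest` after every late modification; a failure immediately flags the corner of the code the change disturbed.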

Having these tests has saved me lots of time, made it possible to make changes late in the game, and lessened the chances of something going wrong.

I agree 100% on focusing on making our code readable; we spend more time reading code than writing it. I also label my constants, because while you are in the middle of the project they might make a lot of sense, but a couple of months from now they might not.

Last week I was at a customer site showing them the Constant VI as a replacement for a global variable holding values that won't change and are only read, never written. At first they could not see the advantage of using a VI instead of the global variable, until I explained that the VI has a block diagram: if you later decide to read those values from a database or from a configuration file, you now have a block diagram where you can make that change. Also, if you make your Constant VI part of a library, you can make its scope private and use it only inside the library. And there is never the temptation to write to it like there is with a global variable.
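Fab's upgrade path can be sketched the same way: because the "constant" has a body, its source can change without touching callers. A hypothetical Python sketch (the file name and key are illustrative only):

```python
import json
import os

def timeout_s() -> float:
    """Began life as a plain constant (`return 5.0`); later upgraded to
    read a configuration file, with the old constant kept as a fallback.
    Callers never change."""
    path = "app_config.json"  # hypothetical configuration file
    if os.path.exists(path):
        with open(path) as f:
            return float(json.load(f)["timeout_s"])
    return 5.0
```

This is the advantage over a bare global: the read path has a place where logic can live.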

Thanks, and please keep these articles coming! I love to read them and see that there is a method to our madness.

Regards,

Fab

For an opportunity to learn from experienced developers / entrepreneurs (Steve, Joerg, and Brian amongst them):
Check out DSH Pragmatic Software Development Workshop!

DQMH Lead Architect * DQMH Trusted Advisor * Certified LabVIEW Architect * Certified LabVIEW Embedded Developer * Certified Professional Instructor * LabVIEW Champion * Code Janitor

Have you been nice to future you?
ohiofudu
Member

Thanks Steve,

Readable Code = Use subVIs + Documentation + API - (Spaghetti Code) + Good pattern choice.

Much Love

Certified LabVIEW Architect
Certified TestStand Architect
GregPayne
Member

Thanks Steve,

Great article to start the week.

drjdpowell
Trusted Enthusiast

Sorry for the late comment; I think I missed this blog post when it came out.

Keeping hierarchy as shallow as possible, you don't want to be trekking through layers and layers of abstraction to find the logical decision that tackles the problem.

The point of "abstraction layers" is to abstract away the details.  If you're digging through layer after layer, then those abstraction layers have failed or were never really abstracting anything in the first place, and are thus just getting in the way.  A good abstraction layer is one where "the logical decision that tackles the problem" doesn't cross (in either direction: one's "query a database" code doesn't care about details of the IP packets sent; while the TCP/IP code doesn't care what SQL you're sending).
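James's point, that with a good layer the logical decision never crosses the boundary, could be sketched like this (hypothetical names, a toy analogy of the SQL/TCP example rather than real networking code):

```python
# Two layers where the "logical decision" never crosses the boundary.
class Transport:
    """Knows how bytes travel; knows nothing about SQL."""
    def send(self, payload: bytes) -> bytes:
        # Packet/retry details would live here, and only here.
        return b"ok:" + payload

class Database:
    """Knows what to ask; knows nothing about packets."""
    def __init__(self, transport: Transport) -> None:
        self._transport = transport

    def query(self, sql: str) -> bytes:
        # The decision (which SQL to send) stays on this side of the wall.
        return self._transport.send(sql.encode())

db = Database(Transport())
print(db.query("SELECT 1"))
```

When debugging a query you never open Transport, and when debugging the wire you never read SQL; that is the test of whether a layer is abstracting anything.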

-- James

swatts
Active Participant

The point I'm trying to put across (and an area of design I'm building up to) is that you can add excellent design schemes to everything, but sometimes they get in the way. In this case the argument for a configuration abstraction layer is pretty good from a design perspective, but in practice the extra baggage got in the way. I have a feeling it is something to do with the extra depth in the hierarchy, but I'm only really using my own brain and irritation levels as a calibration. The feeling I was getting was that when something needed debugging I was losing track of where I was, and this made it flow poorly. When I then made the decision to concentrate on just one method of configuration, it made the software easier and the debugging process simpler.

It was interesting to see the effect one design decision had on the overall maintainability of the software and, as this topic is on debugging, on how easy it is to get things up and running again.

We have a configuration interface, so the internals are hidden away and can be modified so it's not a major problem to change it if required.

I've used abstraction layers for hardware and connection schemes and they work lovely; in this case it didn't help. We also tried a communications abstraction layer many years ago and I didn't get on with that either. The interesting question, therefore, is why some help and some don't. I'll think about this more and see if I can dig out the difference; I think it is something to do with how abstract the abstraction layer is with regard to the system, and also with hierarchy depth and how long it takes to get to the logical decision.

This is the stuff I love!

Happy New Year James

Steve



drjdpowell
Trusted Enthusiast

Is the "abstraction layer" you're discussing a specific design pattern, or something more generic, as I'm using the term? If your "configuration interface" hides its internals, then it is an abstraction layer to me. "Abstraction" is a mental simplification; if one can use your interface without having to understand its internal details, then it's a useful abstraction. If not, then hiding the internals is just getting in your way.

-- James

drjdpowell
Trusted Enthusiast

How's this as an analogy?

Abstraction layers are like walls in your house. If you use them to organise cohesive rooms that you can be productive in (like "the kitchen") then they are valuable. But if you divide your house into a hundred different rooms, and you can't get anything done without passing back and forth between several of them (get the bread from the breadroom and take it to the toasteroom, then go visit the butteroom), then you are just making things difficult.

That's why I responded to your writing about "trekking through layers and layers of abstraction to find the logical decision that tackles the problem".  Why are you wandering all over your house to make toast?

swatts
Active Participant

Very nice analogy James. I like it very much.

Essentially I was debugging a new configuration on a vanilla PC (fighting versions of ODBC, 64-bit Windows, etc.). Getting fed up with this (it had worked fine until the 32-bit/64-bit cross-over), and inspired by your SQL demo at CSLUG, I added another class for direct SQL (i.e. not using ODBC). We then reviewed the finished design and decided we only needed one method of configuration, not a Configuration Abstraction Layer. This got me thinking about taking things out, and the design implications thereof.

Are you getting email notifications of these replies? (Because I'm not!)

Steve



drjdpowell
Trusted Enthusiast

Just got an email notification.

I strongly agree that one should be biased against adding a lot of "abstraction layers", but that is because coming up with a good abstraction layer is hard, and really requires experience with the specific problem. It's worth doing, partly because a good abstraction layer is practically synonymous with a good code-reuse package. But you need to have already solved similar problems more than once before you can really see what a good abstraction is.