Random Ramblings on LabVIEW Design


Re: What's the ROI of all this Process Stuff?

Active Participant

Wotcha My Spiffing Chums,

NI Employee of the month (awarded by me) Peter Horn emailed me an interesting question the other day. It was pertaining to the LabVIEW Center of Excellence and associated software processes.


To quote "I was wondering if you’ve ever measured the impact of implementing these processes – either with your own development work or customers you’ve been working with? Everything I’ve found online so far makes sense, but figuring out how to apply it to a specific account isn’t making sense to me just yet."


Now that is an interesting question.......


I feel SSDC has taken a lead in this space over the last 20 years, so it's a little difficult to make a sensible before-and-after comparison of the benefits. That got me thinking about Risk Registers. These are things you need to fill out as part of ISO 9001:2015. What if we look at the main points of the CoE assessment as risk reduction techniques, and ask what risks we're trying to mitigate? This is very much a work in progress, so feel free to chip in, friends.

Here's the spreadsheet of the register with the points allocated for out of control projects.


You grade Probability, Impact and Control each from 1 (lowest) to 5 (highest) to get an overall score. Any score over 37 needs urgent attention.
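The post doesn't spell out how the three grades combine, but the later "reduce the risk from 100 to 40" example fits multiplying them (5 × 5 × 4 = 100), so that's the assumption in this minimal Python sketch. The function names and the threshold check are mine, purely illustrative:

```python
# Sketch of the register scoring described above: each risk gets a
# 1 (lowest) to 5 (highest) grade for Probability, Impact and Control.
# ASSUMPTION: the overall score is the product of the three grades.
URGENT_THRESHOLD = 37  # "any score over 37 needs urgent attention"

def risk_score(probability: int, impact: int, control: int) -> int:
    """Multiply the three 1-5 grades to get an overall risk score."""
    for grade in (probability, impact, control):
        if not 1 <= grade <= 5:
            raise ValueError("grades must be between 1 and 5")
    return probability * impact * control

def needs_urgent_attention(score: int) -> bool:
    """Flag scores over the register's urgent-attention threshold."""
    return score > URGENT_THRESHOLD

# The requirements-change risk before and after applying a process:
before = risk_score(5, 5, 4)   # out of control
after = risk_score(5, 4, 2)    # process in place, impact designed down
```

Under this reading, `before` comes out at 100 and `after` at 40, matching the worked example later in the post.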

Let's take a look in detail (I think the wording of some of these is poor, but feedback is more important than perfection)


Requirements changes negatively impact a project

Really high probability of this happening, Really, really high!

From the CoE assessment document the mitigants for this are......


  • Process for gathering requirements
  • Process for tracking and updating requirements
  • Process for changing requirements

I'd put all of these under "the team has a process for managing requirements". All these can really do is reduce the Control score. To reduce the Impact score we're talking about Design: designing with requirements changes in mind.

By applying these techniques you can reduce the risk from 100 to 40 fairly easily; taking control of your design will reduce it to 20.


Poor design decisions affect the performance of the project

The truth is a bit more extreme in my experience, poor design decisions often kill a project. The ROI of better design decisions is therefore somewhere between 0 and All of the Project Costs!

So we can't really reduce the Impact much, but we can improve the Probability and Control, and you do this through Design Reviews (and training, and templates).


Poor coding practices are allowed unchecked

Similar to design, poor coding practices affect things like maintenance, team support, quality and reputation, and this can go unchecked for months or even years. Having a mechanism for peer code reviews states up front that you will be looking at each other's code from the off. Just that is valuable.


Lack of code reuse affects competitiveness and quality

I struggled with scoring this one, but for some businesses it's vital, so the scoring is relative to the use case.


Poor code management affects project delivery

Poor code management includes not knowing where your code is, not knowing what version it is. Impact is sending broken, untested or wrong versions of the code to the customer.

Plenty of tools are available to improve the Probability and Control of this.


Tests not applied to ensure customer satisfaction and adherence to specs

Records of tests, test stubs, a test plan and unit tests are all ways to reduce the probability of sending crap to the customer. And the cost of sending crap out is not only impactful on your reputation but also awful for team morale. Cost that one, management types!


Poor training affects competitiveness, employee retention (this is also related to poor coding practices).

If you don't invest in your good staff they'll probably go and play somewhere else, and that's expensive: think 120% more expensive than retaining them. It also affects code quality, and one would assume it improves productivity somewhat.


Inefficient Deployment and Distribution makes coding as a team costly, projects late

Often a project needs to be scalable: you will need more than one person working on it at a time, or maybe you just need some temporary help. For us this is related to code management, as we tend to deal with source code rather than deployments. But when dealing with deployments there are plenty of tools and techniques to mitigate the risks. Automating this is an easy thing to cost as an ROI and can also change the way you deliver code. This is especially true when dealing with remote customers, as I learnt from Fabiola's presentation at the eCLA Summit this year (Technical Debt vs Technical Wealth).


So what are your thoughts gang? Is this a reasonable way to present tangible benefits? Anyone think of anything better?

Lots of Love



Excellent work as usual Steve! Thanks for putting the time and effort into that.


The main reason I'm thinking about this is for one of the accounts I'm working on the LV CoE with. They're interested in ways to measure the impact of the investment: back at the start of the process they asked management to invest in them, and now they want to demonstrate ROI to gain continued investment.


There is still a lot of work to do on this, but I like the approach of looking at risk registers. Using those to create before-and-after numbers should allow us to produce some high-level stats (which will always carry a lot of error/uncertainty, but there's no way around that).


I'll be very interested to see other thoughts and comments on this post 🙂


Oh yes, and thanks for the Employee of the Month award - do I get a trophy or something? 🙂

Active Participant

Really interesting discussion. I've tried to set myself up to measure the impact of these, focused on my goal, which is efficiency (though some things may be easier to measure, like staff turnover and happiness).


Because you are measuring risks, I think you have to do it over a large sample set. I don't have enough projects, so I attempt to measure at the task level. This is still a work in progress, but I split out tasks such as new features, bugs and tech-debt fixes. I also track whether bugs were found during development, unit testing, system testing or customer use, and whether they are software defects or requirements issues (I still call them all bugs, as I believe that's how customers see them, though I still debate this slightly).


I've not gotten very good at actually doing the analysis yet! But it should give me key indicators: better processes should reduce the bugs that reach customers and the time spent on bugs or tech debt. I've simply built this into my workflow with Planio, so the only additional work is remembering to track time against tasks.
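As a rough illustration of one indicator this kind of task-level tracking enables, here's a hypothetical Python sketch that tallies bugs by the phase they were first found in and works out what share escaped all the way to the customer. The phase names mirror the ones listed above; the function and data are made up for illustration:

```python
# Sketch: tally logged bugs by discovery phase, then compute the
# "escape rate" - the fraction first found by the customer. A lower
# escape rate over time is one signal that the process is working.
from collections import Counter

PHASES = ("development", "unit testing", "system testing", "customer use")

def escape_rate(bugs_found_in: list) -> float:
    """Fraction of logged bugs that were first found in customer use."""
    counts = Counter(bugs_found_in)
    total = sum(counts[phase] for phase in PHASES)
    return counts["customer use"] / total if total else 0.0

# Illustrative log: ten bugs, one of which reached the customer.
bugs = ["development"] * 6 + ["unit testing"] * 3 + ["customer use"]
```

Tracking this per project (or per quarter) gives a before-and-after number to hang a process-improvement claim on, which is exactly the "key indicator" idea above.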


I basically started this for the same reason, if I'm implementing new processes, how do I know it's worth it! (Now gaming all of this is another discussion!)

James Mc
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Active Participant

At one place we worked we gave the employee of the month award to the manager that suggested it. We then put his desk on a plinth, and covered it in glitter and balloons....the idea was quietly dropped after that!

I don't think Risk Registers are the perfect way to do this; the perfect way is to do two identical projects, one with no control and one with.

You would also need to apply intellectually similar teams, keeping one team in a career bubble so they're not exposed to any process at all. I have actually worked for companies that would qualify!

So while not perfect at least it gets the conversation going....

Active Participant

@James: Over the years I have tried to measure the process; it becomes so difficult because, project to project, it's just not linear or repeatable in any fashion. A project has an inherent level of difficulty, which is added to by external factors. So a project of difficulty x will be x+1 for a customer with good communications and engagement, and x+5 for one that doesn't talk to you or even want you there. And that's only one variable when comparing similar projects. I would suggest there are more than 10 variables that add to the inherent difficulty of a project with a tiny team, and you can multiply this as teams get bigger and management layers get involved.

All our projects are extremely variable and generally that is why we charge the big bucks!


Reading it back, this looks like a set of reasons not to measure, and that's not my point. Measuring is great; analysis will always be subjective IMO, but that is great too. Because even a small amount of company self-awareness is better than none at all.

Active Participant

> Because even a small amount of company self-awareness is better than none at all.



An opportunity to learn from experienced developers / entrepreneurs (Fab and Steve amongst them):
DSH Pragmatic Software Development Workshop @ NIDays Europe in Munich on 22nd November
Active Participant



Thanks for mentioning the presentation on Technical Wealth. It is very hard to get teams to think about investing in the future and to move away from the bad habits of focusing on technical debt and only dealing with the task in front of them. A typical excuse is: we don't have time for that! I can only think of the image of the guys pushing a cart on square wheels, telling the guy with the round wheels that they don't have time for that.


One way we have measured the return on investment is by auditing a team's process (or lack thereof) and their code. We measure how they are doing in different areas like source code control, deployment, unit testing, etc. We even look at the human aspect: do they communicate well, etc. At the end of the audit, we have a chart we call "The Delacor Thermometer". This gives a single visual image for the managers who are paying us to help teams implement processes and new tools.


We emphasize that not every single area needs to be addressed at once and recommend the areas we think will give them the largest ROI. We also show them where we think their thermometer should be in a year. We come back a year later, do another audit and measure whether they made it there.


James, you could measure how you are doing today with the things you are already tracking and set a goal for where you want to be in six months or a year. Then put that goal aside and continue to track your progress. Then come back later and see if you made it there.


Peter, it is hard to show ROI if the team is not already tracking things. One way is to ask them where their tasks and team goals fall within the company's goals. Ask the managers what their biggest issue is. If they say "the time it takes from requirements to a finished product", ask if they are already measuring that. From there, you can project how much time you think they will save if they follow processes. You don't have to go for a 50% reduction; maybe all they want is an accurate estimate of how long things take. Just having a measurement of how long things take might be enough ROI for them.


Like Steve says, every project, every team, every customer (internal/external) will be different.


There is no one size fits all; the ROI will be unique for each team.



Thanks for another excellent article and for starting, once again, a good conversation.




Get Going with G! at GCentral
DQMH Lead Architect * DQMH Trusted Advisor * Certified LabVIEW Architect * Certified LabVIEW Embedded Developer * Certified Professional Instructor * LabVIEW Champion * Code Janitor

Have you been nice to future you?


I realise that most of the previous comments have been regarding the actual programming side of things, so here’s my opinion on the ROI for the solution side of it.


To measure ROI the customer needs to specify what it is the investment is trying to achieve. If it is replacing an old system/process then measurements need to be made on the current system and the same ones taken on the new system.


I have recently worked on two projects that had very different requirements. The first was developing a very modular way of creating "lab tests" to carry out one-off tests for characterisation work. All the customer wanted was to quickly create a test (that the QA guys had pre-approved as valid), run it and get the results. They weren't interested in memory management, efficiency etc. It was all about the time it took to make the LabVIEW program to do the tests; more than an hour and there'd be complaints!


The second project was almost the opposite: this one was all about test time, not how long it took to create the program. They wanted high efficiency, low memory/processor usage etc. for tests that would run (in theory) uninterrupted for months and months. Every millisecond reduction in test time counted; if it took a week to create the program, that was insignificant compared with the time saved long term.


So defining exactly what you are comparing is important.


Great piece Steve, you should write more of these things...

Darren Mather CLA & Champion inusolutions.com alarchitects.org
Active Participant

Thanks Mr Mather,

It's a proper can of worms. I think having a number at least gives something that can be discussed and monitored, even if it is a bit arbitrary.

Active Participant

You certainly need some kind of metrics, otherwise it's hard to compare things and it becomes too subjective. You just have to be thoughtful about what you are measuring, because that also becomes what you are incentivizing (and, in the same way, implies what you are discouraging).

Sam Taggart
CLA, CPI, CTD, LabVIEW Champion
DQMH Trusted Advisor
Active Participant

I think it was purely return on investment for companies that have a budget for process improvement. So how do you measure improvements in productivity for something like software? If, as Daz points out, you invest time to make a framework that reduces the time to produce a new test, it's pretty tangible. If it's across a large company with numerous teams and projects, I think it's next to impossible to get a tangible number.


@Taggart, completely agree on that. The biggest issue we have at SSDC is that none of us is very disciplined about boring stuff like time-keeping. I'm planning on updating my project manager tool to allow some tracking; I've been planning it for years now!




I've taken the document and added a couple of equations (and colours, as there were too many numbers in columns) to try to track how each metric is progressing from previous to current and on towards its target. The equations are for the % improvement from the previous score to the current one, and for how far along they currently are from the previous score towards the target.
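The spreadsheet itself isn't shown here, so this is just one plausible reading of those two equations, sketched in Python; the function names and the exact formulas are my assumptions, not lifted from the document:

```python
# Hypothetical sketch of the two tracking equations described above:
# (1) percent improvement from the previous audit score to the current
#     one, and (2) how far along the previous-to-target gap you are.
def percent_improvement(previous: float, current: float) -> float:
    """% change from the previous score to the current one."""
    return (current - previous) / previous * 100.0

def progress_to_target(previous: float, current: float, target: float) -> float:
    """Fraction of the previous-to-target gap covered (1.0 = target met)."""
    if target == previous:
        return 1.0  # nothing to close, so count it as done
    return (current - previous) / (target - previous)

# e.g. a metric that was 2 at the last audit, is 3 now, aiming for 4:
# 50% better than before, and halfway to the target.
```

With those definitions, `percent_improvement(2, 3)` gives 50.0 and `progress_to_target(2, 3, 4)` gives 0.5, which is the kind of "previous / current / target" progress tracking the post describes.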


risk register.png





Active Participant

Nice work Pete

I think this would work well with some case studies too.