Most project failures are not failures of programming skill or technique. The blame lies pretty squarely with a failure of judgement when applied to risks. This is the blog form of my presentation for the eCLA Summit 2017 in Vienna, i.e. it's (words and pictures) - (messing around with the microphone and accidental sweariness).
As it seems like narcissism is the current way of behaving I thought it might be interesting to keep a diary for a couple of weeks. If you're ever thinking of starting a business like ours, it may be of interest, or if you are already running your own business I would be very interested in your counter-diary. I'm nosy like that.
This has not been a typical week as we've done a bit of travel, that said it's not really that atypical either.
Often SSDC is asked to conduct design reviews, and one area where these can be improved is by reviewing against a customer's use-case. With this in mind I'm trying to come up with a questionnaire that will help us to understand the project requirements better.
Hopefully this will be a useful document to help tease out the nuances, and maybe group-think will make it even more useful!
Only those on the path to programming enlightenment should read further. In the last couple of years I've changed my view of the world of programming, reinforced by my conversations here. I promise there will be no spiritual stuff here and very little mindfulness.
Welcome to my century of blogs, if you've been with me since #1 then color me impressed!
Pretty much every job we do has a tight deadline and we often turn work away because the deadline is ridiculous. For example, one of the jobs I'm working on at the moment has been underway for years, yet we were called in 2 months before the delivery date. In these cases there is often a genuine reason (non-delivery by a contractor).
Other times it's essentially because an accountant stipulates when a project needs paying. This is very prevalent in government work. Yesterday I asked facetiously if the accounts department would be using the software, because the tight deadline will affect the quality and they won't feel the impact.
Here's the thing, quality improves with time. If you remove time from a project the first sacrifice will be quality.
From an engineering perspective I've done 2 jobs that I think were exceptional. Before a £1,000,000 experiment I wrote a cRIO distributed DAQ system; the experiment was booked in for the Monday and we got the purchase order on the Thursday. In another job I had 6 weeks to design, build and deliver a high-current contactor test system. What characterized each system? Nobody remembers the ridiculous/miraculous engineering effort. Everybody notices the slightly clunky and limited functionality. The contactor tester has done its work for over 10 years now and it's embarrassed me for all 10 of them!
So here's the life lesson....
You will be remembered for your quality long after the missed deadline is forgotten; only sacrifice it if you really, really have to.
As this is article #100 I'd like to express my gratitude a bit.
Thanks to all of the contributors your comments have improved this blog far beyond my expectation.
The effort you put into educating, clarifying, questioning in response to my half-formed ideas is what makes it for me.
I'm probably going to go quiet over the next few months, business has got proper busy, we have 2 new ships to do, SingleShot has taken an interesting direction, I'm currently working on 7 projects and my mind is beginning to melt!
Here we are at the cusp of article 100, if you've endured all of them I thank you!
I've finished Peopleware and a damn good read it was too, I highly recommend it for programmers and managers, in fact anyone involved with a team.
As seems to be common practice these days I will now pick all the bits that align with my world view and ignore everything else!
One of the most important life lessons is that Software is a People-Oriented activity. There are many aspects to this and I'll skip through the ones that interest me.
Everyone is different (and that's a good thing)
This revelation has actually come from the discussions on this blog, so it's worth the admission price just for that.
The trouble with methodologies and processes is that they are designed by people who inherently like working in that fashion. This is then sold as the "best" way to work; well, what's best for me is not necessarily good for you. Here are some 50/50s:
Some people like comments in their code. Some hate it.
Some people like LabVIEW. Some hate it.
Some people like starting. Some like finishing.
Some like working to standards. Some find it restrictive.
Some like Unit Testing. Some don't.
We noisy people loudly say one way of doing something is the correct way; it only really means that it works for us and people similar to us.
One absolute rule I have observed is that there are people who moan and people who do; the Venn diagram of moaners and doers never seems to overlap!
Google spent a lot of time and energy and came to the conclusion that team members who look out for each other work better. This simple, cost-effective concept seems to elude groups of clever people in a fair proportion of the businesses I deal with.
For people who watch The Apprentice the classic macho-manager stomping about bullying people into submission seems to be desirable. Nice people are weak and in business you need to be STRONG!!! Sigh! This way of working is idiotic, destructive, old-fashioned and childish!
Here's the thing I am really unproductive if I feel annoyed or irritated. One simple way to improve productivity is to treat me nice!
Here's another secret: people do business with people they trust and like, especially where intellect and experience are the commodities being traded.
Behavior is Influential
One poisonous individual can destroy a team and damage related teams. These people who only see bad in others, the gossipers, the spreaders of discontent, the vampires of ego, need isolating or removing. Sacking these people is horrible, but not as horrible as keeping them on!
If they bubble to the top of a company they affect the mentality of the whole organization, sadly this happens more often than it should.
Here's the kicker, the work we do should be fun. The things that stop it being fun are people related. Spend effort on this and good things happen.
This article concentrates on a very simple rule we apply to all our projects, it's
Identify Risk and Tackle it First
The best way to emphasize this is to describe some common scenarios.
Scenario #1 Hardware
We quite often get projects where the hardware is purchased before even a schematic has been drawn. You are then presented with hardware you are unfamiliar with, for a use-case you barely understand. So the first thing we do is manually get the thing running. In fact one of our standard milestones is a manual hardware screen; with this you can test the wiring, start exercising the UUT (unit under test), and confront your customer with the real world.
Scenario #2 Badly Defined Requirements
Very rarely* one of our clients doesn't fully define their requirements accurately. Sometimes they don't fully understand how the system goes together, how the hardware works, or even how their UUTs work, so it's hardly surprising that they can't define their system's requirements. Once again we therefore have to confront them with the real world as quickly and cheaply as possible. To mitigate these risks we use prototype screens.
Scenario #3 Payment
For SSDC we deal with approximately 5 new clients a year, some are nice, some not so much. To mitigate financial risk in the UK you can do some rudimentary checks. Websites like companycheck, DueDil are very useful for some background checks. Talk to your suppliers. Generally we make sure our hardware liabilities are paid first, so pro-forma invoices are the way to go. Define your terms (30 Days Net) and stick to them!
The standard project management approach is to do a Risk Assessment and a matching Method Statement (RAMS); while we only do these on an informal basis (some companies require them to work in certain environments), it's not actually a bad idea to think about risks.
And you should never ever sit on them and hope they go away!
Lots of Love
* By very rarely I actually mean all the bloody time.
To save you some time in your busy schedules this article can be summed up in 8 words a comma and an exclamation mark.
Before you write any code, do some research!
See.. told you.
I'll now expand it just to fill the page with words.
For the lazy engineer there's nothing worse than discovering a toolkit after you've expended blood, sweat and tears making your own. You may not end up using it, but looking will save you time. I recall a conversation with an engineer who had spent 9 weeks creating a PIC relay controller that took a serial command and closed a relay... "um, you know you can buy these for $60, don't you...".
Take my tree control example found here; the Google search was "tree control database". This then leads on to the second part of research for a software engineer.
Learn another language
There's a lot of info and ideas outside of the world of LabVIEW so look outside the community.
As a LabVIEW programmer you will benefit from being multi-disciplinary. You can spend all your brain-power learning 100% LabVIEW and pride yourself on your knowledge of esoteric tools and techniques from the dark cobwebby corners of the LabVIEW IDE, or you can learn enough to be proficient, then learn how to design databases and enough Linux to host stuff.
The second option will allow you to tackle enterprise level solutions out in the real world.
If you use source code control (SCC) then kudos to you. This leads neatly on to version numbering.
I've seen a fair few ways that you can version your source code, and this is what we've settled on. Guess what? It's not very complicated.
For every unique deliverable item we require a version number.
For our main VI we have a version number and this is linked clearly (via the comments) to our versions in SCC. Clicking on the indicator will pop up the details.
Version info can be found in SCC comments and we also stick them in VI Documentation too. If they get too wordy we can link to another file or write a specific container.
On the block diagram we just use a version constant vi.
We use embedded exes and micro-services for some of our programs and these need versioning too.
Here we see that the bugreporting dialog is an exe called by the main VI and this one is on Beta Version 1:14.
For exes it's very useful to be able to query the version from the command line. Here's the code.
Using this we can send the command line bugreporting.exe --Version and it will respond on std_out with its version number. It won't actually load itself, as this is a dynamic calling VI, not the actual bugreporting screen. We use this for auto-updating.
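The LabVIEW diagram itself isn't reproduced here, but the same pattern can be sketched in text form. This is only a Python analogue, not our actual implementation; the --Version flag and the "Beta 1:14" version string come from the example above, everything else is made up for illustration:

```python
import sys

# Version constant: kept in one place and bumped only on release,
# mirroring the matching release comment/tag in SCC (not every commit).
VERSION = "Beta 1:14"

def handle_args(argv):
    """Return the version string if --Version was requested, else None."""
    if "--Version" in argv:
        return VERSION
    return None

if __name__ == "__main__":
    version = handle_args(sys.argv[1:])
    if version is not None:
        # Reply on std_out and exit without loading the main screen,
        # so an auto-updater can query the version cheaply.
        print(version)
        sys.exit(0)
    # ... otherwise carry on into normal application start-up ...
```

An updater can then run the exe with --Version, read std_out, and compare the result against the latest release before deciding whether to update.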
For Realtime systems we use similar tactics, but as well as something visible on the front panel, we also need to be able to query it remotely.
Here we query the rack info for a connected server and our version constant is front and centre.
In fact this applies to all embedded targets, for example we have PIC controllers that respond in a similar fashion.
Finally for FPGA targets we just embed a couple of controls on the front panel and these are then passed up to the host.
These are updated each time they are released to the outside world and not for every commit to SCC.
Here are some practices we've observed and DON'T do..... you may have a use case that these apply to.
I've seen CLA level work that individually version numbers each VI, I guess this is instead of a pukka version control system. Seems very counter-intuitive to me!
I don't use the individual revision history options as part of LabVIEW, to my mind it drowns you in information you don't need.
In short, I want to know what version of software is being used/reported against, and then be able to trace this back to an SCC release.
Keeping the output coming thick and fast, this article describes something I've been thinking about a fair bit in the last few months.
First of all let's describe the nuances here. I don't really like supporting software (it's boring) and supporting software with a customer breathing down your neck is significantly more difficult than writing it in the comfort of your office. Some of my worst programming experiences are when something doesn't work in front of the customer and you just don't know why.
A large part of repeat business is based on how well the software is supported. Support is judged on responsiveness; the important criterion is how quickly/cheaply you find a fault or add additional features and get the hell out of the way.
So the thing I've been saying recently is that we program slow so that we can debug fast. In fact our whole way of working is debug focused, and this suits us. Fair warning, it may not suit everyone.
We invest a lot of time in our code; look here for proof:
All of these are ways that we spend a great deal of time on our code, that has nothing to do with solving the problem and everything to do with making life easier down the line. In fact it could be thought of as technical overhead.
All of this work is us building our technical assets and paying down our technical debt.
So that when it matters our projects are easy to maintain. In fact all of this effort is so that we're not struggling in front of a customer!
If "Delivery is All" is our mantra, "Never Be Fearful" is something at the heart of the business we wanted to build... and it's really difficult to adhere to.
I don't really buy into Vision Statements or Mission Statements, but this really is something we discussed when we started up. So what does it mean in the real world?
1. It Allows us to be Generous
Throwing off the paranoia that business is cut-throat and people will steal your ideas and generally screw you over allows you to build healthy, strong business friendships. If you trust your customer to look out for your interests, you're in a wonderful position. Supplying source-code, designs etc (all of which we have been paid for) re-assures customers that the code will be supported in our absence. This allows us to chase after bigger projects than if we followed a more code-protected way of working.
As we build up our own IP it has become increasingly difficult to follow this through, but we go back to "Never Be Fearful".
We share everything, processes, source code, designs, and we have never lost work through it.
I've talked about this in the article linked above, an article I'm proud of (it only got 1 bloody "like", so I'm obviously in the minority).
2. Honesty
The route to a stress-free life is to tell the truth. Thinking about projects, all the stress comes from dishonest deadlines or promises that never had a chance of being kept. One thing I often say is that deadlines are fictional and have nothing to do with the engineering task in hand. If a job has 20 hours of work in it, telling me it's urgent will not make me work faster; I'm already going as fast as possible.
When I ask someone for a progress report the only useful response is an honest one. Saving face, being scared of the response to bad news etc etc are not helpful.
This is something we share with the best of our competitors too, I think it may be an engineering thing.
3. Taking Risks
Taking risks is good for your ego!
Our stories, our new skills, our favourite jobs are all closely linked to the risk associated with them. Every time we walk into a new industry, stand up to present in front of our peers, share code and design ideas we expand and increase ourselves. To mitigate these risks we plan, mixing risky jobs with easier ones.
So we may not have made shedloads of money, but we have had some great fun along the way and that's because we work without fear.
Here's the second in my series of short articles, each one will be about something that we've learnt over many years of writing industrial software.
It's all about delivery.
Strictly speaking this should be #1 as it is the foundation of everything we're about.
As is clear from the discussions here, we take a rather contrary and brutalist approach to LabVIEW and project management. The reason we have the confidence to take that stance is that we DELIVER. It's actually very difficult to argue against.
And before anyone says it, we don't just do easy projects. In fact a reasonable percentage are given to us because they have gone terribly wrong (we call them rescue jobs).
I guess you'll be more interested in what the secret to delivery is.
Projects can be tough. This week I started out with 2 projects and both had hardware that wasn't working. I could have thrown my hands up in the air and got frustrated, or I could knuckle down and solve the issues. By Friday everything was working and once again my Engineering Ego was recharged.
When a fixed price job goes wrong, the hourly rate reduces. A sound business case is to walk away from these jobs. That sucks! Try to avoid doing this.
One of the best aids to finishing a project is a checklist of work, it can be ticking off requirements, or tests or even open issues. If you find yourself stalling on a project, make a list of things to do and pick a couple of easy ones and a difficult one. One of the nice things about some of the formal Agile techniques is concentrating on a manageable list of tasks in a set period of time, it's very powerful.
Are you a starter or a finisher? Be honest: do you really love the start of the project, when all the fun bits like early design and customer interaction happen, or do you love the feeling of satisfaction you get from signing off a project?
I'm about here...
10 years back it would have been 80/20 in favour of starting, so perhaps finishing on projects improves with experience.
If you are a flakey artistic type it's likely you'll be good at starting, but poor at the more detail oriented tasks related to finishing. The cure? Find someone to do the finishing for you. This can be a colleague or even a customer. One of our customers has a vested interest in detailed testing of our code and he's really good at it. In my opinion we make a good team.
Within SSDC we support each other, so when a project becomes challenging we know that there is someone there to talk it through with. Sometimes it's just nice to have someone to share the responsibility.
I alluded to this earlier, and research has backed up that teams with a history of delivering will most likely continue to deliver.
So remember: people won't remember that your software is clever, your unit tests are well organised, your methodology is bang up to date, you're really highly qualified and you have fantastic documentation. But they will remember if you take a lot of money and don't deliver anything.
Welcome to my new series of short articles, each one will be about something that we've learnt over many years of writing industrial software.
Today it's all about project portability. Time spent making your project portable is time well spent. At SSDC we like to be able to download a project onto a Laptop, Virtual Machine or customer machine from our repository and for it to work.
Keep all referenced data and libraries relative.
Remove external dependencies.
Minimise use of Instr.lib and User.lib
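These rules aren't LabVIEW-specific. As an illustration of the "keep everything relative" idea, here's a minimal Python sketch; the folder names are invented for the example, not our real layout:

```python
from pathlib import Path

# Build every path off the project root instead of hard-coding
# absolute paths, so the checkout works unchanged on a laptop,
# a VM, or a customer machine.
def project_path(root, *parts):
    """Return a path inside the project directory."""
    return Path(root).joinpath(*parts)

# Hypothetical layout: the root is simply wherever the repository
# was exported, e.g. the desktop or somewhere under \User\Public.
root = Path(".").resolve()
config_file = project_path(root, "Config", "settings.ini")
data_dir = project_path(root, "Data")
```

Because everything hangs off the checkout root, moving the whole directory moves the project, which is exactly the property the portability test below checks for.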
For us the project is a directory that wraps around the LabVIEW project containing everything pertaining to the project, source code is only one part and our projects can contain multiple LabVIEW projects.
And because we are using the directory structure of the filing system we don't use virtual folders in our LabVIEW Project. I think that may make us bad people for some reason I can't fathom.
So here's our Project
The test for portability is one of our code review items and involves exporting the repository into a clean virtual machine (with LabVIEW loaded) and into different directories (usually the desktop and \User\Public\Something or other).
Doing this has saved us a lot of hassle over the years.
I know I said way back when that ideas are cheap until someone puts the work in (see here).
My gift to you (mostly NI, Alliance Partners, entrepreneurs and customers) is an idea I've put some work and risk into. It is also an idea I'm enthusiastic about.
The image above came from my NIWeek 2016 presentation on Shock Testing and was intended to describe the standard way organizations purchase large expensive distributed systems. Essentially you have 2 choices.
1. Purchase a turnkey system
2. Do it yourself (DIY)
Over a certain price range and system complexity, the risk of DIY becomes potentially career-limiting. From what I have observed it becomes harder for DIY vendors (like a certain Texas-based company) to sell these systems over a certain price point. So if, for example, the system consists of some PXI racks full of cards and the alternative supplier offers software or even just screens and a USB slot, the "full" system will always win. I've stuck "full" in quotes because the purchaser will always settle for 80% of their required functionality. As an example, for the system I have developed I've had over 5 enquiries, and each time they want it exactly the same (but with a few changes).
To add insult to injury from what I have seen, the hardware purchased is inferior and lots of compromises are made on the software.
Looking at the image again you can see that the turnkey system is controlled by the vendor and in my experience they are not that responsive to change requests and are quite prescriptive when making changes and removing support. Also they are prone to takeover and business refocusing.
In short the DIY vendor finds themselves at a considerable disadvantage, but actually the turnkey vendor is not providing a good solution either.
The ideal situation is that the hardware comes with software that is easily adaptable using an industry standard language, this software could be loaded from the language vendors app store. I'm talking LabVIEW and the tools network here for anyone still struggling to keep up.
I can't believe that my system is the only example here (Singleshot High Channel Count Oscilloscope). So what else? I dunno, RF, Spectral Analysis, we're looking at any large distributed system, synchronized readings, high value hardware, replacing existing bespoke distributed electronics. Thinking caps on people.
I thought long and hard about the licensing, because in the traditional world there is a value to this code. My conclusion is that I'm not a salesman, and my assumption is that customers will want changes, and who better to do these changes than the designers (we're talking high-value business here, in customer accounts that you would normally struggle to get access to). When you are in these large accounts you will find extra work; we always have. Also, I would never get into the companies I'm talking to without this, I probably wouldn't even get into the carpark!
So a standard BSD license, with liability restrictions.
Freeing up this bottleneck will also help your relationship with the hardware supplier, essentially your software is the enabler for selling their hardware. This improved relationship is mutually very beneficial.
This is a matter of trust, you are throwing your code to the world and trusting everyone to do the right thing. In the real world there should be little incentive for any of the parties to do anything but help each other. Obviously small mindedness, short term thinking and profiteering could cause issues. For me the worst bit is feeling disconnected from the code.
So far the experience has been really excellent. Our method has transferred to remote working.
In truth tho' it's actually not that much of a problem if you can get over the initial hurdle of being paid for development. In my case a long term relationship with an existing customer mitigated the risk of tackling the project.
What can NI (or another hardware vendor for that matter) do to mitigate the risk? They can have decent amounts of high value equipment available and maybe there is even a business case to provide a grant to pay for or contribute to the development. They will then need to resist the temptation to OWN the code, open-sourcing it will be better for everyone believe me.
A Different Type of Sales
You might be thinking that because I espouse ideas in writing and with some force that they are well thought through. This particular idea is pretty much me trying to make sense of it by writing about it (I'd say a fair few of the things I write about fall into this category!). So here goes.
I think some of the sales and marketing that alliance members/consultants and integrators do is misplaced (mostly this is aimed at SSDC). The last two large opportunities that we have successfully pursued have been where we have had something to sell, rather than where we have tried selling ourselves. In some ways I think this is also more useful/profitable than selling toolkits etc. into the LabVIEW community. So we are changing our model from one where we sell ourselves to solve any problem, to one where we have a solution and can adapt it quickly and efficiently.
Why Open Source?
Most systems integrators are more problem solver than entrepreneur. We negotiate a price to design some code and then we design it, for us we nearly always offer up the source code to the customer. Our experience so far is that the customer is keen to engage with us at the "Trusted Advisor"* level and it's a nice start to a relationship. Also they are freaked out by someone giving something away! This fits the Open Source model much closer than other models I've looked into.
I think this is an enormous opportunity for NI and system integrators. We just need to identify the gaps in the market.
Lots of Love
* for more on "Trusted Advisor" see my comedy skit at CERN, where all is explained.
I'm in full contrarian mode now, but rather than just dismissing the arguments out of hand, think about your own experiences; you may find I'm not so contrary after all.
The caveat on this discussion is that at SSDC we do not repeat many projects, stepping away from the test arena has made our world a more varied place, less friendly to re-use.
Way back in article 28 I trashed the traditional way re-use is taught/advertised in the LabVIEW world in this article here.
Our technique of keeping everything in the project works very nicely for us over all of our 100+ active projects. Issues are extremely rare. But our evolved practice still puts us at odds with traditional programming best practice in one area.
Looking at the traditional view of reuse, to clarify, I'm speaking about planned functional reuse here. Planned functional reuse differs from ad-hoc (opportunistic) reuse in the level of effort expended by teams to manage the reuse library.
"Planned reuse — This involves software developed with reuse as a goal during its development. Developers spent extra effort to ensure it would be reusable within the intended domains. The additional cost during development may have been significant but the resulting product can achieve dramatic savings over a number of projects. The SEER-SEM estimation model shows that the additional costs of building software designed for reuse can be up to 63 percent more than building with no consideration for reusability." D Galorath
Computer Science books tend to say that good practice is to manage a reuse library of highly reusable components with your team, these are tested, documented and approved. We also bend our architectures into a shape that will allow us to plug in reusable components. Finally we have to maintain this library across new versions of the software, operating system, hardware etc etc.
As a consumer of the reuse library we have to trust it will work in a new environment, giving 100% functionality (or allowing us to extend functionality).
So how many programmers have I met that enjoy maintaining a reuse library and all the effort that entails? Let me count them on the fingers of zero hands!
All of these measures of efficiency have been focused on software languages that take a considerable effort to add functionality, how does working in an environment like LabVIEW change this dynamic?
Thing is LabVIEW is rapid, I can hand crank a driver in very little time, I know it is 100% of what I need (and no more). Because it is stripped down, I'm not lugging a vast weight of unnecessary functionality with me. Multiply this by 20 pieces of hardware and it's probably a significant size improvement, which is a bonus.
The argument against this point of view is fairly strong, a reused component should be better tested, more stable, better understood etc etc. Well this is possibly true if you work in an environment where you are churning out variants of the same project. My response would be, if this is the case branch and adapt the entire project.
In the past I really bought into the whole reuse thing, investing a significant effort in pulling out useful stuff and maintaining a repository. I don't even know where it is any more, it's all a bit sad. I wonder how many man-hours have been wasted on reuse libraries full of old code for obsolete hardware on out-of-date versions of LabVIEW.
But sure Stevey, a company as awesome as SSDC must reuse loads of stuff, is a question I've never been asked. If I was, I would say YES we reuse entire project templates, documentation, methodology specific templates. Where we have identified projects that will have possible repeats we generate a project template. I view this as architectural reuse rather than functional reuse and it's an amazing time saver, standard enforcer and competitive advantage. Pretty much everything else is ad-hoc and unmanaged.
ooooooh controversial, hope I've rocked your worlds.
Lots of Love
"Code reuse results in dependency on the component being reused. Rob Pike opined that 'A little copying is better than a little dependency'. When he joined Google, the company was putting heavy emphasis on code reuse. He believes that Google's codebase still suffers from the results of that former policy in terms of compilation speed and maintainability." (Wikipedia)
I had a delightful time in Texas, then back for a week of work (mostly fitting stuff to ships) and then off for a weeks vacation. My brain really needed a rest!
Saturday was a trip to Dallas with Jonny and much amusement was had starting off with a top-notch breakfast at Ellens Southern Kitchen highly recommended!
At the Gas Monkey Bar and Grill I ordered the vegetarian choice of Stuffed Peppers, only to find out they were stuffed with pulled pork!
Sunday I was the guest of the Smiths and had a grand time looking round the hill country and caves; my thanks to them for their time, it was a good time!
Coming from the UK I love big horizons and here we see the Kitty-Litter Castle in the hill country above Austin.
And then NIWeek stuff starts on Sunday night with drinks at the Ginger Man, great staff, beer and rubbish dartboards. It is here I discovered that great cider is made in Texas. Namely Austin Eastciders Original Dry Cider, after discovering this the week was a bit of a blur.
I'll admit this now, I deliberately limited myself, being an introvert I find crowds of people very trying, the good thing about Austin Convention Center is that it is so enormous you can always find some alone time. If I didn't get to the session I wanted to it was because I was lost, talking, in a meeting or resting my brain. Also we stayed in the Hilton, this was very handy for ducking out for a breather.
Monday was the Alliance Day and after the trauma of the keynote (I DON'T DO GROUP ACTIVITIES!!!!!!!!), it passed by nicely. Various kick-off drinks etc happened in the evening and people kept giving me name stickers to fill in; even approaching my fiftieth year I find it difficult to write my name sensibly on a label. So if you met a loud idiot with MEGATRON on his label, I'm afraid that was me. I got to sit at a table with Darren and Stephen, and finally met Christina, and I think I enjoyed their company more than they enjoyed mine.
Tuesday was all business, lots of meetings and I think they went OK. Thanks to all for organizing them. Then I wandered down to the LAVA BBQ via the Mohawk (a bar I like very much, old punks seem welcome there).
Wednesday My presentations. Sadly TS9044 - Shock Test Using Multiple Synchronized Racks had stiff competition from Jeff Kodosky and Stephen Mercer in the room next door, so there was plenty of spare seating in this one. There were a lot of NI people in attendance and that was perfect tho', they were actually my target audience (more on this in my next article).
In the afternoon came ISO 9000 and LabVIEW - TS9456, and this benefited from being a Jeff Kodosky Top Pick, a fact I was unaware of until after the event. Honored!
An honor I shared with TS9446 - Project Templates: Making the Most of Code Reuse by Becky Linton, which I am waiting impatiently for on video.
This vindicates the somewhat uphill struggle I feel it is when talking process in the LabVIEW community versus talking technique. Another plus-point is that the interest in code reviews seems to be increasing, the session on this was extremely well subscribed.
Anyhow here's my presentation.
Q1 32:43 - Dmitry - Is it all internally driven or do you integrate customer processes too
Q2a 34:07 - Fab - As a relaxed person how do you make everyone agree on a processes and then stick to them.
Q2b 35:10 - Fab - When you get audited you can feel that you have to have very time consuming processes, this appears to be more minimal
Q2c 35:45 - Fab - Is it true that we sailed through our accreditation with nothing but a "well done chaps"? Thanks Fab
Q3 36:08 - Does this process apply to everything, internal and external.
Q4 37:48 - Did the ISO9000 accreditation change your process or did you get something value added by doing this work.
Q5 41:20 - How many times are you audited?
Q6 42:55 - Merging - how does your process handle merging.
Q7 44:10 - Michael (I Think)- Can you elaborate on the sub-contract model
Hope it is of use. I've had some nice feedback on it.
The best thing about presenting at NIWeek is the quality of the attendees. I had attendees from USAF, Boeing, Lockheed Martin, SpaceX, a lot of military research and researchers from various countries. As a platform for putting yourself in front of quality companies it's second to none.
I missed a lot of the other sessions because I was either nervous about presenting or really relieved at having presented. Although I did go to TS9723 - DQMH: Decisions Behind the Design and witnessed some ugly trolling behavior from people/person who were not there to learn, but to criticize. My response would have been very much less professional than the presenters, something along the lines of "Fxxx Off and get a life". If you don't want to use it, don't use it. Sorry it's the first time I've run directly into this type of behaviour and I feel protective of people who share code and ideas.
Thursday Seriously relaxed now, so I went to see TS9725 - Understanding Test System Performance and in the evening went to Pete's Duelling Piano Bar with a group led by the wonderful Jeremy Marquis, whose Where is Jeremy now? twitter feed thoughtfully ensures you need never be alone at NIWeek. His lovely wife Rozann also ensures that no idea is ever left unlistened to. I love live music and thought I might hate the experience of duelling pianos, but it was amazing! At midnight I found myself sat in a leather armchair listening to industrial dance music and drinking tequila. Austin is fun!
Friday flight home. Club World is very nice, thanks for the upgrade!
Now I just need a Return on Investment to justify more trips ($6000 to get back)
It's been 16 years since I last went to NIWeek with my buddy Jon Conway (gobshite) and I intend to drink considerably less this time round!
We were just starting out as SSDC (Jon had started the company a few years earlier) and now we are older and wiser. We honestly couldn't be any dumber than the 2000 version of us, but we probably laugh less now, ah the heartache of growing old.....
The itinerary is as follows...
Fly in on Friday 29th
Saturday Hire big truck and drive to Dallas early in the morning (we'll be away at stupid O'clock anyways)
Sunday Family fun
Monday Alliance Day stuff
Tuesday LAVA BBQ, I'll be good because....
Wednesday - presenting x2 Details below (I'll be badly behaved on Wednesday night as post-presentation relief kicks in)
Thursday Morning - HOL9108 - Code Review Best Practices <-- very enthused about this one
Friday Afternoon Fly Home
There's a couple of meetings still to organize and as you can see I have not listed a lot of other presentations I want to go to, I'm very behind with that part of my organization!
I've been twittering on about this project for a while now, so come to this if you're interested in measuring explosions. There are also interesting techniques on show and I'm pleased with the code too. For me the most interesting thing about the presentation is how it explores a broken software/hardware distribution model and the advantages that releasing source-code for free may have. I suspect there is some real potential to dislodge incumbent suppliers with this model. All it takes is a bit of trust......
Wednesday afternoon it's off to room 15 where all the cool kids hang out (I'll be there too, bringing down the cool average). Here we have the Advanced Users Track and you can get your fix of technical stuff. I'll be presenting TS9456: ISO 9000 and LabVIEW. Specifically it will look at ISO9001:2008 and how we have developed a toolset to provide seamless and helpful processes. It will be crazy knockabout fun.
The supplied links will allow you to post questions (or, if you are a twitter user, abuse).
One thing I won't be presenting is TS10989: Use by Exposure: From myDAQ to ATE. By bizarre coincidence there are two Brits presenting at NIWeek called Steve Watts. This one's a lecturer at Cardiff University and I expect his presentation will be way more professional than mine! I'm going along to put my name to someone else's face!
And just to make other people's presentations weird, keep the following in mind
Now we have entered a brave new world of stupid. Visas may need to be applied for to read this, especially if you are from the EU.
NIWeek is careering towards us at almost unnatural speed and I have got my presentations pretty much sorted. If you want to see a nervous cockney, speaking too fast and sometimes swearing here's my agenda.
I'll write about them afterwards.
For the last year or so I have been blathering on about nuance and trying to temper some of the discussions around and about software with a few more layers. On that note I'm going to make a statement and then pick it apart.
"A good software designer will produce good software regardless of language, process or methodology"
Deconstructing the statement above begs the question: if that is the case, what use are language, process or methodology in the task of creating software? Simply put, they are not essential, but they help. Also, not everyone is a good software engineer; statistics dictate that roughly half of us are below average. I've mentioned in Agile - Tail wagging the Dog(ma) that there can be considered to be 5 levels of software developer.
Level - Description (LabVIEW equivalent)
-1 - These will take a project backwards and should be escorted from the building (and they do exist!)
1 - Trainees, <CLAD, start of career.
2 - CLAD/CLD, interested and capable.
3 - CLD/CLA, capable of creating nice code, can manage simple projects, probably lacking in experience.
4 - CLA, years of experience, educated and capable of adapting to change because they understand the implications.
To be able to thrive in a changing and challenging environment you will need at least a 2 and probably a 3. Quite a large proportion of LabVIEW jobs sit in this area and quite a few LabVIEW programmers are expected to cope on their own. This struggle will lead to the following questions and mistaken conclusions.
I'm struggling therefore there must be a problem with the language.
Most programmers have a language they like using; very few good programmers will be productive in one language and unproductive in another. They may be comparatively less productive, but they will still be productive.
I'm struggling therefore there must be a problem with the process.
In the wider software world, if you're not using some Agile variant you are treated as a sub-human idiot. Thing is, a project has different stages; some really benefit from Agile, some don't. Consider the graph below.
Agile projects work well when there are a lot of requirements arriving over time, and for most of our projects the following seems fairly accurate. Initially we're getting a lot of requirements, up until the prototype/design review. Then it settles down while we do the architectural work and the preparatory stuff required to cope with the avalanche of requirements generated when the customer finally starts to seriously think about and use the software. For us this is the Beta stage, and we like to hit it as early as possible.
True agility comes from being able to adapt to changing situations throughout the project lifecycle.
I'm struggling therefore there must be a problem with the methodology.
Here the level 3 programmer will pick the methodology that is most appropriate for the part of the project being worked upon, the personal preferences of the team, the use cases of the customer and the limitations of the hardware. Some methods in LabVIEW are better for APIs, some are better for top-level etc etc.
The only thing I would add to the methodology discussion is that the best one is one that is shared by your team (and customer if that's your bag). There are real advantages to sharing, that's what my Mum always taught me.
And that my friends is the last time I bang this drum, I just wanted to get all my thoughts together in one place.
In 24 days I will be in Austin, if you see me stop me and say hello.
Good conversation starters revolve around comics, art, music, nature, dogs, jokes, funny videos, making things (anything), history. I'll probably be either awkward or amusing depending on my mood.
While scrubbing up my NIWeek demos this annoyance raised its head again.
This is an OS issue and not LabVIEW, but the solution may be out there already.
I press the filepath button and use the express file dialog function to throw up a dialog (the browse button does the same thing). It appears behind the calling front panel, causing all sorts of mayhem associated with hidden modal boxes.
As I understand it, the root of the issue is a 32-bit program (LabVIEW) calling a 64-bit DLL function (the file dialog); this modal function doesn't know who called it, so it doesn't know what to sit in front of. An Alt-Tab sorts it out, but it still doesn't meet requirement zero for me.
SSDC has its first certified scrum-master (stand up Jon Conway), so I thought it might be a good idea to get our teeth into Agile and give it a shake. I'm not going to describe any of it, go get the books or check out wikipedia.
Here also is some more material that's well worth a look.
In typical style I am not completely sold, some of the language almost completely puts me off. It feels monetised and corporate. I can't stand company mission statements and it feels a bit like that. It's as if by talking the talk we don't have to knuckle down and do the hard work.
I've said before that I think LabVIEW is the original Agile language, I also think that our way of working is a reasonable compromise. It's not dogmatically Agile, it's not completely plan driven. Consider the following image based on the rather excellent (and short) book Balancing Agility and Discipline.
From this we can see that where the considerations sit nearer the centre, Agile methods are favoured, whereas as we move outwards a more plan-driven approach is needed.
Note: on Personnel, for Level 1 think CLAD/CLD, for Level 2 think CLA, for Level 3 think CLA with a lot of project experience. The Boehm/Turner book classifies Level 3 as a very rare resource; it also splits Level 1 into 1a and 1b and adds -1 as a level.
I would add that as you travel through a project there are aspects that would benefit from being more plan-driven and other times where an Agile approach is definitely the way to go. For most of our projects we go in hard with prototyping, design reviews and iterate until we're happy we have collected a sufficient amount of requirements (the requirements shift is high at this point). Then the customer interaction naturally reduces as we knuckle down and do the hard architectural work. At this point in the project the requirements shift will be low. The next stage is commonly the Beta stage and we tend to try and hit this stage as early as possible, this is because the requirements shift begins to increase as the users actually start using the system. The effort you have put in prior to this creating a decent flexible, configurable back-end will now pay off coping with this ramp-up in requirements feedback. This is pretty predictable for a majority of our projects.
I've left quite a lot unsaid as it's a large topic and I know that Agile has vastly improved project delivery in many industries, but that's not comparing like for like. I would bet that very few LabVIEW projects are built using large teams where a customer does not see anything working for years. This is the environment that Agile has thrived in.
In short I like the adjective "agile" when used to describe a nimble way to run projects. I dislike the noun "Agile" where it is a commercial commodity to be sold as a self-help silver-bullet.
A discussion about the poor sods who get thrown onto complex projects undercooked and what can be done to educate project managers about the value of employing experience. I've witnessed this a few times this year.
Open up the design of the API for the ODS tool. It's working rather nicely in my application, but I know it's not sorted yet.
Or alternatively I could leave the ODS API half-cooked; I'm not in the mood to grizzle, so instead I'm going to talk about ODTs.
ODTs are the output of Open Document word processors, including newer versions of Microsoft Word, OpenOffice Writer and LibreOffice Writer.
Before I plough in and do the hard work I thought it might be good to request some input from other users.
Here's one use-case that would be marvellous for me.
Here I have my bug reporting tool, when I press commit I would like it to generate an issue report document, with all the details filled in for me. This would be way better than my current "Issue Saved" dialog confirmation.
Using a similar model to the one I employed for the spreadsheet I could load a template and update it.
Luckily both Word and OpenOffice offer bookmarking facilities. We could organise our template with various bookmarks and then offer methods to insert things at the bookmark point. In ODS I have fathomed tables; images are just links, with the image files going into a directory called ..\Configurations2\images\Bitmaps; text is a direct replacement.
Main page data goes in content.xml
Header and Footer info goes into styles.xml so I would need to search that for bookmarks too.
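Since an ODT package is just a zip archive under the hood, pulling out those two XML streams is straightforward. Here's a minimal Python sketch of the idea; the in-memory "ODT" built at the bottom is a stand-in for a real template (a real package has more members, such as mimetype and META-INF/manifest.xml), and the function name is mine, not part of any toolkit.

```python
import io
import zipfile

def read_odt_parts(odt_bytes):
    """Return the two XML streams that need scanning for bookmarks:
    content.xml (the main page body) and styles.xml (headers/footers)."""
    with zipfile.ZipFile(io.BytesIO(odt_bytes)) as z:
        content = z.read("content.xml").decode("utf-8")
        styles = z.read("styles.xml").decode("utf-8")
    return content, styles

# Build a tiny stand-in "ODT" in memory for demonstration purposes only;
# a real ODT package has more members (mimetype, META-INF/manifest.xml, ...).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("content.xml",
               '<office:text><text:bookmark text:name="headerbookmark"/></office:text>')
    z.writestr("styles.xml", "<office:styles/>")

content, styles = read_odt_parts(buf.getvalue())
print("headerbookmark" in content)  # True
```

Both streams then get searched with the same bookmark-handling code, which keeps the header/footer case from being forgotten.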
The bookmark tags come in two flavours.
<text:bookmark text:name="headerbookmark"/> <--- this is just a single place point and your insertion goes either at the end or the beginning of this tag.
<text:bookmark-start text:name="my2ndbookmark"/>ddd<text:bookmark-end text:name="my2ndbookmark"/> <--- this is a range of text that should be replaced by whatever is inserted.
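The two flavours above can be handled with plain string and regex replacement on the XML text. This is just a sketch of the approach in Python (the function name is mine, not part of any toolkit); note it escapes the inserted text so the document stays XML-safe, and for the point flavour it simply replaces the tag, which drops the bookmark from the output.

```python
import re
from xml.sax.saxutils import escape

def insert_at_bookmark(xml_text, name, value):
    """Replace a bookmark with XML-escaped text.

    Handles both flavours:
    - point bookmark: <text:bookmark text:name="X"/> is replaced outright,
      so the text lands at the bookmark point
    - range bookmark: everything from <text:bookmark-start .../> to
      <text:bookmark-end .../>, tags included, is replaced
    """
    safe = escape(value)  # keep the inserted text XML-safe
    point = '<text:bookmark text:name="%s"/>' % name
    if point in xml_text:
        return xml_text.replace(point, safe)
    rng = re.compile(
        r'<text:bookmark-start text:name="%s"/>.*?'
        r'<text:bookmark-end text:name="%s"/>' % (re.escape(name), re.escape(name)),
        re.S,
    )
    # a lambda replacement keeps any '\' or '&' in the value literal
    return rng.sub(lambda m: safe, xml_text)

xml = ('<text:p><text:bookmark text:name="headerbookmark"/></text:p>'
       '<text:p><text:bookmark-start text:name="my2ndbookmark"/>ddd'
       '<text:bookmark-end text:name="my2ndbookmark"/></text:p>')
xml = insert_at_bookmark(xml, "headerbookmark", "Issue 42")
xml = insert_at_bookmark(xml, "my2ndbookmark", "A < B & C")
```

A fuller implementation might keep the point bookmark tag and insert beside it instead, so the bookmark survives for a second pass.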
The only additional facility that I can think that will be useful is the ability to duplicate pages.
Here's the updated (version D1:01) example program screen
This demonstrates both ODS and ODT generation.
Here's another instance where I could use this type of functionality.
I generate a new document number for a project using this little utility, it would be splendid if it actually generated the document all numbered and dated.
Are there any other use cases that would be handy?
Leave words of wisdom in the comments please.
04-May-2016 Added ODF toolkit; this purely replaces/inserts text into a template and saves the template. As a bonus, there's a bit that then converts the ODT file to a PDF if you have LibreOffice (probably works with OpenOffice too). The created file works with Word too. Questions and use cases in the comments please. ODTExample.vi shows usage. Video to follow...
06-May-2016 Made the text entered XML safe (as far as I know) for ODS and ODT
19-05-2016 D1:01 added version numbering and prettied up the example, tested with provided templates and works OK. Used LibreOffice 5 for the pdf printing
08-06-2016 D1:02 ODTGetBookmarks removed table of contents "RefHeadings" bookmarks LV2014