Now we have entered a brave new world of stupid. Visas may need to be applied for to read this, especially if you are from the EU.
NIWeek is careering towards us at almost unnatural speed and I have got my presentations pretty much sorted. If you want to see a nervous cockney speaking too fast and sometimes swearing, here's my agenda.
I'll write about them afterwards.
For the last year or so I have been blathering on about nuance and trying to temper some of the discussions around and about software with a few more layers. On that note I'm going to make a statement and then pick it apart.
"A good software designer will produce good software regardless of language, process or methodology"
Deconstructing that statement begs the question: if it is true, what use are language, process or methodology in the task of creating software? Simply put, they are not essential, but they help. Also not everyone is a good software engineer; statistics dictate that 49.9% of us are below average. I've mentioned in Agile - Tail wagging the Dog(ma) that there are arguably 5 levels of software developer.
Level - Description (LabVIEW certification)
Level -1 - These will take a project backwards and should be escorted from the building (and they do exist!)
Level 0 - Trainees, <CLAD, start of career.
Level 1 - CLAD/CLD - interested and capable.
Level 2 - CLD/CLA - capable of creating nice code, can manage simple projects, probably lacking in experience.
Level 3 - CLA - years of experience, educated and capable of adapting to change because they understand the implications.
To be able to thrive in a changing and challenging environment you will need at least a 2 and probably a 3. Quite a large proportion of LabVIEW jobs sit in this area and quite a few LabVIEW programmers are expected to cope on their own. This struggle will lead to the following questions and mistaken conclusions.
I'm struggling therefore there must be a problem with the language.
Most programmers have a language they like using; very few good programmers will be productive in one language and unproductive in another. They may be comparatively less productive but they will still be productive.
I'm struggling therefore there must be a problem with the process.
In the wider software world, if you're not using some Agile variant you are a sub-human idiot. Thing is, a project has different stages: some really benefit from Agile, some don't. Consider the graph below.
Agile projects work well when there are a lot of requirements over time, for most of our projects the following seems fairly accurate. Initially we're getting a lot of requirements up until the prototype/design review. Then it settles down while we do the architectural work and preparatory stuff required to cope with the avalanche of requirements generated when the customer finally actually starts to seriously think about/use the software. For us this is the Beta stage and we like to hit it as early as possible.
True agility comes from being able to adapt to changing situations throughout the project lifecycle.
I'm struggling therefore there must be a problem with the methodology.
Here the level 3 programmer will pick the methodology that is most appropriate for the part of the project being worked upon, the personal preferences of the team, the use cases of the customer and the limitations of the hardware. Some methods in LabVIEW are better for APIs, some are better for top-level etc etc.
The only thing I would add to the methodology discussion is that the best one is one that is shared by your team (and customer if that's your bag). There are real advantages to sharing, that's what my Mum always taught me.
And that my friends is the last time I bang this drum, I just wanted to get all my thoughts together in one place.
In 24 days I will be in Austin, if you see me stop me and say hello.
Good conversation starters revolve around comics, art, music, nature, dogs, jokes, funny videos, making things (anything), history. I'll probably be either awkward or amusing depending on my mood.
While scrubbing up my NIWeek demos this annoyance raised its head again.
This is an OS issue and not LabVIEW, but the solution may be out there already.
I press the filepath button and use the express filedialog function to throw up a dialog (the browse button does the same thing). It appears behind the calling Front Panel causing all sorts of mayhem associated with hidden modal boxes.
As I understand it the root of the issue is to do with a 32bit program (LabVIEW) calling a 64bit dll function (filedialog), this modal function doesn't know who called it so doesn't know what to sit in front of. An Alt-Tab sorts it out, but it still doesn't meet requirement zero for me.
SSDC has its first certified scrum-master (stand up Jon Conway), so I thought it might be a good idea to get our teeth into Agile and give it a shake. I'm not going to describe any of it, go get the books or check out Wikipedia.
Here also is some more material that's well worth a look.
In typical style I am not completely sold, some of the language almost completely puts me off. It feels monetised and corporate. I can't stand company mission statements and it feels a bit like that. It's as if by talking the talk we don't have to knuckle down and do the hard work.
I've said before that I think LabVIEW is the original Agile language, I also think that our way of working is a reasonable compromise. It's not dogmatically Agile, it's not completely plan driven. Consider the following image based on the rather excellent (and short) book Balancing Agility and Discipline.
From this we can see that where the considerations are nearer the centre it favours Agile methods, whereas when we start moving out a more plan driven approach is needed.
Note: for Personnel, Level 1 think CLAD/CLD, Level 2 think CLA, Level 3 think CLA with a lot of project experience. The Boehm/Turner book classifies Level 3 as a very rare resource; it also splits Level 1 into 1a and 1b and adds a -1 level.
I would add that as you travel through a project there are aspects that would benefit from being more plan-driven and other times where an Agile approach is definitely the way to go. For most of our projects we go in hard with prototyping, design reviews and iterate until we're happy we have collected a sufficient amount of requirements (the requirements shift is high at this point). Then the customer interaction naturally reduces as we knuckle down and do the hard architectural work. At this point in the project the requirements shift will be low. The next stage is commonly the Beta stage and we tend to try and hit this stage as early as possible, this is because the requirements shift begins to increase as the users actually start using the system. The effort you have put in prior to this creating a decent flexible, configurable back-end will now pay off coping with this ramp-up in requirements feedback. This is pretty predictable for a majority of our projects.
I've left quite a lot unsaid as it's a large topic and I know that Agile has vastly improved project delivery in many industries, but that's not comparing like for like. I would bet that very few LabVIEW projects are built using large teams where a customer does not see anything working for years. This is the environment that Agile has thrived in.
In short I like the adjective "agile" when used to describe a nimble way to run projects. I dislike the noun "Agile" where it is a commercial commodity to be sold as a self-help silver-bullet.
A discussion about the poor sods who get thrown onto complex projects undercooked and what can be done to educate project managers about the value of employing experience. I've witnessed this a few times this year.
Open up the design of the API for the ODS tool. It's working rather nicely in my application, but I know it's not sorted yet.
Or alternatively I could leave the ODS API half-cooked, and I'm not in the mood to grizzle. So instead I'm going to talk about ODTs.
ODTs are the output of Open Document Word Processors, including newer versions of Microsoft Word, Open-Office Writer and LibreOffice Writer.
Before I plough in and do the hard work I thought it might be good to request some input from other users.
Here's one use-case that would be marvellous for me.
Here I have my bug reporting tool, when I press commit I would like it to generate an issue report document, with all the details filled in for me. This would be way better than my current "Issue Saved" dialog confirmation.
Using a similar model to the one I employed for the spreadsheet I could load a template and update it.
Luckily both Word and OpenOffice offer bookmarking facilities. We could organise our template with various bookmarks and then offer methods to insert things at the bookmark point. In ODS I have fathomed tables, images are just links and the images go into a directory called ..\Configurations2\images\Bitmaps, text is a direct replacement.
Main page data goes in content.xml
Header and Footer info goes into styles.xml so I would need to search that for bookmarks too.
The bookmark tags come in two flavours.
<text:bookmark text:name="headerbookmark"/> <--- this is just a single place point and your insertion goes either at the end or the beginning of this tag.
<text:bookmark-start text:name="my2ndbookmark"/>ddd<text:bookmark-end text:name="my2ndbookmark"/> <--- this is a range of text that should be replaced by whatever is inserted.
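To make the bookmark mechanics concrete, here's a minimal Python sketch of the replacement logic. The function name and regex approach are mine, not part of the toolkit; a robust version would use a proper XML parser and also process styles.xml for header/footer bookmarks.

```python
import re
import zipfile

def insert_at_bookmarks(odt_path, out_path, replacements):
    """Fill ODT bookmarks with new text.

    replacements maps bookmark name -> text. Point bookmarks get the
    text inserted straight after the tag; range bookmarks have the
    text between start and end tags replaced. Sketch only: it edits
    content.xml, not styles.xml, and the text must already be XML-safe.
    """
    with zipfile.ZipFile(odt_path) as zin:
        xml = zin.read("content.xml").decode("utf-8")

    for name, text in replacements.items():
        safe = re.escape(name)
        # Range bookmark: replace everything between start and end tags.
        range_pat = (r'(<text:bookmark-start text:name="%s"/>).*?'
                     r'(<text:bookmark-end text:name="%s"/>)') % (safe, safe)
        xml, n = re.subn(range_pat, r"\1" + text + r"\2", xml, flags=re.S)
        if n == 0:
            # Point bookmark: insert straight after the single tag.
            point_pat = r'(<text:bookmark text:name="%s"/>)' % safe
            xml = re.sub(point_pat, r"\1" + text, xml)

    # Rebuild the zip with the edited content.xml, copying the rest.
    with zipfile.ZipFile(odt_path) as zin, \
         zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = (xml.encode("utf-8") if item.filename == "content.xml"
                    else zin.read(item.filename))
            zout.writestr(item, data)
```

The same two patterns would work on styles.xml for header and footer bookmarks.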
The only additional facility that I can think that will be useful is the ability to duplicate pages.
Here's the updated (version D1:01) example program screen
This demonstrates both ODS and ODT generation.
Here's another instance where I could use this type of functionality.
I generate a new document number for a project using this little utility, it would be splendid if it actually generated the document all numbered and dated.
Are there any other use-cases that would be handy?
Leave words of wisdom in the comments please.
04-May-2016 added ODF toolkit, this purely replaces/insert text into a template and saves the template, as a bonus there's a bit that then converts the ODT file to a PDF if you have LibreOffice (probably works with OpenOffice too). Created file works with Word too. Questions and use cases into the comments please. ODTExample.vi shows usage. Video to follow...
06-May-2016 Made the text entered XML safe (as far as I know) for ODS and ODT
19-May-2016 D1:01 added version numbering and prettied up the example, tested with provided templates and works OK. Used LibreOffice 5 for the PDF printing.
08-Jun-2016 D1:02 ODTGetBookmarks removed table of contents "RefHeadings" bookmarks LV2014
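The XML-safe escaping mentioned in the 06-May update amounts to substituting the five reserved characters. In Python the standard library does this for you; this is a sketch of the idea, not the toolkit's actual code.

```python
from xml.sax.saxutils import escape

def xml_safe(text):
    """Escape the five XML-reserved characters before insertion.

    & < > must always be escaped; " and ' matter when the text lands
    inside an attribute value, so we escape them too to be safe.
    """
    return escape(text, {'"': "&quot;", "'": "&apos;"})

print(xml_safe('Flow > 3 & "valid"'))  # Flow &gt; 3 &amp; &quot;valid&quot;
```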
Way back when, I started a project to make an Open Document Tool; some words can be found here.
It came grinding to a halt due to memory leaks in the xpath tool used for navigating content.xml and pressure of work.
Well .... I've only gone and cracked it.
So attached to this article is a project with the following example.
And the block diagram looks like this.
And what it does is
opens a template spreadsheet (to get all the formatting, preload data etc etc).
Returns Sheet names
Returns Contents for a selected sheet
Stores the Spreadsheet
There's also a couple of methods for replacing and inserting sheets. This is restricted to strings at the mo' but there's no reason we can't add formatting/polymorphism.
This should be all we need to create spreadsheets from LabVIEW (obviously we can be adding more methods).
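For a feel of what "Returns Sheet names" involves under the hood: an ODS file is just a zip archive whose content.xml holds each sheet as a <table:table> element. Here is a hedged Python equivalent of that one method (not the LabVIEW class itself):

```python
import zipfile
import xml.etree.ElementTree as ET

# The table namespace is fixed by the OASIS ODF specification.
TABLE_NS = "urn:oasis:names:tc:opendocument:xmlns:table:1.0"

def sheet_names(ods_path):
    """Read sheet names straight out of an ODS file's content.xml.

    Each sheet is a <table:table> element carrying a table:name
    attribute - no spreadsheet application required.
    """
    with zipfile.ZipFile(ods_path) as z:
        root = ET.fromstring(z.read("content.xml"))
    return [t.get("{%s}name" % TABLE_NS)
            for t in root.iter("{%s}table" % TABLE_NS)]
```

Reading cell contents works the same way, walking table:table-row and table:table-cell elements under each sheet.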
I would like to add some word processor functionality in the coming months as another class.
It's been tested with LibreOffice and OpenOffice and one converted excel file with the help of Jarobit, but this is a pretty early release so expect surprises.
I've used the OpenG ZLIB tools, but these are embedded in my project and namespace-protected in an LVLIB so it should sit in a project and mind its own business.
Now I need to sleep.
night night Lovelies
Late night issue fix (always the recipe for awesome software), issues 1 and 2 sorted SW 09-04-2016
12-04-2016 I think I've sussed the Excel issues, so I've included a Template.ods with some data in it too. (There is a minor untidiness with some of the repeated rows of emptiness.)
It's now 4M because I've added all the ZLIB stuff, my thinking being that we can trim it back down when we're happy.
13-04-2016 disappointing result on the rebuilding of the ODS file, ZLIB is unzipping empty dirs as 0 value files. This is causing me some angst as I'm struggling to get the file correct for both LibreOffice and Excel.
ODS files are slightly simpler when created by Excel, so it all works OK using the template.ods provided <-- Well Well Well it appears there's an issue with ZLIB Read Compressed File__ogtk.vi where it is extracting an empty directory.
15-04-2016 I've tidied the project and improved the example.
New situations become more manageable when existing pattern knowledge can be applied to understanding how things work. Consistency is the key to helping users recognize and apply patterns.
It saves a lot of extra brain-work and the skills are more transferable.
The additional benefits - if there is a vibrant user-base you know that it will be supported, added to and will have an active eco-system. Within LabVIEW you know its base purpose will be worked on and improved. If you are employing some peculiar edge-case you are risking no improvement and quite possibly deprecation.
Databases for Storing Data
There's a load of ways to store your results, but databases are the only one that's worth mastering. There's a great deal of support online and MySQL and SQLite have thriving communities with huge amounts of tools and help available. Type MySQL into Google and you'll get over 100 million hits, I think that's enough material for anyone.
SQL for Talking to Databases
It's all very well using a LabVIEW toolkit to plug data directly from cluster into a table, but if you want a skill you can build on have a bash at SQL. It will end up as one of the most useful tools in your toolkit. Again a search on Youtube for SQL Tutorial resulted in over 500k hits.
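If you want a flavour of why SQL is worth the effort, here's a small sketch using SQLite from Python. The table layout and values are invented for illustration; the point is the query at the end.

```python
import sqlite3

# An in-memory database for illustration; a real test rig would point
# at a file or a MySQL server.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE results (
    serial TEXT,     -- unit under test
    test   TEXT,     -- which measurement
    value  REAL,
    passed INTEGER   -- 1 = pass, 0 = fail
)""")
con.executemany(
    "INSERT INTO results VALUES (?, ?, ?, ?)",
    [("SN001", "leak", 0.02, 1),
     ("SN001", "flow", 4.80, 1),
     ("SN002", "leak", 0.35, 0)])

# This is where knowing SQL pays off: failure count per test, one line.
failures = list(con.execute(
    "SELECT test, COUNT(*) FROM results WHERE passed = 0 GROUP BY test"))
print(failures)  # [('leak', 1)]
```

The same query runs unchanged against MySQL, which is exactly the transferable-skill argument made above.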
UDP for Communications
Inter-system communications is a rich area for toolkits, add-on technologies, APIs etc etc. The biggest successes I have had are home-grown low-level protocols just chatting out of UDP (either broadcast, direct or a combination of both). It's robust, simple, low level and there are tools out there to sniff it. People get obsessed with security, but all of our systems are on secure networks and security can be added if you need it, whereas it can make debugging and programming difficult when you don't. Google: 58 million hits.
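A home-grown UDP exchange of the kind described really is only a few lines. This Python sketch keeps both ends on loopback, and the semicolon-separated message format is my own invention:

```python
import socket

# Receiver: bind to loopback, port 0 means let the OS pick a free port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
port = rx.getsockname()[1]

# Sender: no connection, no handshake - just fire the datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"STATUS;PUMP1;RUNNING", ("127.0.0.1", port))

msg, sender = rx.recvfrom(1024)
print(msg.decode())                # STATUS;PUMP1;RUNNING
tx.close()
rx.close()
```

Swap the loopback address for a broadcast address and you have the broadcast variant mentioned above.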
TCP for Bulk Transfers of Data
One of the issues with UDP is that it is so basic, there's no handshaking and it can't handle large lumps of data. If you need this capability then you can use the TCP API. It's only marginally more difficult to use and we use UDP for inter-system messaging and TCP for transferring data.
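And the TCP side for bulk data is, as the text says, only marginally more work: connect, stream, drain. A loopback sketch (payload size and buffer sizes are arbitrary choices):

```python
import socket
import threading

payload = b"x" * 1_000_000   # a megabyte of "acquired data"

# Server end: accept one connection and drain it into a buffer.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
received = bytearray()

def receiver():
    conn, _ = server.accept()
    while chunk := conn.recv(65536):   # b"" means the sender closed
        received.extend(chunk)
    conn.close()

t = threading.Thread(target=receiver)
t.start()

# Client end: connect and stream the whole payload; TCP handles the
# handshaking and flow control that UDP lacks.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(payload)
client.close()

t.join()
server.close()
print(len(received))  # 1000000
```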
OPC for Scada
In industry pretty much everyone uses OPC for parking centralised data in distributed systems, so it will be beneficial for us just to do the same. A "write tag to OPC" VI is not such a difficult abstraction to get my head around that I need to use a Network Shared Variable. 33 million hits on Google for OPC.
ODF for Documents
One day I'll find time to finish my ODF toolkit and when I have done that we should use ODF for all documents, spreadsheets and reporting.
Event Structures for Events
Events are really handy for passing data about, they have a great attribute in that they are strongly typed and yet still I have reservations (Chris Roebuck gives an extremely sound argument for the counter view and writes some lovely code based on this). For me Event Structures are for events, any other use is against the standard and therefore an additional burden.
Queues for Queuing
Back in the days of yore (BQ - Before Queues) and before Mr Mercer provided the wonderful queue API in LabVIEW we rolled our own. As we know we're now in the AQ period of history.
That was circa 2000 and we still have something similar in today's code.
This is how much we love Queues!
Globals for Global Data
As I've discussed many times, global data needs viewing with suspicion; however, if you need it you'll find the most efficient and easy-to-understand method is just to use a global.
Feedback Nodes for Local Permanent Data Storage
I know they're a bit peculiar, named wrong and have a weird interface! But compared to the previous technique of using shift registers in a while-loop (i.e. an accidental side-effect) I think they have a much clearer purpose.
So my friends, the take-out from this article is that it is better to use a technique or tool that is standard and popular, even if it is not the "best".
I'm pretty sure I did a presentation on this some time ago, sadly I seem to have lost it so I can't just regurgitate it here!!
A few years back there was a marketing campaign based around how learning LabVIEW could lead you into different skill areas and work. Normally these fly right over my head, but this one made an impact.
So this article will be pretty self-indulgent, but the point is that the link between LabVIEW and the hardware it can drive can push your career into new and interesting areas.
Below is a slide from my presentation on immediacy, it maps the way SSDC has changed from a mainly Test based company to much more of a Distributed Machine control company.
All of our experience has stemmed from designing test and production equipment from the factory floor (to those of you that have never seen a factory, they are what industrial estates used to be full of back in the olden days).
A few years back we saw the test market becoming increasingly commoditised and so decided to look for work in other areas (a decision also based on the fact we were bored with designing test systems). RT came along and we embraced it for control work, both in Factory Control and Machine Control.
The first factory control job was for McLaren where we designed the composite moulding plant for the body and chassis (world's first!) for this beauty!
Another job in 2005 was focusing a Blu-ray disc mastering system to 40nm on a glass platter spinning at 5000rpm. Pretty damn challenging, but excellent fun.
Databases have always been central to our skill-set and we found ourselves doing Laboratory Management Systems, these are essentially large UI applications, connected to a database with lots of screens to manage reporting, scheduling etc etc.
If RT was good for us, cRIO was brilliant!!
The weird thing was that we weren't using it where we expected to. A large percentage of our work was actually using the FPGA to hack and convert old systems communications. The ability to do this fast (1 load/unload of the buffer) has been really useful. The best thing about this type of work is that our customers have never heard of NI or any of their technologies. It's fantastic to break an entirely new market.
So this week we will be fitting a system on this ship.
It's a proper engineering geekfest. The next picture is the ship balanced on its keel in the drydock!!
This job came from Adrian cold-calling the ferry company and telling them that we could hack the ship's monitoring system. Kudos and employee of the year to him, as there are a lot of ships bobbing about that need this work!
So who knows where we will be in 10 years time, all we know now is that the skills that currently pay the bills are....
Databases (MySQL, SQLite)
Communications - Serial, CAN, UDP, TCP
So looking at the original slide, a large percentage of our work is now in the bottom half of the triangle and who knows what will be added to the bottom.
I'm often asked "How do you keep your project so shiny, clean and smelling so minty fresh?"
Well first let's have a look at a clean project
This is our General Application template and you'll notice that the main VI (GenAppMain.vi) sits atop the hierarchy, and that's because he is the king and everything should be relative to him. This is how LabVIEW likes things organised; it will always look below itself in the hierarchy first as standard behaviour.
You'll also notice that we use auto-populating folders showing the entire hierarchy; the way we structure our projects lends itself to having no virtual folders (IMO they're an abstraction too far).
Dlls, config database file, startup VIs are also at this top level. All of this stops LabVIEW going searching when you move the project around. Keeping everything at this level also helps when you build an executable. It's much easier just to search in your own directory.
Like unit testing? Well, look in the TestVIs directory (we are trying to use TestStand if we need to do unit testing; this is quite new to us as we far prefer lots of functional testing).
Next we need to make sure there are no conflicts. Click on the exclamation mark and solve the problems.
What we are aiming for is a portable project. Ideally I want to dump my project directory on a new computer and for it to work. At the very least I want to be able to move the project around on the computer I'm developing on.
All this preparatory work is to allow us to upload the project into our repository (SVN). If the project is portable we can then download the project onto any SSDC machine or even across customers machines.
We take an uncompromising view on dependencies and in truth anything outside of SCC (Source Code Control) is something that can change how your software works in unpredictable ways. See here for a discussion on this. With this in mind we can see that the Dependencies section is pretty light and we aim to keep it that way.
Over time scrappy bits of old project can get caught in the gaps, and these need cleaning, because they are indicative of a lack of control and understanding. One common area that needs regular flossing is the Dependencies.
In this example we can see that although there are no conflicts, there is still some ugliness in the Dependencies section.
Cleaning these out is fairly straight-forward, you just need to find the unused VI in the project that is referring to these dependencies. But if you use dynamic VIs you should employ the following trick first.
Sticking any dynamic VIs in a non-called case of the main calling VI will not affect the running of the VI (LabVIEW will compile them out), but it will allow the IDE to tell you if they are broken. It will also allow LabVIEW searches to ascertain if the searched-for VI is in the hierarchy.
Now you are ready to floss....
Right-click on the offending Dependency and select Find>>Callers. Because the main calling VI is not broken and all dynamic VIs are on the main calling VI's block diagram, we can be pretty sure that there is a broken, unused VI in the project. Find it and delete it (using the SCC delete function!).
Also notice the \user.lib folder, for us this is a sure sign of something untoward in our project structure. This is because we NEVER use user.lib. Also view instr.lib with suspicion, anything that is not part of the standard LabVIEW install should be moved out and put under the project (use lvlibs to protect the namespace).
Having a tidy project makes cross-linking and unpredictable loading a rare thing and it may even prevent gum disease.
I'm stupidly busy at the minute so forgive my lack of output!
This particular article is about numbers. It's the numbers we charge and what they mean. I'm hoping it will help people starting out in the trade, and help when a customer is considering hiring a company.
SSDC are a LabVIEW consultancy and as such we charge people for our services. There are 2 ways you can charge for your services, hourly (time and materials) or fixed price.
For hourly the customer takes on the risk (although there is always a limit to what your customer will pay), for fixed price the risk is on the supplier. As such the price you use to calculate a fixed price job should always be higher.
How much higher is the important question?
I'll drag up a story from the past to give some context.
When we started out our rate for fixed price work was £35 ($50)/hour and we struggled! Every job was hand-to-mouth, we couldn't offer the customer service we wanted and it was generally hard. If you can't offer decent levels of customer service you won't get the repeat business that makes life so much easier. Things were bad so I did some maths!
For my company I needed to know how efficient we are. The easiest calculation is hourly rate * number of hours worked in a year = £67200/person/year. If I got an hourly paid job at my hourly rate I'd earn £67k ($100k). Trouble is we were doing fixed price work and this dropped us down to a net turnover of £40k($57k), take out 20% tax, business expenses, insurance, transport and it all begins to look a little sad!
It dropped to £40k because we were 60% efficient; this means we were spending time on unpaid work - sales, marketing, quotes, accounts etc. So even if we were fully employed we were still barely earning enough to pay the mortgage.
And this 60% efficiency shouldn't be regarded as a bad thing! You should spend time on all of those things, you just need to account for them.
The other assumption here is that you'll always win on fixed price work. I would suggest that even if you were good at software estimation you'd still only win 50% of the time (and I'm still waiting to meet THAT person!).
Lastly on fixed price is what happens if it all goes wrong....you don't get paid!
So when calculating a rate to apply to a fixed price job you need to take the hourly rate you would be happy to earn if you had a contract for the year, factor in your efficiency. You can then apply this rate to the estimation of hours and add in a contingency (we'll come back to this).
So for us we should have been aiming for an hourly rate of £58.3 ($83.3) when calculating fixed priced work.
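For anyone wanting to check the arithmetic, here it is spelled out. The 1920 hours/year figure is my assumption of 40-hour weeks over 48 working weeks; the other numbers come from the story above.

```python
hourly_rate = 35.0        # £/hour charged, circa 2000
hours_per_year = 40 * 48  # assumption: 40-hour weeks, 48 working weeks
efficiency = 0.60         # fraction of the year spent on paid work

gross = hourly_rate * hours_per_year    # the headline £67200/person/year
net = gross * efficiency                # what 60% efficiency leaves
target_rate = hourly_rate / efficiency  # rate to use for fixed-price work

print(gross)                  # 67200.0
print(round(net))             # 40320 - the "£40k" net turnover
print(round(target_rate, 1))  # 58.3 - the rate they should have charged
```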
Disclaimer: these numbers are circa 2000 and we are considerably more expensive now!
Coming back round to contingency, we use this to help us cope with uncertainty. So if, for example, there are some requirements that are poorly defined, you could bully your customer into divulging all the details or you could just add some contingency. This is where experience really helps.
So the final fixed price equation should be at least
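The equation itself isn't reproduced here, but from the discussion above a plausible reconstruction (not necessarily the author's exact formula) is estimated hours times the efficiency-adjusted rate, uplifted by a contingency fraction:

```python
def fixed_price(estimated_hours, happy_hourly_rate, efficiency, contingency):
    """Plausible reconstruction of the fixed-price sum described above.

    happy_hourly_rate: what you'd accept on a year-long hourly contract.
    efficiency: fraction of your time that is actually billable.
    contingency: fractional uplift (e.g. 0.2 = 20%) for uncertainty.
    """
    effective_rate = happy_hourly_rate / efficiency
    return estimated_hours * effective_rate * (1.0 + contingency)

# 500 estimated hours at the article's circa-2000 numbers:
print(round(fixed_price(500, 35.0, 0.6, 0.2), 2))  # 35000.0
```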
If like me you find programming the tree control to be a pain in the bum you may be interested in this little bit of code.
With it you can take a database table and create a tree hierarchy
So the table above will create a tree that looks like this when you use the query SELECT * FROM experimentTree
Going back the other way we want to generate SQL INSERT statements (this is SQLite flavour) or a table of data to parse back.
The code to generate this is included in the teststub. As a separate statement or as part of the main statement you should also empty the table.
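As a sketch of the round trip, here's a Python/SQLite version of the idea. The experimentTree column names and sample rows are my guesses from the description, not the actual schema shipped with the code.

```python
import sqlite3

# Invented schema - the real experimentTree columns may differ.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE experimentTree (
    tag    TEXT PRIMARY KEY,  -- unique tag for the tree item
    parent TEXT,              -- parent tag, '' for top-level rows
    label  TEXT,              -- text displayed in the tree control
    experiment TEXT           -- foreign key into an experiments table
)""")
con.executemany("INSERT INTO experimentTree VALUES (?, ?, ?, ?)",
                [("1",   "",  "Site A", ""),
                 ("1.1", "1", "Rig 1",  "EXP-001"),
                 ("1.2", "1", "Rig 2",  "EXP-002")])

def build_tree(rows):
    """Nest flat (tag, parent, label, experiment) rows into a tree."""
    nodes = {tag: {"label": label, "experiment": exp, "children": []}
             for tag, parent, label, exp in rows}
    roots = []
    for tag, parent, label, exp in rows:
        target = nodes[parent]["children"] if parent in nodes else roots
        target.append(nodes[tag])
    return roots

rows = con.execute("SELECT tag, parent, label, experiment "
                   "FROM experimentTree ORDER BY tag").fetchall()
tree = build_tree(rows)
print(tree[0]["label"], [c["label"] for c in tree[0]["children"]])
# Site A ['Rig 1', 'Rig 2']
```

Going back the other way is just walking the tree depth-first and emitting one INSERT per node, parent tag first.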
The front panel of the teststub looks like this
To get a tree from the database the block diagram looks like this.
To plant a tree in your database, the block diagram looks like this.
For simplicity I have contrived the tags. For my purposes I will be linking the selection to an experiment, obviously your requirements may differ, in essence the experiment field will be a foreign key into another table.
It's in LabVIEW 2014, but I can down-version if required.
Perhaps this will help make tree-huggers of us all.
Have a process, declare it, develop to it and improve it.
That way your customer knows how you operate and your developers know what you expect from them. If you go into a project without a process it will be haphazard and the more complicated projects will really struggle. If you are employing contractors, you should ensure that they understand your processes. We have seen several projects suffer because developers are poorly selected and just thrown at a project with little else but an (incomplete) list of requirements.
For me, the hardest part of any project is finishing it off. But this is actually the most important thing, I know it sounds ridiculous but we've made a pretty good living recovering abandoned projects. One key personal attribute for this is tenacity.
Do not give in when the hardware doesn't work as expected.
Do abandon a project because it is no longer cost effective.
Do not quit when your software is a bit flakey.
Try not to allow your personal problems to affect your output.
Do not start a more interesting project before doing the hard bit of signing off existing projects.
All of this is a lot harder than you would think, but it is important if you're in this for the duration. Because abandoning projects is very, very bad for customer relations!
Not all projects go well, we are in the business of prototyping and bespoke software is difficult. It's why we charge the $$$$. Your customer should know if you are putting in the hard yards and often they appreciate it.
You will as likely as not suffer a failed project by assigning 6 newly qualified CLAs, straight out of university, with no prior project experience, to anything complex.
If you assign a team of engineers that have successfully completed other projects, it will probably succeed.
Managers generally struggle with this concept and I have seen quite a few new LabVIEW engineers put under unbearable pressure because their company has paid for the training and LabVIEW is easy! I've also seen multi-million dollar projects nearly fail, because the organisation has just employed anyone with the word LabVIEW on their C.V. (actually it would probably be typed Labview).
There is too much discussion about how one methodology or another is better. The truth is that any methodology is better than none, a methodology your developers are comfortable with is an extremely valuable thing.
One thing I would add is that your methodology needs to be able to cope with changes at the end of the project.
Open and Honest Communication
Keep your customer involved in the project.
Tell them if there are risks.
Tell them if you are struggling.
Do not lie to get the work.
Do not keep silent if you are unhappy about something.
Do not promise what you can't achieve.
This has the added bonus of making your life less stressful!
A healthy social life involves surrounding yourself with people that make you feel better and jettisoning people who make you feel crap. Business is usually a social activity and the same rules apply.
A good rule of thumb is that if the Accounts department are pleasant and pay you when they promise, then the company is usually a good company.
This relationship is conducive to project success.
Hardware costs are usually a very small part of the life-time costs of the system, as I discussed here. Hardware that can easily satisfy the requirements at a project's inception and has the ability to be expanded will vastly improve the chances of project success. Second-hand, under-powered and under-specc'd equipment will add a significant risk of failure.
Talking of risk, your process should always push risk to the front. Always, always, always. This can be uncomfortable and natural human instinct is to get instant gratification by doing the easy stuff first. So if you suspect that the users requirements are not being expressed, then supply prototypes. If hardware issues need solving, solve them first.
A company fairly local to me had fully sorted its CI (Continuous Integration) method, bought loads of PXI racks fully loaded with gear, and then failed to deliver a single working test system, because no-one had actually built a prototype and run a test. Even after all my years in the business this still made my jaw hit the floor.
These points are pretty obvious, but it's sometimes good to have them in one place.
If you can think of any more, chuck them in the comments
Decided to go the micro-services route for post-processing of acquired data, that way I'm not locked into LabVIEW if customers have Matlab or other ways of doing it. The post-processor is then called by a command line and the TDMS file fed as an input.
Oh and it has a funky progress bar.
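The command-line hand-off can be sketched in a few lines of Python. Note the function name and argument convention here are my assumptions for illustration; the post only says the post-processor is called from the command line with the TDMS file as an input:

```python
import subprocess
import sys

def run_postprocessor(command, tdms_path):
    """Hand an acquired TDMS file to an external post-processor.

    `command` is the post-processor invocation (a LabVIEW-built EXE, a
    Matlab wrapper script, anything); the TDMS path is appended as the
    final argument. This convention is illustrative, not SSDC's actual one.
    """
    result = subprocess.run(list(command) + [tdms_path],
                            capture_output=True, text=True)
    return result.returncode, result.stdout.strip()
```

Because the contract is just "a command line plus a file path", swapping the LabVIEW post-processor for a Matlab one becomes a configuration change rather than a code change.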
Blog of the Year
For consistently good technical content it has to be ....
Happy Christmas Christians and Happy Holidays non-Christians
I've done enough language stuff for the time being, it's time to look a bit closer at a subject that is far more interesting to me.
LabVIEW projects are getting bigger and more complex; generally SSDC is now working on distributed systems, large database enterprise systems or control systems. NI wants to sell system-level hardware rather than single boards. All this means project failure can now impact people's jobs and company profitability.
I have referenced the Standish CHAOS study in presentations before, and a quick look at the graphs will tell us that any improvement observed from 1994 to 2010 is marginal; statistically, I would say any improvement is within the general variance.
This is interesting because I am constantly inundated with propaganda promising me a brave new world of software development.
The take-aways from this are that LabVIEW projects are getting more complex, and I would expect these complex projects to begin to follow the findings of the Standish CHAOS study.
Now let's look at these two videos by Steve McConnell (if you don't know who he is, look him up!).
This webinar looks at case studies of 4 projects, ranging from embarrassing failure to great success, and the 4 judgement points are a nice simple way of evaluating a project's chances of success.
If you've not bothered to watch the video, the 4 factors are Size, Uncertainty, Defects and Human Factors, and how you manage each of them affects project success.
This next video is in a similar vein but looks at some common graphs associated with the software development process.
40 minutes in StevieMac (as he loves to be called) discusses human variations and it is a stunning bit of study.
Again, if you couldn't be arsed to watch the vid, the spoiler is that human variations affect a project by factors way greater than methods or processes. So, as an example, filling your Agile team with idiots is likely to lead to failure.
1. Complexity
Sometimes a project is JUST DIFFICULT! The likelihood of failure is greatly increased as you add function points to a project. If you have trouble defining a project, or you just can't picture the design in your head, I reckon it's a sure sign you have to put some hard miles in up-front.
2. Bad Judgement
As was shown in the second video, sometimes a project will fail simply through intelligent people doing stupid things; in the case of the various medical failures it was bad business as much as bad software engineering. It's well worth understanding the financials of a project before you begin.
3. Lazy Estimation
If you are fairly professional you will have some very busy times (being professional is a really attractive sales technique). It can be very tempting at those times to take a bit of a punt on the estimate, and often you will be caught out in spectacular fashion.
The general rule in our business is that we very rarely make what we expect to make on a project, and I would hazard a guess that we're not alone in this. What is the percentage? Well, taking the CHAOS report terminology, I would say the majority of our projects could be classed as challenged; very few fail (I think we've had one cancelled, and that was essentially political). How were they challenged? The majority would be late for a variety of reasons, generally over-optimism! Very few would lose functionality. This is all indicative of poor or lazy estimation. For us it is mitigated by sticking rigidly to fixed price, absolute openness and early release of useful code.
4. Lack of Experience
I touched on this subject in Damn! My Silver Bullet Appears To Be Made of Crap. The graph showing how experienced teams have vastly improved productivity over inexperienced teams in the second video backs up my feelings here. That doesn't mean that a team should only contain gnarly old gits (SSDC defined), but often employers are less than discriminating when employing LabVIEW programmers. That poor decision is often the foundation of a failed project.
Experience gives you the flexibility and resilience to change that most projects need.
NOTE: Qualification <> Experience
5. Lack of User Interaction
One of the best bits of the Agile Manifesto is this:
"Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software."
We need to understand that the industries that most benefited from Agile processes were not actually delivering software to customers at all. The simple expedient of showing the customer stuff on a regular basis was deemed to be a huge breakthrough!
If you are not doing this with LabVIEW you really are in the wrong game!
6. Inappropriate Cost Saving
I discussed this in The Cost of Failure article. The only addition is that it applies to the people doing the work too: in any engineering discipline, paying peanuts really does mean employing monkeys.
7. A Changing Business Environment
You could be happily working away on your project and management change everything. We've also had projects that were designed for a nefarious purpose (in our case, to extract processes from the shop floor so the factory could be closed and shipped abroad).
8. Dishonesty
Saying all is going well when it isn't is dishonest, as is agreeing to unattainable delivery dates to win a job. Customers generally have fairly simple requirements with regards to projects: they expect you to deliver what you say you're going to deliver, when you said it would be done. Getting money for a project is usually a one-shot thing; going back for more money is incredibly difficult for all involved and is bad for customer relationships.
9. Offloading Responsibility
Quite often we find the customer expects that not only do we design the software, we also generate the requirements and specify the equipment; we have even been asked to redesign the customer's hardware. In this type of job you need to be very sure of your ground, because you are going to be taking all the blame.
The healthy situation is that the customer has the domain expertise and can communicate these as part of their requirements and feedback. Jobs are far less risky in this environment.
I've added Pascal's comment into the main text as I think it more eloquently describes one of the points I am trying to make.
I also see this phenomenon in projects, and I think that if we generate the requirements the customer is less involved/impressed/excited about the work that is done. Because he hasn't had to think the requirements through, he will change requirements if problems arise, just to move on with the project. Which leaves a bad taste in my mouth.
10. Too Much Belief in Silver Bullets
This is a bit like self-help books that try to convince you that you're the most important person in the world and sheer force of positive thinking will make everything OK. It won't and probably the worst thing you can do to a complex project is add new processes or tools to it.
11. Inappropriate Design
If it doesn't fit the problem, it's probably inappropriate. A large proportion of the jobs we rescue have had an unsupportable or just plain strange architecture applied to them. Similarly, if the solution is radically more complex than the problem, it is a sure sign of poor architectural decision-making.
12. Trying New Things
An important, risky and time-constrained project is probably not the best time to try out that new process or architecture. Essentially, self-learning shouldn't be done on a customer's paycheck (unless declared), and it also increases risk. A better way is to allow enough resource to attempt new stuff on internal projects, or discuss it with your customer; sometimes they are willing to take the risk if there is a tangible benefit.
One reason I like fixed price work is that it forces you to analyse risk well and to then push these risks to the front.
I'll leave with a motivational poster to set you up for the new year.
We left our screen with a nice little undock button and the code to instantiate a graphscreen vi.
This article will concentrate on the graph screen display. If you skip to 6 minutes into the following video, I demonstrate what I am aiming for.
The behavior I'm expecting here is for the graph to fill the available space without encroaching on any other objects on the screen. The only issues I've found are that the palettes, titles etc. seem to have a bit of a life of their own, so give them a bit of space.
List Box for Channels
The behavior we expect from a list box is that it keeps its width but extends in height; this can be achieved by adding a splitter and locking it.
As Sam Sharp says splitters are cool!
Measurements and Cursors
I wanted the measurements and cursors to be off-screen unless required. They also needed to stay the same size, in a repeatable position, and become visible when selected. To achieve this we have a button that keeps to the edge of the screen and, when selected, comes onto the screen allowing interaction with the tabs.
This little tip was from Tim Hurst's presentation at the CLD summit, with a couple of modifications by me.
We don't want this part of the display to re-size, so I made the tab a strict type-def. When not required, I push it to the side.
I've called the tab control settingsCarrier, and the code to bring it in is as follows.
The subvi just sets the position left or right depending on the SettingsHidden boolean (I have a separate button to push it back out of sight) and this vi is also used on a return button or resize event.
The only issue left was that if you had the tab visible it looked really ugly when you resized the panel, so my trick is to have a tab within a tab and make the carrier tab transparent, I could then flip it to a transparent window when it is hidden.
I've been a bit sneaky by embedding a graphic of a tab in the tab selector (it would be transparent too otherwise).
The only other tip I would give is a simple rule: move as much functionality as possible into menus and the window's title bar, so you don't have to worry about it when resizing.
I hope this has been useful, and if you have any questions post them in the comments (or PM me).
LabVIEW can be quite cumbersome when dealing with screen sizes; our stock way of dealing with this is to restrict the screen size of our applications. This hasn't been too much of an issue until now: my current project needs to run on different resolutions, and it also needs to undock graphs, which in turn need to be resizable. I thought it might be useful to list out the techniques and challenges.
First lets think about what we want to happen when a screen size is changed.
We want some controls to change in proportion to the screen size, graphs for example.
We want some controls to stay the same size but be anchored to a position on the screen, buttons and drop down controls.
We want some controls to stay in proportion to one axis, so a list box to adjust vertically, a status string to adjust horizontally.
Some controls can be removed (tip: the easiest to manage control is one you don't need to manage!).
Some controls can be restricted.
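Those rules boil down to a little arithmetic per control. Here is a minimal sketch; the mode names and the (left, top, width, height) tuple layout are mine for illustration, not LabVIEW's:

```python
def resize_control(bounds, old_panel, new_panel, mode):
    """Recompute one control's (left, top, width, height) when the panel
    resizes from old_panel to new_panel (both (width, height))."""
    left, top, width, height = bounds
    sx = new_panel[0] / old_panel[0]
    sy = new_panel[1] / old_panel[1]
    if mode == "proportional":   # graphs: grow with the panel
        return (left * sx, top * sy, width * sx, height * sy)
    if mode == "anchored":       # buttons: keep size, track the right edge
        return (left + (new_panel[0] - old_panel[0]), top, width, height)
    if mode == "vertical":       # list boxes: fixed width, stretch height
        return (left, top, width, height * sy)
    return bounds                # restricted: leave alone
```

In LabVIEW much of this is done with splitters and the "Scale Object with Pane" option rather than code, but it helps to know which rule each control should follow before you start wiring.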
First let's deal with undocking.....
The first challenge here is to make a button that keeps its proportions and is locked to a position on the graph. I want it to always be in the top right-hand corner of the graph. Aesthetically I want it to be unobtrusive (i.e. use transparency to make sure it doesn't get in the way of the graph). I made a line drawing in LibreOffice Impress and cut and pasted this into a vertical button (from the UI Control Suite/System Controls palette). The easiest way to restrict its size is to set it as a Strict Type Def. Finally, we need to lock it into the top right-hand corner of the graph; sadly, the only way to do this is by writing code!
x1Undock is the button, so we take the width and move it relative to the Right of the Plot Bounds of the graph.
This is code worth noting, as you'll be using it a fair bit; LabVIEW is natively a little random when left to its own devices!
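The property-node wiring can be expressed as plain arithmetic. This sketch mirrors the logic described above; the names and the margin value are illustrative:

```python
def pin_top_right(button_width, plot_right, plot_top, margin=2):
    """Left/Top position that pins an undock button into the top-right
    corner of a graph's plot area: offset the button its own width back
    from the plot bounds' Right edge, just inside the Top edge."""
    return (plot_right - button_width - margin, plot_top + margin)
```

Run this in the graph's resize event (and once at start-up) and the button stays glued to the corner no matter what the panel does.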
Pressing this button will instantiate a graphscreen.vi instance, pre-loaded with the settings from the docked graph.
The code to instantiate the graph is as follows
Keeping control of all the dynamic references in this manner has worked out nicely; it allows us easy housekeeping and access to control and indicator values.
As a multi-media project I've been recording myself designing a screen with a view of condensing the entire process into one small, sped-up video.
I was so pleased with the results I'm going to post it incomplete, because it made me happy!
Right back to work for me
I suppose I should give some details of the job.......
The idea here is that my esteemed colleague Adrian (Tark on here) is doing all the hard stuff to do with deciphering comms, designing servers and depositing data. In MVC terms he is handling the Model and the Controller. As is our norm we have decoupled the screen (essentially a separate loop with a queue component) and he has sub-contracted that aspect to me (I like UI stuff). To keep it simple I don't have access to the code repository; I just give him my updates to incorporate.
So while not a complete MVC we can see the advantages of splitting off the UI.
For validation reasons we are attempting to keep the screens similar to those that are currently part of the system.
Lots of Love
Update 09-Feb-2016: Here's a picture of the system, just after its Lloyd's Register factory test was completed.
And I left it with the PXI Trigger Routing set up in MAX. I wanted to do this in software, because I don't like systems that need lots of different programs to work.
We use the PXI trigger lines to route signals along the rack's backplane to connect all the cards. An 18-slot chassis is split into 3 buses, so we need to connect these buses to transfer signals along the backplane.
We have allocated PXI_Trig0 for the Start Trigger line; this tells the cards to start acquiring data. It is generated from the trigger card (Card2), so needs routing from bus 1 along buses 2 and 3.
We also use PXI_Trig2 for the Sync Pulse. This comes from PFI2 of the 6674T in slot 10, so it will need routing outwards from bus 2.
PXI_Trig1 and PXI_Trig3 are left-overs from development and can be ignored (but I'm at a fragile point of my testing so they stay in!)
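Captured as data, the plan looks like this. This is a sketch: the table layout and the hop calculation are my reading of how an 18-slot chassis's three buses chain together, not an NI API:

```python
# Which trigger line carries what, and which bus it originates on.
TRIG_PLAN = {
    "PXI_Trig0": {"signal": "Start Trigger", "source_bus": 1},  # trigger card, slot 2
    "PXI_Trig2": {"signal": "Sync Pulse",    "source_bus": 2},  # 6674T PFI2, slot 10
}

def bus_hops(source_bus, buses=(1, 2, 3)):
    """Bus-to-bus mappings needed to fan a signal out from its source
    bus, always mapping from the neighbouring bus nearer the source."""
    hops = []
    for b in sorted(buses, key=lambda x: abs(x - source_bus)):
        if b != source_bus:
            nearer = b - 1 if b > source_bus else b + 1
            hops.append((nearer, b))
    return hops
```

So the start trigger on bus 1 needs hops (1, 2) then (2, 3), while the sync pulse on bus 2 fans outwards to buses 1 and 3.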
So to do it in MAX you open up the rack like this.
In my opinion it has a pretty weird API and quite a few caveats, so stay with me.
First of all I wanted my software interface to look similar to the one provided by MAX.
It's good practice to clear all of the trigger assignments before allocating new ones, because if a routing has already been assigned it will throw an error.
Here's a breakdown of the commands
VISA Search for something with BACKPLANE in its reference and that's our kiddy. Stick her into a shift register for further use.
Away From Bus 1
The VISA Unmap Trigger.vi is probably overkill, but for now it stays in until it's fully tested (it's a pretty early release yet).
Away From Bus 2
Away From Bus 3
Without throwing errors, just run through all combinations to clear up any bad housekeeping.
Finally close the reference when done.
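The clear-then-map pattern itself, stripped of VISA specifics, looks like this; `unmap` and `map_route` stand in for the VISA Unmap Trigger and Map Trigger calls:

```python
def reset_and_route(unmap, map_route, wanted, buses=(1, 2, 3)):
    """Clear every possible bus-pair routing (ignoring 'nothing was
    mapped' errors), then apply only the routes we actually want, so a
    stale assignment can never make the map call throw."""
    for src in buses:
        for dest in buses:
            if src == dest:
                continue
            try:
                unmap(src, dest)
            except RuntimeError:
                pass   # no routing existed: fine
    for src, dest in wanted:
        map_route(src, dest)
```

The point is idempotence: you can run the routine on a rack in any state and end up with exactly the routings you asked for.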
Now the hard work starts in adding all of this to my server architecture.
Edit: This is version D1:04 (I'm working on D1:06 currently), but it shows the sync nicely.
WARNING: This is one boring-ass blog post! But it may just help a couple of people out, if anyone can actually find it.
I'm lucky enough to be working on a multiple rack, high channel count oscilloscope project at the moment and I thought I would tell the story of how we got the various chassis synchronised to <1ns.
The requirement is pretty standard IMO, I want all the cards on all the chassis to trigger at the same time. The trigger signal can go to each of the chassis but they need to be synchronised and in phase.
When I saw the hardware list (8x 18 slot PXI chassis, full of oscilloscopes and each having a timing and sync card) I thought this is the equipment for the job.
Right tools for the job and a standard requirement, what could possibly go wrong? This was my 1st mistake. It was really hard!
In the end I had to lean on Sacha Emery, one of the patient and knowledgeable Systems Engineers employed by NI, and he sweated blood on this job too; there just didn't seem to be much information out there.
Anyways, here is the solution, and I hope it saves someone some time.
To test the system I send a pulse and read it on channels across the backplane of the PXI rack. We need to use the OCXO (oven-controlled crystal oscillator) as our clock and export this clock to all chassis; we also need to ensure the scopes start at the same time and are linked to this clock on the backplane of the rack. Additionally, we need to make sure the clock etc. go across all of the racks' backplanes (18-slotters are split in 3 for flexibility).
Step 1 Route Master Clock
Route the OCXO from the PXIe-6674T on the master chassis to the CLKOut connector on the PXIe-6674T.
Gotcha: Dev1 is the 6674T in my system and Oscillator is the OCXO. I didn't find any documentation that stated this is so, but it is!... apparently.
Step 2 Master Initial NI-Sync Settings
Turn on the PLL circuit and set the frequency to lock. To cope with the splitting of PFI signals we need to disable ClkIn Attenuation and reduce their threshold to 0.5V (or lower).
Step 3 Master Route Signals
To remove propagation delays due to cabling we bring the clock back in again from the CLKOut terminal via a matching length lead (matching to the slaves). This is routed to the 10MHz backplane clock.
The start trigger is routed out to PFI0, this is the signal to the other chassis to tell them to start logging data (for pre-sampling). This does not need to be matched in length.
We then use the Global Software Trigger as our Sync Pulse. Because this needs to compensate for lead length propagation delays it is output on PFI4 and routed back on PFI2.
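Steps 1 to 3 condensed into one routing table, as a summary of the master 6674T's terminal routes. The terminal names follow the NI-Sync convention as I read it from the diagrams; treat this as a crib sheet, not an API call list:

```python
# (source terminal, destination terminal) on the master PXIe-6674T.
# Assumed terminal names; check them against the NI-Sync documentation.
MASTER_ROUTES = [
    ("Oscillator", "ClkOut"),           # Step 1: OCXO out of the chassis
    ("ClkIn", "PXI_Clk10_In"),          # Step 3: matched-length lead back in to the backplane 10 MHz
    ("PXI_Trig0", "PFI0"),              # start trigger out to the slave chassis
    ("GlobalSoftwareTrigger", "PFI4"),  # sync pulse out...
    ("PFI2", "PXI_Trig2"),              # ...and back in via a matched lead
]
```

Having the routes in one table makes it much easier to see that every externally-cabled signal has a matching inbound route compensating for lead length.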
Step 4 Set-up Slaves
The slaves should be set-up as follows and should be waiting for the PLL to lock prior to the master.
Step 5 Master and Slave Wait for PLL Lock
Pause and loop, waiting for PLL to lock on both master and slaves.
Step 6 Set-up Master Acquire
Card2 in each rack is designated as the Master Card and by this we mean the card that creates the trigger for the rack. The start trigger is routed here to the PXI Trigger line on the PXI backplane and then out of PFI0. The Sync Pulse is also routed from PXI_Trig2 so that NI-Tclk is synchronised across the backplane for all cards in the rack.
For cards that are not the trigger card the following code is run.
Step 7 Set-up Slave Acquire
Here the Start Trigger is sourced from VAL_RTSI_0; this is yet another name for the PXI backplane line (PXI_Trig0). The sync pulse is routed from the backplane PXI_Trig2, as on the master chassis.
The other cards are set up as follows.
Step 8 Master Wait for Send Sync
All the NI-TClk references are collected; we then configure the properties commonly required for TClk synchronization of device sessions and prepare the Sync Pulse Sender for synchronization. It's then held up waiting for the slave to be in a state ready for the next stage.
The slave is set in exactly the same way except it has to be done after the Master.
Step 9 Slave Ready for Trigger
Step 10 Master Wait for Send Start Trigger
The first VI sends a pulse on the global software trigger terminal. Then we finalise the NI-TClk synchronising and wait to send the start trigger. When the button is pressed the start trigger is sent and the whole system is ready to acquire data.
Step 11 Acquire
The test was to send a pulse to Chan 0 on the first and last cards in 2 racks. The first channel on each rack will be used as the trigger source for the rack and the racks will be synchronised. The cabling has been carefully balanced to minimise propagation delays in the leads.
The test was done at 1.25GSamples/sec and 125000 samples were taken for each channel.
We then combined the graphs and compared the readings.
From this we can see a delay caused by the triggering system (essentially the detecting of the trigger and passing it to the backplane). This is <1ns and only applies to the trigger channel and can be post-processed out if required.
The synchronisation of Master Card 18 Chan 0 and Slave Card 14 Chan 0 is representative of all the other cards and channels in the system and should scale up to 8 racks (the trigger signals will be reduced by splitting them). The result here is 16ps, which is astonishing!
The code and drawing is attached.
When I have completed the system I will probably post the code on-line, because I think there should be a ready-made LabVIEW system available for this type of application (personally, if I was buying $500k of hardware I would expect something to be available).
There is no way in hell I would have worked this out without the help of system engineering!
I think systems engineering needs better access to high-value systems, because it felt like we were treading virgin territory here.
The support I received has been exceptional and should be something NI is proud of.
I promised I would write all this up as a response to a query on the forums.
From this point I need to package all this up as part of my Client-Server architecture and the jobs should be a good-un.
I'm updating my presentation on Immediacy for the CLD Summit on September 9th in Newbury. Part of this exercise is to right a wrong that I committed during the original presentation to no less a person than Jeff Kodosky (ooooooops). You'll have to check out the presentation video for the full confession, or wait for somebody to snitch on me in the comments, but during my research I dug out this little gem!
(I've linked the image to the document)
It's a really interesting read because at this early version some of the design reasons are writ large.
And this segues nicely into 2 little bits of new functionality in LabVIEW, and just to show I'm a new-tech sort of guy I've made a video instead of typing it. Ooooooooooooh, multimedia baby!
I've seen a couple of demos and the filter hasn't been mentioned (and it's brilliant!), and the hyperlinks were only shown linking to http; linking to local files is way more useful for me. Ideally I would like to link to files relative to the containing VI, but I imagine this would cause all sorts of issues.
Hope you all had a spiffing NIWeek if you attended. It looked splendid from my sad and lonely office!
I have reached perfect Twitter balance in that I have as many followers as people I follow (it is on purpose, and a bit of self-analysis uncovered that it has to do with egalitarian OCD!). Someone I follow is James MacNally, and he tweeted the following.
And it linked to the following excellent article and that article mentioned the Nirvana Fallacy which is a rather wonderful term for the subject of this article. (I was originally going to title it "People Ruin Everything").
The nirvana fallacy was given its name by economist Harold Demsetz in 1969 and refers to the informal fallacy of comparing actual things with unrealistic, idealized alternatives. In software it is characterised by requiring a "perfect" solution in areas where it is clearly unfeasible or even impossible.
Software can be flexible OR easy to use. It is rarely, if ever, both.
Deliver it in 2 weeks AND be completely robust. Radically shortened timescales will affect the robustness.
We don't have any requirements but we expect a low fixed price and delivery date.
In many cases the drive for impossible perfection actually inhibits usable improvements being made. It's way better to deliver something useful and improve it. Similarly it is important to manage stakeholder expectations.
Pareto had life sussed when in 1896 he noticed that 20% of the pea pods in his garden contained 80% of the peas. (I know that's not accurate, but it's way better than saying that he published a paper etc. etc.) The Pareto Principle, or the Law of the Vital Few, helps to concentrate effort into the areas of maximum benefit. If you can offer 80% of the functionality in 20% of the time you really are onto a winner.
So the article was originally titled "People Ruin Everything", and it was going to be a diatribe on how you can put in all these fantastic processes, APIs, frameworks, methods etc. and they stay fantastic until people come stomping in with their great fat feet and balls everything up. In the Nirvana Fallacy I see an explanation for this: as providers of these things we expect the user to be as perfect at using them as the designer is. Well, they won't be!
I had a customer who would break everything I gave her; it was uncanny. But rather than get grumpy I made sure she was the first person to see any software I wrote. She was the world's best software tester.
The moral of this most rambly of ramblings is don't expect perfection, you will only receive misery and frustration for your efforts. Improvement is a good expectation. Expect improvement!
So the year is shooting past at a breakneck speed and here comes NIWeek 2015, sadly I blew my Jolly Fund in Rome and I won't be crossing the Pond this year. So this is a bit like me in a restaurant picking the meat dishes for my carnivore friends (I'm a veggie), here's what I would go and see if I was going....
As well as the keynotes, these are my picks (bear in mind I don't do many test systems any more so it's heavily biased on the software side of things)
TS6421 - Don’t Panic: LabVIEW Developer’s Guide to TestStand - Chris Roebuck
TS5740 - Computer Science for the G Programmer, Year 2 - Stephen Mercer and Jon McBee
TS7238 - Curing Cancer: Using a Modular Approach to Ensure the Safe and Repeatable Production of Medicine - Duncan MacIntosh, Fabiola De la Cueva
HOL7309 - Hands-On: Introduction to Software Engineering and Source Code Control - Chris Cilino
TS6142 - Hidden Costs of Data Mismanagement - Pablo Giner, Stephanie Amrite
TS6408 - NI Linux Real-Time: RTOS Smackdown - Joshua Hernstrom
TS6720 - Effective Project Management of LabVIEW Projects - Ryan Smith, Paul Herrmann
TS6977 - 10 Differences Between LabVIEW FPGA and LabVIEW for Windows/Real-Time Programming - Erin Bray
TS6303 - LabVIEW FPGA Programming Best Practices - Zachary Hawkins
TS5898 - Inspecting Your LabVIEW Code With the VI Analyzer - Darren Nattinger
TS7477 - Using LabVIEW in Your Quality Management System - Maciej Kolosko
TS5698 - Augmenting Right-Click Menus in LabVIEW 2015 - Darren Nattinger, Stephen Loftus-Mercer
HOL5977 - Hands-On: Code Review Best Practices - Brian Powell
TS6420 - 5 Tips to Modularize, Reuse, and Organize Your Development Chaos - Fabiola De la Cueva
My interests here are software engineering, FPGA and, increasingly, data management. So I've chosen 13 hours of presentations plus keynotes. Hopefully they will be videoed!
Talking of videos I've been busy tidying up some of our CSLUG presentations (more to come!)
If we manage to organise it, we're doing something a little different for the next CSLUG meeting on the 17th September, we're going to pick a subject and have a series of 4 slide presentations with multiple presenters. We should then get a spread of concepts for databases, error handling, customer sign-off etc etc. Each subject will be about 30 mins. We'll see how it works out!
Workwise: SSDC Maritime Division is beginning to blossom, we're currently waiting on orders that are worth 150% of last years turnover! I'm also working on a distributed oscilloscope program that seems to be generating some interest (I'm actually quite proud of it so far) and then we are waiting for the go-ahead on a data repository design and reporting system. Any of these jobs may have a profound effect on SSDC and I may finally get the company hovercraft I always wanted. The downside is I will probably be slower on my blog output!
If you are travelling to Austin; travel safe, if you are presenting; present well!
I'm going to be very candid in this article and once again no LabVIEW so bonus random points for me.
There will be no meditation, sobriety or jogging in this article so rest easy.
I'm pretty unemotional and shallow. In fact, below are photos of me with my various moods on display.
And I have an iron constitution, so essentially for 40 odd years my body was just for moving my brain from point A to point B and for filling with beer.
My twitter feed has a few comic book writers on it and these guys are doing a job they are intensely enthusiastic about, they are also doing a job they can carry around with them. Most are slightly younger than me and have been doing it for years.
It sounds idyllic and not dissimilar to my situation.
So it really interested me when one of these writers complained about feeling anxious and asked what he could do about it (anxious to the point of not being able to work, not just a bit fretful).
And then came the flood of rubbish advice.
Have a cup of tea (mostly British)
Have a walk
Listen to music
blah blah blah
And I really wanted to chip in, but not on twitter.
A few years back the same thing happened to me. Here's the details.
I love writing software, just love it. I'd do my 9-5 work and go home and have some food, say hello to my family. I'd then sit down and while they watched rubbish TV I would start programming again and this would then go on until 10ish and then bed. Every gap I had I would fill with software.
This was fine until one day, driving home, I felt most odd, like I was having a heart attack. Bear in mind my body had never spoken to me for 45 years, so I wasn't very good at listening.
A trip to the Docs: heart going too fast; asked me if I was stressed and working too hard; told me to stop being stressed and working too hard.
So I looked at my life and felt surely doing something I loved wouldn't have this effect on me, would it? Well, actually it did.
The issue was I wasn't differentiating my work time and home time. Essentially I was never finishing my working day.
I just separated the things I think of as work (i.e. programming) from the things I don't regard as work. The things I do as work I do 9-5; the other stuff I do when it entertains me.
Luckily I don't regard research, writing blogs, making presentations as work so I could still manage to pack in some extra hours, which is useful when running your own business.
Did it work? Almost instantly, and following these rules has kept me very happy for the last few years.
Just look at my little face....
So my friends be nice to your brain, it's kind of useful!
I'm very enthusiastic about the subject of the last article and will attempt to flesh out some details in the months to come (sorry but my brain does not work in a linear fashion and neither do my enthusiasms).
I think the combination of OOP, Functional etc etc may be the unified theory of software development!
And these grand statements may be part of another problem, a problem that we have contributed to in our innocence.
In fact we kind of predicted it by using the following quote in our book.
"If this is true, building software will always be hard. There is inherently no silver bullet"
- Frederick P. Brooks, Jr., "No Silver Bullet", Computer, Vol. 20, 1987
And I really want everyone to understand this. Please, please understand it!
We presented LCOD as a method and carefully explained modular design, thinking that by describing the whys and wherefores the job was done. We then went back to our day jobs and didn't really have time or energy to follow it up.
I see all the frameworks, methodologies, processes, techniques and theories enthusiastically evangelized (ours included) and I want to shout "a badly implemented Silver Bullet will still be crap"
What do I mean by badly implemented? Taking LCOD as an example we can separate the techniques of implementation (what we place on the block diagram) and the hard work of design. So we could quite easily create a god component that does everything we want our program to do, but this is obviously a bad design decision. With LVOOP we could similarly have god objects and I expect we could have a god actor (Morgan Freeman).
Someone said the following to me at the CLA summit: "OOP is the best way to write software!" It elicited the following slightly snarky response... "Define BEST". Statements like that devalue all the great software written that isn't OOP and don't really add anything to the conversation. OOP is a great and valuable tool to be added to your toolbox and used where appropriate. You do not become a great carpenter by using great tools; they help, but it's not the full story.
And with all our best intentions you can put the silver bullet in a dirty weapon and have it back-fire on you (as Fab put it). Now I have some bad news for buyers of self-help books, I think good software design comes with hard work, study and experience, and pretty hard won experience at that! Also good software design does not depend on the methodology or process you favor, it depends on using a technique appropriate to the requirements.
"Rules, guidelines, and principles are gems of distilled experience that should be studied and respected. But they’re never a substitute for thinking critically about your work."
I found this quote from a spat between Joel Spolsky and Uncle Bob Martin, two people who are very loud in the software design community and have very differing views on Test Driven Development. Both can't be right, but their opinions are espoused as absolute truth.
And here is an uncomfortable truth: you won't create decent designers by teaching techniques, offering frameworks and imposing rules. Good designers understand their tools, understand design and can assess what they have been told and reject what doesn't work for them. Also they generally get better with each job they do!
Software is a practical subject: the best way to learn how to design is to design stuff. (<---Mr Mercer has made me re-evaluate this statement!)
Software is a practical subject: the best way to learn how to design is to design stuff, while being taught how to design stuff. (<-- this is how I learnt and it served me pretty well, but I needed the experience of writing lots of code prior to being taught to really hammer home the lessons)
As one of the loudest people in our little community I have one message for you all to take away
(actually it was really an attack on mindlessly following acronyms)
and like most things it's not quite as black and white as I originally put it (time, conversation and thought have added colour and shade).
The conversation on this subject has mostly been with AristosQueue and I think we have stumbled on a point of convergence (I'm only speaking for myself here). Link to one part of the conversation is here
The breakthrough statement is as follows.
You need different design methods for different stages in a project.
To back up my theory I've split a project into 3 stages
In reverse order, let's look at the Low Level API/Drivers part: this would be hardware drivers, reporting, data handling and communications.
Next we come to the framework; there are plenty available (Actor Framework, Delacor, JKI, James Powell's, TLB, Aloha, to name a few). This should handle all the common tasks associated with most applications.
Finally we have the Customer Facing/Final Application software: this is the bit that brings together all of the other parts and makes you money.
Here's a table of important considerations for good software design. And as an exercise I've selected 5 of the most important for each stage.
A bit of Acronym/Anagram fun gives us
Low Level API/Driver = STORE
Framework = FRERC
Final Application = FRUMF
It's important to note that the considerations I picked are the ones that are important to me, my company and my customers. They may not be important to you, so pick your bloody own!
So if the requirements at each stage vary so much it stands to reason that we should apply different design tactics, even different designers.
There is one extra consideration that I would like to put in for frameworks (with thanks to Thoric) and that's Unobtrusive. I want a framework to be visible but not get in the way. I would like to see what it is doing, but I don't want it masking the solution to the problem I'm solving.
The other aspect to think about is the size of each section: if you only ever tackle one type of job you may find that the customer facing part is small compared to a big ol' framework. In short, they vary in size.
More questions than answers as usual, but it's food for thought.
And to quote Leslie Lamport for the second time this week "Thinking does not guarantee that you will not make mistakes. But not thinking guarantees that you will."
I'm such an adult! but the title made me laugh so it stays.
AristosQueue presented "Various Tangents Around The Circle of Design Patterns" and as usual it was a well thought out, extremely well presented and original piece of work. Sadly it wasn't recorded so you will have to take my word for it.
Now I've said all these nice things I'm going to discuss one of the ideas presented and my starting position is that I'm not entirely sure a component as defined in LCOD is actually a singleton pattern at all....
I have a very early edition of the GoF book and I was actually pretty keen on patterns when I was researching our book. Sadly my edition of Design Patterns is in almost perfect condition. This tells me only one thing... I didn't find the book very useful or applicable to LabVIEW, and this is where AQ's talk of idioms comes into play I think. My opinion is that the patterns described are idiomatic to C++/Java and they help deal with a lot of the heartache and nause related to those languages. I strongly believe LabVIEW has its own idioms.
The Singleton Pattern Defined (by wikipedia)
In software engineering, the singleton pattern is a design pattern that restricts the instantiation of a class to one object. This is useful when exactly one object is needed to coordinate actions across the system. The concept is sometimes generalized to systems that operate more efficiently when only one object exists, or that restrict the instantiation to a certain number of objects. The term comes from the mathematical concept of a singleton.
There is criticism of the use of the singleton pattern, as some consider it an anti-pattern, judging that it is overused, introduces unnecessary restrictions in situations where a sole instance of a class is not actually required, and introduces global state into an application.
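For readers who haven't met the pattern outside LabVIEW, here is a minimal text-language sketch (Python, purely my illustration, not from the article) of what "restricts the instantiation of a class to one object" means in practice:

```python
# Minimal singleton sketch: construction always yields the same object.
class ScopeManager:
    _instance = None  # the single shared instance, created lazily

    def __new__(cls):
        # If no instance exists yet, create one; otherwise hand back the original.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = ScopeManager()
b = ScopeManager()
assert a is b  # both "constructions" return the very same object
```

Note how this bakes a global-state decision into the class itself, which is exactly what the critics quoted above object to.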
For us, a component/action engine is there as a high-level construct to restrict access to related modules in a program. This is a design construct aimed purely at simplifying the block diagram. So for example here is a component as defined by us: this is an oscilloscope, and the customer requires the ability to have either NI racks or Agilent with a variety of oscilloscopes.
Note: I'm still wrestling with the Acqiris Cards, they will hopefully be an AcqirisScope subclass as per the NIScope class.
As we can see there is little relationship to a singleton pattern here, rather the component is a restricted interface to a group of related objects. So what is the point of this? I think this is where AQ and I are converging (this is not a static situation, because we are analyzing designs at greater depth, publishing our ideas and discussing them).
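To make "a restricted interface to a group of related objects" concrete, here is a rough text-language analogue (Python; the `NIScope`/`AgilentScope` class names are illustrative stand-ins borrowed from the oscilloscope example, not real driver code):

```python
# Sketch: a component as a restricted interface over a family of related objects.
# Callers only ever see ScopeComponent; the concrete scope classes stay hidden.

class NIScope:
    def read_waveform(self):
        return "NI waveform"

class AgilentScope:
    def read_waveform(self):
        return "Agilent waveform"

class ScopeComponent:
    """The single public face of the oscilloscope subsystem."""
    _drivers = {"NI": NIScope, "Agilent": AgilentScope}

    def __init__(self, vendor):
        # Vendor selection happens inside the component, not at the call site.
        self._scope = self._drivers[vendor]()

    def acquire(self):
        return self._scope.read_waveform()

print(ScopeComponent("Agilent").acquire())
```

Nothing here restricts the component to one instance; it simply narrows what the rest of the program can touch, which is the distinction being drawn from the singleton pattern.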
I will talk about this convergence in greater depth in my next article; for a preview, sneak a peek at the conversation here
At the European CLA 2015 summit Sacha Emery presented "Actor and benefits of different frameworks", where he described his journey through various design choices when applied to a standard set of requirements. This segued nicely into various topics that were being discussed throughout the summit, and I also selfishly hijacked it to promote one of my interests.
I'm after some way of measuring the complexity of a system; ideally I would like this to be something tangible, like function points.
Internal logical files
External interface files
The complex problem is how to identify these items within a LabVIEW Block Diagram (I haven't given up on this)
One of the advantages of function points is that you can get them from requirements and use them for estimation. My primary interest is not estimation tho'; I want a common measure of the complexity of a project so we can better assess and discuss our design choices. The disadvantage is that it is hard work and takes a lot of time to undertake a study.
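For context, a conventional unadjusted function point count is just a weighted sum over the component types (internal logical files and external interface files being two of them). A sketch using the standard IFPUG average-complexity weights (the weights and counts below are my addition for illustration, not from the article):

```python
# Unadjusted function point count: a weighted sum of component counts.
# Weights are the standard IFPUG "average complexity" values.
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_fp(counts):
    """Sum each component count multiplied by its weight."""
    return sum(AVG_WEIGHTS[kind] * n for kind, n in counts.items())

# Made-up example counts for a small project:
counts = {"external_inputs": 3, "external_outputs": 2,
          "external_inquiries": 1, "internal_logical_files": 2,
          "external_interface_files": 1}
print(unadjusted_fp(counts))  # 3*4 + 2*5 + 1*4 + 2*10 + 1*7 = 53
```

The counting itself is trivial; as the article says, the hard and slow part is identifying those items on a LabVIEW block diagram in the first place.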
So how does Sacha's presentation link to this? Like all good test engineers, I like my processes to be calibrated, and I think this might be the perfect tool to do it. Here's how we think it will work (and I say we because this is based on conversations with Jack Dunaway, Sacha and various other luminaries).
We have a common set of requirements.
We apply a framework/design/architecture to provide a solution.
We tag all the project items that are original items.
We analyze all the items that are in the problem domain (i.e. separate the solution from the architecture).
We apply a change/addition in requirements.
We analyze the additional items that have been generated to satisfy the change.
This should give us several numbers
Items in Basic Architecture (A)
Items to Solve the Problem (B)
Items to Make an Addition (C)*
And this is what we call the Emery scale.
The preconception is that generally the code to solve the problem should be pretty repeatable/similar.
So, dumping Function Points for the time being and going back to a simpler idea (care of Mr Dunaway), we should tag the architectural parts and count the nodes using block diagram scripting. As Chris Relf has stated, there is some good data to be had from LOC (lines of code) comparisons in similar domains.
And what use is all this work? For one, it should give us an estimate of the number of VIs our design will have on a given target (the ratio of B:C(#requirements) should give us a size). This is an essential bit of design information.
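The A/B/C bookkeeping above is simple arithmetic once you have the counts. A sketch (entirely my own illustration, with invented numbers and hypothetical metric names) of the sort of ratios it yields:

```python
# Sketch of the proposed metrics: A = items in the basic architecture,
# B = items to solve the problem, C = items added to satisfy one change.
# All numbers below are invented for illustration.

def emery_metrics(a, b, c, n_requirements):
    return {
        # Share of the project that is framework rather than problem code.
        "architecture_overhead": a / (a + b),
        # Rough "size per requirement", usable for estimating future work.
        "items_per_requirement": b / n_requirements,
        # Items one requirements change generated.
        "change_cost": c,
    }

m = emery_metrics(a=120, b=300, c=15, n_requirements=30)
print(m["items_per_requirement"])  # 300 / 30 = 10.0 items per requirement
```

With a common set of requirements, comparing these numbers across frameworks is what would make the baselining exercise a calibration tool rather than an opinion contest.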
Early stages and a long term project. It could be part of my CS thesis if I was doing one...
This is only one potential benefit of the kind of baselining that Sacha has started (from an educational perspective, seeing the same problem solved with different frameworks will be invaluable).
*A late addition to give us some concept of how a change affects the project