I'm in full contrarian mode now, but rather than just dismissing the arguments out of hand, think about your own experiences; you may find I'm not so contrary after all.
The caveat on this discussion is that at SSDC we do not repeat many projects; stepping away from the test arena has made our world a more varied place, less friendly to re-use.
Way back in article 28 I trashed the traditional way re-use is taught and advertised in the LabVIEW world.
Our technique of keeping everything in the project works very nicely for us over all of our 100+ active projects. Issues are extremely rare. But our evolved practice still puts us at odds with traditional programming best practice in one area.
Looking at the traditional view of reuse, to clarify, I'm speaking about planned functional reuse here. Planned functional reuse differs from ad-hoc (opportunistic) reuse in the level of effort teams expend to manage the reuse library.
"Planned reuse — This involves software developed with reuse as a goal during its development. Developers spent extra effort to ensure it would be reusable within the intended domains. The additional cost during development may have been significant but the resulting product can achieve dramatic savings over a number of projects. The SEER-SEM estimation model shows that the additional costs of building software designed for reuse can be up to 63 percent more than building with no consideration for reusability." D Galorath
Computer Science books tend to say that good practice is to manage a reuse library of highly reusable components with your team; these are tested, documented and approved. We also bend our architectures into a shape that will allow us to plug in reusable components. Finally, we have to maintain this library across new versions of the software, operating system, hardware etc etc.
As a consumer of the reuse library we have to trust it will work in a new environment, giving 100% functionality (or allowing us to extend functionality).
So how many programmers have I met that enjoy maintaining a reuse library and all the effort that entails? Let me count them on the fingers of zero hands!
All of these measures of efficiency have been focused on software languages where it takes considerable effort to add functionality. How does working in an environment like LabVIEW change this dynamic?
Thing is, LabVIEW is rapid: I can hand-crank a driver in very little time, and I know it is 100% of what I need (and no more). Because it is stripped down, I'm not lugging a vast weight of unnecessary functionality with me. Multiply this by 20 pieces of hardware and it's probably a significant size improvement, which is a bonus.
The argument against this point of view is fairly strong: a reused component should be better tested, more stable, better understood etc etc. Well, this is possibly true if you work in an environment where you are churning out variants of the same project. My response would be: if this is the case, branch and adapt the entire project.
In the past I really bought into the whole reuse thing, investing a significant effort in pulling out useful stuff and maintaining a repository. I don't even know where it is any more, it's all a bit sad. I wonder how many man-hours have been wasted on reuse libraries full of old code for obsolete hardware on out-of-date versions of LabVIEW.
But sure Stevey, a company as awesome as SSDC must reuse loads of stuff, is a question I've never been asked. If I were, I would say YES, we reuse entire project templates, documentation and methodology-specific templates. Where we have identified projects that will have possible repeats we generate a project template. I view this as architectural reuse rather than functional reuse, and it's an amazing time-saver, standard enforcer and competitive advantage. Pretty much everything else is ad-hoc and unmanaged.
ooooooh controversial, hope I've rocked your worlds.
Lots of Love
"Code reuse results in dependency on the component being reused. Rob Pike opined that "A little copying is better than a little dependency". When he joined Google, the company was putting heavy emphasis on code reuse. He believes that Google's codebase still suffers from the results of that former policy in terms of compilation speed and maintainability." (Wikipedia)
I had a delightful time in Texas, then back for a week of work (mostly fitting stuff to ships) and then off for a week's vacation. My brain really needed a rest!
Saturday was a trip to Dallas with Jonny and much amusement was had, starting off with a top-notch breakfast at Ellen's Southern Kitchen, highly recommended!
At the Gas Monkey Bar and Grill I ordered the vegetarian choice of Stuffed Peppers, only to find out they were stuffed with pulled pork!
Sunday I was the guest of the Smiths and had a grand time looking round the hill country and caves. My thanks for their time, it was a good time!
Coming from the UK I love big horizons and here we see the Kitty-Litter Castle in the hill country above Austin.
And then the NIWeek stuff started on Sunday night with drinks at the Ginger Man: great staff, beer and rubbish dartboards. It is here I discovered that great cider is made in Texas, namely Austin Eastciders Original Dry Cider; after discovering this, the week was a bit of a blur.
I'll admit this now, I deliberately limited myself; being an introvert I find crowds of people very trying. The good thing about the Austin Convention Center is that it is so enormous you can always find some alone time. If I didn't get to a session I wanted, it was because I was lost, talking, in a meeting or resting my brain. Also we stayed in the Hilton, which was very handy for ducking out for a breather.
Monday was the Alliance Day and, after the trauma of the keynote (I DON'T DO GROUP ACTIVITIES!!!!!!!!), it passed by nicely. Various kick-off drinks etc happened in the evening and people kept giving me name stickers to fill in; even approaching my fiftieth year I find it difficult to write my name sensibly on a label. So if you met a loud idiot with MEGATRON on his label, I'm afraid that was me. I got to sit at a table with Darren and Stephen, and finally met Christina; I think I enjoyed your company more than you enjoyed mine.
Tuesday was all business, lots of meetings and I think they went OK. Thanks to all for organizing them. Then I wandered down to the LAVA BBQ via the Mohawk (a bar I like very much, old punks seem welcome there).
Wednesday My presentations. Sadly TS9044 - Shock Test Using Multiple Synchronized Racks had stiff competition from Jeff Kodosky and Stephen Mercer in the room next door, so there was plenty of spare seating in this one. There were a lot of NI people in attendance though, and that was perfect: they were actually my target audience (more on this in my next article).
In the afternoon came ISO 9000 and LabVIEW - TS9456, and this benefited from being a Jeff Kodosky Top Pick, a fact I was unaware of until after the event. Honored!
An honor I shared with TS9446 - Project Templates: Making the Most of Code Reuse by Becky Linton, which I am waiting impatiently for on video.
This vindicates what feels like a somewhat uphill struggle: talking process in the LabVIEW community versus talking technique. Another plus point is that interest in code reviews seems to be increasing; the session on this was extremely well subscribed.
Anyhow here's my presentation.
Q1 32:43 - Dmitry - Is it all internally driven or do you integrate customer processes too?
Q2a 34:07 - Fab - As a relaxed person, how do you make everyone agree on a process and then stick to it?
Q2b 35:10 - Fab - When you get audited you can feel that you have to have very time-consuming processes; this appears to be more minimal.
Q2c 35:45 - Fab - Is it true that we sailed through our accreditation with nothing but a "well done chaps"? Thanks Fab.
Q3 36:08 - Does this process apply to everything, internal and external?
Q4 37:48 - Did the ISO9000 accreditation change your process, or did you get some added value by doing this work?
Q5 41:20 - How many times are you audited?
Q6 42:55 - Merging - how does your process handle merging?
Q7 44:10 - Michael (I think) - Can you elaborate on the sub-contract model?
Hope it is of use. I've had some nice feedback on it.
The best thing about presenting at NIWeek is the quality of the attendees. I had attendees from the USAF, Boeing, Lockheed Martin, a lot of military research, SpaceX, and researchers from various countries. As a platform for putting yourself in front of quality companies it's second to none.
I missed a lot of the other sessions because I was either nervous about presenting or really relieved at having presented. Although I did go to TS9723 - DQMH: Decisions Behind the Design and witnessed some ugly trolling behaviour from people who were not there to learn, but to criticize. My response would have been very much less professional than the presenters', something along the lines of "Fxxx off and get a life". If you don't want to use it, don't use it. Sorry, it's the first time I've run directly into this type of behaviour and I feel protective of people who share code and ideas.
Thursday Seriously relaxed now, so I went to see TS9725 - Understanding Test System Performance and in the evening went to Pete's Duelling Piano Bar with a group led by the wonderful Jeremy Marquis, whose Where is Jeremy now? Twitter feed thoughtfully ensures you need never be alone at NIWeek. His lovely wife Rozann also ensures that no idea is ever left unlistened to. I love live music and thought I might hate the experience of duelling pianos, but it was amazing! At midnight I found myself sat in a leather armchair listening to industrial dance music and drinking tequila. Austin is fun!
Friday flight home. Club World is very nice, thanks for the upgrade!
Now I just need a Return on Investment to justify more trips ($6000 to get back)
It's been 16 years since I last went to NIWeek with my buddy Jon Conway (gobshite) and I intend to drink considerably less this time round!
We were just starting out as SSDC (Jon had started the company a few years earlier) and now we're older and wiser. We honestly couldn't be any dumber than the 2000 version of us, but we probably laugh less now. Ah, the heartache of growing old.....
The itinerary is as follows...
Fly in on Friday 29th
Saturday Hire big truck and drive to Dallas early in the morning (we'll be away at stupid O'clock anyways)
Sunday Family fun
Monday Alliance Day stuff
Tuesday LAVA BBQ, I'll be good because....
Wednesday - presenting x2 Details below (I'll be badly behaved on Wednesday night as post-presentation relief kicks in)
Thursday Morning - HOL9108 - Code Review Best Practices <-- very enthused about this one
Friday Afternoon Fly Home
There's a couple of meetings still to organize and, as you can see, I have not listed a lot of other presentations I want to go to; I'm very behind with that part of my organization!
I've been twittering on about this project for a while now, so come to this if you're interested in measuring explosions. There are also interesting techniques on show and I'm pleased with the code too. For me the most interesting thing about the presentation is how it explores a broken software/hardware distribution model and the advantages that releasing source-code for free may have. I suspect there is some real potential to dislodge incumbent suppliers with this model. All it takes is a bit of trust......
Wednesday afternoon it's off to room 15, where all the cool kids hang out (I'll be there too, bringing down the cool average). Here we have the Advanced Users Track and you can get your fix of technical stuff. I'll be presenting TS9456: ISO 9000 and LabVIEW. Specifically it will be looking at ISO9001:2008 and how we have developed a toolset to provide seamless and helpful processes. It will be crazy knockabout fun.
The supplied links will allow you to post questions (or, if you are a Twitter user, abuse).
One thing I won't be presenting is TS10989: Use by Exposure: From myDAQ to ATE. By bizarre coincidence there are two Brits presenting at NIWeek called Steve Watts. This one is a lecturer at Cardiff University and I expect his presentation will be way more professional than mine! I'm going along to put my name to someone else's face!
And just to make other people's presentations weird, keep the following in mind.
Now we have entered a brave new world of stupid. Visas may need to be applied for to read this, especially if you are from the EU.
NIWeek is careering towards us at almost unnatural speed and I have got my presentations pretty much sorted. If you want to see a nervous cockney, speaking too fast and sometimes swearing here's my agenda.
I'll write about them afterwards.
For the last year or so I have been blathering on about nuance and trying to temper some of the discussions around and about software with a few more layers. On that note I'm going to make a statement and then pick it apart.
"A good software designer will produce good software regardless of language, process or methodology"
Deconstructing the argument from the above statement might therefore beg the question: if that is the case, what use is language, process or methodology in the task of creating software? Simply put, they are not essential, but they help. Also, not everyone is a good software engineer; statistics dictate that 49.9% of us are below average. I've mentioned in Agile - Tail wagging the Dog(ma) that there can be considered to be 5 levels of software developer.
Level description - LabVIEW equivalent
These will take a project backwards and should be escorted from the building (and they do exist!)
Trainees - below CLAD, start of career.
CLAD/CLD - interested and capable
CLD/CLA - capable of creating nice code, can manage simple projects, probably lacking in experience.
CLA - years of experience, educated and capable of adapting to change because they understand the implications.
To be able to thrive in a changing and challenging environment you will need at least a 2 and probably a 3. Quite a large proportion of LabVIEW jobs sit in this area and quite a few LabVIEW programmers are expected to cope on their own. This struggle will lead to the following questions and mistaken conclusions.
I'm struggling therefore there must be a problem with the language.
Most programmers have a language they like using; very few good programmers will be productive in one language and unproductive in another. They may be comparatively less productive, but they will still be productive.
I'm struggling therefore there must be a problem with the process.
In the wider software world, if you're not using some Agile variant you are a sub-human idiot. Thing is, a project has different stages; some really benefit from Agile, some don't. Consider the graph below.
Agile projects work well when there are a lot of requirements arriving over time, and for most of our projects the following seems fairly accurate. Initially we're getting a lot of requirements, up until the prototype/design review. Then it settles down while we do the architectural work and preparatory stuff required to cope with the avalanche of requirements generated when the customer finally starts to seriously think about and use the software. For us this is the Beta stage and we like to hit it as early as possible.
True agility comes from being able to adapt to changing situations throughout the project lifecycle.
I'm struggling therefore there must be a problem with the methodology.
Here the level 3 programmer will pick the methodology that is most appropriate for the part of the project being worked upon, the personal preferences of the team, the use cases of the customer and the limitations of the hardware. Some methods in LabVIEW are better for APIs, some are better for top-level etc etc.
The only thing I would add to the methodology discussion is that the best one is one that is shared by your team (and customer if that's your bag). There are real advantages to sharing, that's what my Mum always taught me.
And that my friends is the last time I bang this drum, I just wanted to get all my thoughts together in one place.
In 24 days I will be in Austin, if you see me stop me and say hello.
Good conversation starters revolve around comics, art, music, nature, dogs, jokes, funny videos, making things (anything), history. I'll probably be either awkward or amusing depending on my mood.
While scrubbing up my NIWeek demos this annoyance raised its head again.
This is an OS issue and not LabVIEW, but the solution may be out there already.
I press the filepath button and use the express filedialog function to throw up a dialog (the browse button does the same thing). It appears behind the calling Front Panel, causing all sorts of mayhem associated with hidden modal boxes.
As I understand it, the root of the issue is to do with a 32-bit program (LabVIEW) calling a 64-bit DLL function (filedialog); this modal function doesn't know who called it, so doesn't know what to sit in front of. An Alt-Tab sorts it out, but it still doesn't meet requirement zero for me.
SSDC has its first certified scrum-master (stand up Jon Conway), so I thought it might be a good idea to get our teeth into Agile and give it a shake. I'm not going to describe any of it; go get the books or check out Wikipedia.
Here also is some more material that's well worth a look.
In typical style I am not completely sold, some of the language almost completely puts me off. It feels monetised and corporate. I can't stand company mission statements and it feels a bit like that. It's as if by talking the talk we don't have to knuckle down and do the hard work.
I've said before that I think LabVIEW is the original Agile language, I also think that our way of working is a reasonable compromise. It's not dogmatically Agile, it's not completely plan driven. Consider the following image based on the rather excellent (and short) book Balancing Agility and Discipline.
From this we can see that where the considerations are nearer the centre it favours Agile methods, whereas when we start moving out a more plan driven approach is needed.
Note: on Personnel, for Level 1 think CLAD/CLD, for Level 2 think CLA, for Level 3 think CLA with a lot of project experience. The Boehm/Turner book classifies Level 3 as a very rare resource; it also splits 1a and 1b, and includes -1 as a level.
I would add that as you travel through a project there are aspects that would benefit from being more plan-driven and other times where an Agile approach is definitely the way to go. For most of our projects we go in hard with prototyping, design reviews and iterate until we're happy we have collected a sufficient amount of requirements (the requirements shift is high at this point). Then the customer interaction naturally reduces as we knuckle down and do the hard architectural work. At this point in the project the requirements shift will be low. The next stage is commonly the Beta stage and we tend to try and hit this stage as early as possible, this is because the requirements shift begins to increase as the users actually start using the system. The effort you have put in prior to this creating a decent flexible, configurable back-end will now pay off coping with this ramp-up in requirements feedback. This is pretty predictable for a majority of our projects.
I've left quite a lot unsaid as it's a large topic and I know that Agile has vastly improved project delivery in many industries, but that's not comparing like for like. I would bet that very few LabVIEW projects are built using large teams where a customer does not see anything working for years. This is the environment that Agile has thrived in.
In short I like the adjective "agile" when used to describe a nimble way to run projects. I dislike the noun "Agile" where it is a commercial commodity to be sold as a self-help silver-bullet.
A discussion about the poor sods who get thrown onto complex projects undercooked and what can be done to educate project managers about the value of employing experience. I've witnessed this a few times this year.
Open up the design of the API for the ODS tool. It's working rather nicely in my application, but I know it's not sorted yet.
Or alternatively I could leave the ODS API half-cooked and I'm not in the mood to grizzle. So instead I'm going to talk about ODTs
ODTs are the output of Open Document word processors, including newer versions of Microsoft Word, OpenOffice Writer and LibreOffice Writer.
Before I plough in and do the hard work I thought it might be good to request some input from other users.
Here's one use-case that would be marvellous for me.
Here I have my bug reporting tool, when I press commit I would like it to generate an issue report document, with all the details filled in for me. This would be way better than my current "Issue Saved" dialog confirmation.
Using a similar model to the one I employed for the spreadsheet I could load a template and update it.
Luckily both Word and OpenOffice offer bookmarking facilities. We could organise our template with various bookmarks and then offer methods to insert things at the bookmark point. In ODS I have fathomed tables; images are just links and the image files go into a directory called ..\Configurations2\images\Bitmaps; text is a direct replacement.
Main page data goes in content.xml
Header and Footer info goes into styles.xml so I would need to search that for bookmarks too.
The bookmark tags come in two flavours.
<text:bookmark text:name="headerbookmark"/> <--- this is just a single place point and your insertion goes either at the end or the beginning of this tag.
<text:bookmark-start text:name="my2ndbookmark"/>ddd<text:bookmark-end text:name="my2ndbookmark"/> <--- this is a range of text that should be replaced by whatever is inserted.
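To make the two flavours concrete, here's a minimal Python sketch of handling both kinds of bookmark with plain string and regex manipulation (not LabVIEW, obviously; the function name and sample XML snippet are mine, and a real tool would parse the XML properly rather than pattern-match):

```python
import re

def insert_at_bookmark(xml: str, name: str, new_text: str) -> str:
    """Insert new_text at an ODT bookmark in a content.xml string."""
    # Flavour 1: a single place point -- keep the tag, drop the text after it.
    point = f'<text:bookmark text:name="{name}"/>'
    if point in xml:
        return xml.replace(point, point + new_text)
    # Flavour 2: a start/end range -- replace whatever sits between the tags.
    pattern = (f'(<text:bookmark-start text:name="{name}"/>)'
               f'.*?'
               f'(<text:bookmark-end text:name="{name}"/>)')
    return re.sub(pattern,
                  lambda m: m.group(1) + new_text + m.group(2),
                  xml, flags=re.DOTALL)

sample = ('<text:bookmark text:name="headerbookmark"/>'
          '<text:bookmark-start text:name="my2ndbookmark"/>ddd'
          '<text:bookmark-end text:name="my2ndbookmark"/>')
out = insert_at_bookmark(sample, "my2ndbookmark", "hello")
```

The range flavour discards the placeholder text ("ddd" above) and keeps the bookmark tags, so the same template can be filled in again.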
The only additional facility that I can think that will be useful is the ability to duplicate pages.
Here's the updated (version D1:01) example program screen
This demonstrates both ODS and ODT generation.
Here's another instance where I could use this type of functionality.
I generate a new document number for a project using this little utility, it would be splendid if it actually generated the document all numbered and dated.
Are there any other use-cases that would be handy?
Leave words of wisdom in the comments please.
04-May-2016 added ODF toolkit; this purely replaces/inserts text into a template and saves the template. As a bonus there's a bit that then converts the ODT file to a PDF if you have LibreOffice (probably works with OpenOffice too). The created file works with Word too. Questions and use cases into the comments please. ODTExample.vi shows usage. Video to follow...
06-May-2016 Made the text entered XML safe (as far as I know) for ODS and ODT
19-05-2016 D1:01 added version numbering and prettied up the example, tested with provided templates and works OK. Used LibreOffice 5 for the pdf printing
08-06-2016 D1:02 ODTGetBookmarks removed table of contents "RefHeadings" bookmarks LV2014
Way back when, I started a project to make an Open Document Tool; some words can be found here.
It came grinding to a halt due to memory leaks in the xpath tool used for navigating content.xml, and the pressure of work.
Well .... I've only gone and cracked it.
So attached to this article is a project with the following example.
And the block diagram looks like this.
And what it does is
Opens a template spreadsheet (to get all the formatting, preloaded data etc etc).
Returns Sheet names
Returns Contents for a selected sheet
Stores the Spreadsheet
There's also a couple of methods for replacing and inserting sheets. This is restricted to strings at the mo', but there's no reason we can't add formatting/polymorphism.
This should be all we need to create spreadsheets from LabVIEW (obviously we can be adding more methods).
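For anyone curious what's going on under the hood, an ODS file is just a zip archive wrapping a content.xml. Here's a rough Python sketch of the "return sheet names" step (the function name is mine, and a real ODS also carries a mimetype, styles and more; I build a tiny stand-in archive in memory just so the sketch is self-contained):

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# OpenDocument namespace for spreadsheet table elements.
TABLE_NS = "urn:oasis:names:tc:opendocument:xmlns:table:1.0"

def sheet_names(ods_file) -> list:
    """List sheet names from an ODS file: a zip archive wrapping content.xml."""
    with zipfile.ZipFile(ods_file) as z:
        root = ET.fromstring(z.read("content.xml"))
    return [t.get(f"{{{TABLE_NS}}}name")
            for t in root.iter(f"{{{TABLE_NS}}}table")]

# Build a minimal stand-in ODS in memory (real files hold much more).
content = (f'<doc xmlns:table="{TABLE_NS}">'
           f'<table:table table:name="Sheet1"/>'
           f'<table:table table:name="Results"/></doc>')
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("content.xml", content)
buf.seek(0)
names = sheet_names(buf)
```

The "store" step is the reverse: rewrite content.xml and zip everything back up, which is exactly where the empty-directory quirks mentioned below can bite.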
I would like to add some word processor functionality in the coming months as another class.
It's been tested with LibreOffice and OpenOffice and one converted excel file with the help of Jarobit, but this is a pretty early release so expect surprises.
I've used the OpenG ZLIB tools, but these are embedded in my project and namespace-protected in an LVLIB, so it should sit in a project and mind its own business.
Now I need to sleep.
night night Lovelies
Late night issue fix (always the recipe for awesome software), issues 1 and 2 sorted SW 09-04-2016
12-04-2016 I think I've sussed the Excel issues, so I've included a Template.ods with some data in it too. (There is a minor untidiness with some of the repeated rows of emptiness.)
It's now 4MB because I've added all the ZLIB stuff, my thinking being that we can trim it back down when we're happy.
13-04-2016 disappointing result on the rebuilding of the ODS file, ZLIB is unzipping empty dirs as 0 value files. This is causing me some angst as I'm struggling to get the file correct for both LibreOffice and Excel.
ODS files are slightly simpler when created by Excel, so it all works OK using the template.ods provided <-- Well Well Well it appears there's an issue with ZLIB Read Compressed File__ogtk.vi where it is extracting an empty directory.
15-04-2016 I've tidied the project and improved the example.
New situations become more manageable when existing pattern knowledge can be applied to understanding how things work. Consistency is the key to helping users recognize and apply patterns.
It saves a lot of extra brain-work and the skills are more transferable.
The additional benefits: if there is a vibrant user-base you know that it will be supported, added to, and will have an active eco-system. Within LabVIEW you know its base purpose will be worked on and improved. If you are employing some peculiar edge-case you are risking no improvement and quite possibly deprecation.
Databases for Storing Data
There's a load of ways to store your results, but databases are the only one that's worth mastering. There's a great deal of support online, and MySQL and SQLite have thriving communities with huge amounts of tools and help available. Type MySQL into Google and you'll get over 100 million hits; I think that's enough material for anyone.
SQL for Talking to Databases
It's all very well using a LabVIEW toolkit to plug data directly from a cluster into a table, but if you want a skill you can build on, have a bash at SQL. It will end up as one of the most useful tools in your toolkit. Again, a search on YouTube for SQL Tutorial resulted in over 500k hits.
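If you fancy a quick bash, here's a tiny sketch using Python's built-in sqlite3 (the table and the test data are invented for illustration); the SELECT statement is the transferable bit, since the same query would run essentially unchanged against MySQL or any other SQL database:

```python
import sqlite3

# In-memory database for illustration; table and column names are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (serial TEXT, reading REAL, passed INTEGER)")
con.executemany("INSERT INTO results VALUES (?, ?, ?)",
                [("UUT-001", 4.98, 1),
                 ("UUT-002", 6.20, 0),
                 ("UUT-003", 5.01, 1)])

# Filter and sort in the database, not in your application code.
rows = con.execute(
    "SELECT serial, reading FROM results "
    "WHERE passed = 1 ORDER BY reading").fetchall()
```

Letting the database do the filtering and sorting is the skill that pays off, whatever front-end language you happen to be wiring it up from.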
UDP for Communications
Inter-system communications is a rich area for toolkits, add-on technologies, APIs etc etc. The biggest successes I have had are home-grown low-level protocols just chatting over UDP (either broadcast, direct or a combination of both). It's robust, simple, low level and there are tools out there to sniff it. People get obsessed with security, but all of our systems are on secure networks and security can be added if you need it, whereas it can make debugging and programming difficult when you don't. Google: 58 million hits.
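To show how little ceremony a home-grown UDP protocol needs, here's a minimal loopback round trip in Python (the message format is invented, and the port is OS-assigned):

```python
import socket

# Minimal UDP round trip over loopback.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))        # port 0 lets the OS pick a free port
rx.settimeout(5)                 # don't hang forever if the datagram is lost
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"STATUS:OK", ("127.0.0.1", port))   # fire-and-forget datagram

msg, addr = rx.recvfrom(1024)
tx.close()
rx.close()
```

No connection, no handshake: each sendto is an independent datagram, which is exactly why it's so simple to sniff and debug.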
TCP for Bulk Transfers of Data
One of the issues with UDP is that it is so basic: there's no handshaking and it can't handle large lumps of data. If you need this capability then you can use the TCP API. It's only marginally more difficult to use; we use UDP for inter-system messaging and TCP for transferring data.
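The TCP equivalent, again a loopback sketch with invented data, shows the extra pieces you get: a connect/accept handshake, and ordered delivery of a payload far bigger than any sensible datagram:

```python
import socket

# Loopback sketch of a bulk transfer.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
conn, _ = server.accept()

payload = b"x" * 50_000          # far bigger than a sensible UDP datagram
client.sendall(payload)          # sendall loops until every byte is queued
client.close()                   # closing signals end-of-stream to the peer

# TCP is a byte stream, not discrete messages: read until the peer closes.
chunks = []
while True:
    data = conn.recv(4096)
    if not data:
        break
    chunks.append(data)
received = b"".join(chunks)
conn.close()
server.close()
```

The receive loop is the "marginally more difficult" part: TCP hands you a byte stream, so your protocol has to decide where one message ends and the next begins (here, the close signals end-of-data).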
OPC for Scada
In industry pretty much everyone uses OPC for parking centralised data in distributed systems, so it is beneficial for us just to do the same. A write-tag-to-OPC VI is not such a difficult abstraction to get my head around that I need to use a Network Shared Variable instead. 33 million hits on Google for OPC.
ODF for Documents
One day I'll find time to finish my ODF toolkit and when I have done that we should use ODF for all documents, spreadsheets and reporting.
Event Structures for Events
Events are really handy for passing data about, they have a great attribute in that they are strongly typed and yet still I have reservations (Chris Roebuck gives an extremely sound argument for the counter view and writes some lovely code based on this). For me Event Structures are for events, any other use is against the standard and therefore an additional burden.
Queues for Queuing
Back in the days of yore (BQ - Before Queues), and before Mr Mercer provided the wonderful queue API in LabVIEW, we rolled our own. As we know, we're now in the AQ period of history.
That was circa 2000 and we still have something similar in today's code.
This is how much we love Queues!
Globals for Global Data
As I've discussed many times, global data needs viewing with suspicion; however, if you need it you'll find the most efficient and easy-to-understand method is just to use a global.
Feedback Nodes for Local Permanent Data Storage
I know they're a bit peculiar, named wrongly, and have a weird interface!
But compared to the previous technique of using shift registers in a while-loop (i.e. an accidental side-effect), I think they have a much clearer purpose.
So my friends, the takeaway from this article is that it is better to use a technique or tool that is standard and popular, even if it is not the "best".
I'm pretty sure I did a presentation on this some time ago; sadly I seem to have lost it, so I can't just regurgitate it here!!
A few years back there was a marketing campaign based around how learning LabVIEW could lead you into different skill areas and work. Normally these fly right over my head, but this one made an impact.
So this article will be pretty self-indulgent, but the point is that the link between LabVIEW and the hardware it can drive can push your career into new and interesting areas.
Below is a slide from my presentation on immediacy, it maps the way SSDC has changed from a mainly Test based company to much more of a Distributed Machine control company.
All of our experience has stemmed from designing test and production equipment from the factory floor (to those of you that have never seen a factory, they are what industrial estates used to be full of back in the olden days).
A few years back we saw the test market becoming increasingly commoditised and so decided to look for work in other areas (a decision also based on the fact we were bored with designing test systems). RT came along and we embraced it for control work, both in Factory Control and Machine Control.
The first factory control job was for McLaren, where we designed the composite moulding plant for the body and chassis (world's first!) for this beauty!
Another job, in 2005, was focusing a Blu-ray disc mastering system to 40nm on a glass platter spinning at 5000rpm. Pretty damn challenging, but excellent fun.
Databases have always been central to our skill-set and we found ourselves doing Laboratory Management Systems, these are essentially large UI applications, connected to a database with lots of screens to manage reporting, scheduling etc etc.
If RT was good for us, cRIO was brilliant!!
The weird thing was that we weren't using it where we expected to. A large percentage of our work was actually using the FPGA to hack and convert old systems' communications. The ability to do this fast (1 load/unload of the buffer) has been really useful. The best thing about this type of work is that our customers have never heard of NI or any of their technologies. It's fantastic to break an entirely new market.
So this week we will be fitting a system on this ship.
It's a proper engineering geekfest. The next picture is the ship balanced on its keel in the drydock!!
This job came from Adrian cold-calling the ferry company and telling them that we could hack the ship's monitoring system. Kudos and employee of the year to him, as there are a lot of ships bobbing about that need this work!
So who knows where we will be in 10 years time, all we know now is that the skills that currently pay the bills are....
Databases (MySQL, SQLite)
Communications - Serial, CAN, UDP, TCP
So looking at the original slide, a large percentage of our work is now in the bottom half of the triangle and who knows what will be added to the bottom.
I'm often asked "How do you keep your project so shiny, clean and smelling so minty fresh?"
Well first let's have a look at a clean project
This is our General Application template and you'll notice that the main VI (GenAppMain.vi) sits atop the hierarchy, and that's because he is the king and everything should be relative to him. This is how LabVIEW likes things organised; it will always look below itself in the hierarchy first as standard behaviour.
You'll also notice that we use auto-populating folders showing the entire hierarchy; the way we structure our projects lends itself to having no virtual folders (IMO they're an abstraction too far).
DLLs, the config database file and startup VIs are also at this top level. All of this stops LabVIEW going searching when you move the project around. Keeping everything at this level also helps when you build an executable: it's much easier just to search in your own directory.
Like unit testing? Well, look in the TestVIs directory (we are trying to use TestStand if we need to do unit testing; this is quite new to us, as we far prefer lots of functional testing).
Next we need to make sure there are no conflicts. Click on the exclamation mark and solve the problems.
What we are aiming for is a portable project. Ideally I want to dump my project directory on a new computer and have it work. At the very least I want to be able to move the project around on the machine I'm developing on.
All this preparatory work is to allow us to upload the project into our repository (SVN). If the project is portable we can then download it onto any SSDC machine, or even onto customers' machines.
We take an uncompromising view on dependencies; in truth, anything outside of SCC (Source Code Control) is something that can change how your software works in unpredictable ways. See here for a discussion on this. With this in mind we can see that the Dependencies section is pretty light, and we aim to keep it that way.
Over time scrappy bits of old project can get caught in the gaps, and these need cleaning, because they are indicative of a lack of control and understanding. One common area that needs regular flossing is the Dependencies.
In this example we can see that although there are no conflicts, there is still some ugliness in the Dependencies section.
Cleaning these out is fairly straightforward: you just need to find the unused VI in the project that is referring to these dependencies. But if you use dynamic VIs you should employ the following trick first.
Sticking any dynamic VIs in a non-called case of the main calling VI will not affect the running of the VI (LabVIEW will compile them out), but it will allow the IDE to tell you if they are broken. It will also allow LabVIEW searches to ascertain whether the searched-for VI is in the hierarchy.
Now you are ready to floss....
Right-click on the offending dependency and select Find>>Callers. Because the main calling VI is not broken and all dynamic VIs are on the main calling VI's block diagram, we can be pretty sure that there is a broken, unused VI in the project. Find it and delete it (using the SCC delete function!).
Also notice the \user.lib folder; for us this is a sure sign of something untoward in our project structure, because we NEVER use user.lib. Also view instr.lib with suspicion: anything that is not part of the standard LabVIEW install should be moved out and put under the project (use lvlibs to protect the namespace).
Having a tidy project makes cross-linking and unpredictable loading a rare thing and it may even prevent gum disease.
I'm stupidly busy at the minute so forgive my lack of output!
This particular article is about numbers: the numbers we charge and what they mean. I'm hoping it will help people starting out in the trade, and help when a customer is considering hiring a company.
SSDC are a LabVIEW consultancy and as such we charge people for our services. There are 2 ways you can charge for your services, hourly (time and materials) or fixed price.
For hourly the customer takes on the risk (although there is always a limit to what your customer will pay), for fixed price the risk is on the supplier. As such the price you use to calculate a fixed price job should always be higher.
How much higher is the important question?
I'll drag up a story from the past to give some context.
When we started out our rate for fixed price work was £35 ($50)/hour and we struggled! Every job was hand-to-mouth, we couldn't offer the customer service we wanted and it was generally hard. If you can't offer decent levels of customer service you won't get the repeat business that makes life so much easier. Things were bad so I did some maths!
For my company I needed to know how efficient we were. The easiest calculation is hourly rate × number of hours worked in a year: £35 × 1,920 hours = £67,200/person/year. If I got an hourly paid job at my hourly rate I'd earn £67k ($100k). Trouble is we were doing fixed price work and this dropped us down to a net turnover of £40k ($57k); take out 20% tax, business expenses, insurance and transport and it all begins to look a little sad!
It dropped to £40k because we were 60% efficient; this means we were spending time on unpaid work: sales, marketing, quotes, accounts etc etc. So even if we were fully employed we were still barely earning enough to pay the mortgage.
And this 60% efficiency shouldn't be regarded as a bad thing! You should spend time on all of those things; you just need to account for them.
The other assumption here is that you'll always win on fixed price work. I would suggest that even if you were good at software estimation you'd still only win 50% of the time (and I'm still waiting to meet THAT person!).
Lastly on fixed price is what happens if it all goes wrong....you don't get paid!
So when calculating a rate to apply to a fixed price job you need to take the hourly rate you would be happy to earn if you had a contract for the year, and factor in your efficiency. You can then apply this rate to the estimation of hours and add in a contingency (we'll come back to this).
So for us we should have been aiming for an hourly rate of £58.33 ($83.33) when calculating fixed price work.
Disclaimer: these numbers are circa 2000 and we are considerably more expensive now!
Coming back round to contingency: we use this to help us cope with uncertainty. So if, for example, there are some requirements that are poorly defined, you could bully your customer into divulging all the details, or you could just add some contingency. This is where experience really helps.
So the final fixed price equation should be at least
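As a sketch of that arithmetic (the £35 rate and 60% efficiency are the article's circa-2000 figures; the 15% contingency is purely illustrative, not SSDC's actual number):

```python
# Fixed-price rate sketch using the circa-2000 figures from the article.
# The 15% contingency and the 100-hour job are illustrative assumptions.

def fixed_price_rate(target_hourly_rate, efficiency):
    """The rate to apply to fixed-price estimates so that, at the given
    efficiency, you still net your target hourly rate."""
    return target_hourly_rate / efficiency

def fixed_price_quote(rate, estimated_hours, contingency=0.15):
    """Estimated hours at the adjusted rate, plus contingency for risk."""
    return rate * estimated_hours * (1 + contingency)

rate = fixed_price_rate(35.0, 0.60)   # ~£58.33/hour, matching the article
quote = fixed_price_quote(rate, 100)  # a hypothetical 100-hour job
print(round(rate, 2))   # 58.33
print(round(quote, 2))  # 6708.33
```

The key point is the division by efficiency: quoting at your bare target rate silently assumes every working hour is billable, which it never is.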
If like me you find programming the tree control to be a pain in the bum you may be interested in this little bit of code.
With it you can take a database table and create a tree hierarchy
So the table above will create a tree that looks like this when you use the query SELECT * FROM experimentTree
Going back the other way we want to generate SQL INSERT statements (this is SQLite flavour) or a table of data to parse back.
The code to generate this is included in the teststub. As a separate statement or as part of the main statement you should also empty the table.
The front panel of the teststub looks like this
To get a tree from the database the block diagram looks like this.
To plant a tree in your database, the block diagram looks like this.
For simplicity I have contrived the tags. For my purposes I will be linking the selection to an experiment; obviously your requirements may differ. In essence the experiment field will be a foreign key into another table.
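A minimal sketch of that round trip (flat table to tree and back), using SQLite from Python. The column names (id, parentId, name) are my assumption, since the article's actual schema isn't shown:

```python
# Sketch of the round trip the article describes: a flat SQLite table
# <-> a tree hierarchy. Column names are assumed, not the article's.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE experimentTree
                (id INTEGER PRIMARY KEY, parentId INTEGER, name TEXT)""")
rows = [(1, None, "Site"), (2, 1, "Line A"), (3, 1, "Line B"),
        (4, 2, "Test 1")]
# "Going back the other way": empty the table first, then insert.
conn.execute("DELETE FROM experimentTree")
conn.executemany("INSERT INTO experimentTree VALUES (?,?,?)", rows)

def build_tree(conn):
    """Turn SELECT * FROM experimentTree into nested dicts.
    Assumes parents have lower ids than their children."""
    nodes, roots = {}, []
    for id_, parent, name in conn.execute(
            "SELECT id, parentId, name FROM experimentTree ORDER BY id"):
        nodes[id_] = {"name": name, "children": []}
        if parent is None:
            roots.append(nodes[id_])
        else:
            nodes[parent]["children"].append(nodes[id_])
    return roots

tree = build_tree(conn)
print(tree[0]["name"])                           # Site
print([c["name"] for c in tree[0]["children"]])  # ['Line A', 'Line B']
```

The same parent-pointer idea maps directly onto the LabVIEW tree control, where each row's tag and parent tag drive the Add Item calls.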
It's in LabVIEW 2014, but I can down-version if required.
Perhaps this will help make tree-huggers of us all
Have a process, declare it, develop to it and improve it.
That way your customer knows how you operate and your developers know what you expect from them. If you go into a project without a process it will be haphazard and the more complicated projects will really struggle. If you are employing contractors, you should ensure that they understand your processes. We have seen several projects suffer because developers are poorly selected and just thrown at a project with little else but an (incomplete) list of requirements.
For me, the hardest part of any project is finishing it off, but this is actually the most important part. I know it sounds ridiculous, but we've made a pretty good living recovering abandoned projects. One key personal attribute for this is tenacity.
Do not give in when the hardware doesn't work as expected.
Do abandon a project if it is no longer cost effective.
Do not quit when your software is a bit flakey.
Try not to allow your personal problems to affect your output.
Do not start a more interesting project before doing the hard bit of signing off existing projects.
All of this is a lot harder than you would think, but it is important if you're in this for the duration, because abandoning projects is very, very bad for customer relations!
Not all projects go well, we are in the business of prototyping and bespoke software is difficult. It's why we charge the $$$$. Your customer should know if you are putting in the hard yards and often they appreciate it.
As likely as not, you will suffer a failed project if you assign 6 newly qualified CLAs, straight out of university with no prior project experience, to anything complex.
If you assign a team of engineers that have successfully completed other projects, it will probably succeed.
Managers generally struggle with this concept and I have seen quite a few new LabVIEW engineers put under unbearable pressure because their company has paid for the training and LabVIEW is easy! I've also seen multi-million dollar projects nearly fail, because the organisation has just employed anyone with the word LabVIEW on their C.V. (actually it would probably be typed Labview).
There is too much discussion about how one methodology or another is better. The truth is that any methodology is better than none, and a methodology your developers are comfortable with is an extremely valuable thing.
One thing I would add is that your methodology needs to be able to cope with changes at the end of the project.
Open and Honest Communication
Keep your customer involved in the project.
Tell them if there are risks.
Tell them if you are struggling.
Do not lie to get the work.
Do not keep silent if you are unhappy about something.
Do not promise what you can't achieve.
This has the added bonus of making your life less stressful!
A healthy social life involves surrounding yourself with people that make you feel better and jettisoning people who make you feel crap. Business is usually a social activity and the same rules apply.
A good rule of thumb is that if the Accounts department are pleasant and pay you when they promise, then the company is usually a good company.
This relationship is conducive to project success.
Hardware costs are usually a very small part of the life-time costs of the system, as I discussed here. Hardware that can easily satisfy the requirements at a project's inception and has the ability to be expanded will vastly improve the chances of project success. Second-hand, under-powered and under-specc'd equipment will add a significant risk of failure.
Talking of risk, your process should always push risk to the front. Always, always, always. This can be uncomfortable: the natural human instinct is to get instant gratification by doing the easy stuff first. So if you suspect that the user's requirements are not being expressed, supply prototypes. If hardware issues need solving, solve them first.
A company fairly local to me had fully sorted its CI (Continuous Integration) method, bought loads of PXI racks fully loaded with gear, and then failed to deliver a single working test system, because no-one had actually built a prototype and run a test. Even after all my years in the business this still made my jaw hit the floor.
These points are pretty obvious, but it's sometimes good to have them in one place.
If you can think of any more, chuck them in the comments
Decided to go the micro-services route for post-processing of acquired data; that way I'm not locked into LabVIEW if customers have Matlab or other ways of doing it. The post-processor is then called via a command line, with the TDMS file fed in as an input.
Oh and it has a funky progress bar.
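A minimal sketch of that decoupling: the post-processor is just another executable that takes the TDMS file path on its command line. The executable name and flags here are hypothetical, purely to illustrate the shape of the interface:

```python
# Sketch of calling a decoupled post-processor from the command line.
# The executable name and the --input/--output flags are hypothetical.
import subprocess

def build_command(processor_exe, tdms_path, output_dir):
    """Build the argv list: the TDMS file is fed in as an input argument,
    so the processor could be LabVIEW, Matlab, Python or anything else."""
    return [processor_exe, "--input", tdms_path, "--output", output_dir]

cmd = build_command("postprocess.exe", "run_042.tdms", "results/")
print(cmd)
# ['postprocess.exe', '--input', 'run_042.tdms', '--output', 'results/']
# In the real system this would be launched with something like:
# subprocess.run(cmd, check=True)
```

The point of the design is that the contract is just "a process and a file path", so swapping the implementation doesn't touch the acquisition side.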
Blog of the Year
For consistently good technical content it has to be ....
Happy Christmas Christians and Happy Holidays non-Christians
I've done enough language stuff for the time being, it's time to look a bit closer at a subject that is far more interesting to me.
LabVIEW projects are getting bigger and more complex; generally SSDC is now working on distributed systems, large database enterprise systems or control systems. NI is wanting to sell system-level hardware rather than single boards. All this means project failure can now impact people's jobs and company profitability.
I have referenced the Standish CHAOS study in presentations before and a quick look at the graphs will tell us that any improvement observed from 1994 to 2010 is marginal, statistically I would say that any improvement is within the general variance.
This is interesting because I am constantly inundated with propaganda promising me a brave new world of software development.
The take-outs from this are that LabVIEW projects are getting more complex, and I would expect these complex projects to begin to follow the findings of the Standish CHAOS study.
If we now look at these two videos by Steve McConnell (If you don't know who he is then look him up!)
This webinar looks at case studies of 4 projects, ranging from embarrassing failure to great success, and the 4 judgement points are a nice simple way of evaluating a project's chances of success.
If you've not bothered to watch the video, the 4 factors are Size, Uncertainty, Defects and Human Factors, and how you manage each of them affects project success.
This next video is in a similar vein but looks at some common graphs associated with the software development process.
40 minutes in StevieMac (as he loves to be called) discusses human variations and it is a stunning bit of study.
Again if you couldn't be arsed to watch the vid, the spoiler is that human variations affect a project by factors way greater than methods or processes. So as an example filling your Agile team with idiots is likely to lead to failure.
Sometimes a project is JUST DIFFICULT! The likelihood of failure is greatly increased as you add function points to a project. If you have trouble defining a project or you just can't picture the design in your head I reckon it's a sure sign you have to put some hard miles in up-front.
2. Bad Judgement
As was shown in the 2nd video sometimes a project will fail just by intelligent people doing stupid things, in the case of the various medical failures it was bad business as much as bad software engineering. It's well worth understanding the financials of a project before you begin.
3. Lazy Estimation
If you are fairly professional you will have some very busy times (being professional is a really attractive sales technique). It can be very tempting at those times to take a bit of a punt on the estimate, and often you will be caught out in spectacular fashion.
The general rule in our business is that we very rarely make what we expect to make on a project, and I would hazard a guess that we're not alone in this. What is the percentage? Well, taking the CHAOS report terminology, I would say that the majority of our projects could be classed as challenged; very few fail (I think we've had one cancelled, and that was essentially political). How were they challenged? The majority would be late for a variety of reasons; generally it's over-optimism! Very few would lose functionality. This is all indicative of poor or lazy estimation. For us it is mitigated by sticking rigidly to fixed price, absolute openness and early release of useful code.
4. Lack of Experience
I touched on this subject in Damn! My Silver Bullet Appears To Be Made of Crap. The graph showing how experienced teams have vastly improved productivity over inexperienced teams in the second video backs up my feelings here. That doesn't mean that a team should only contain gnarly old gits (SSDC defined), but often employers are less than discriminating when employing LabVIEW programmers. That poor decision is often the foundation of a failed project.
Experience gives you the flexibility and resilience to change that most projects need.
NOTE: Qualification <> Experience
5. Lack of User Interaction
One of the best bits of the Agile Manifesto is this
"Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software."
We need to understand that the industries that most benefited from Agile processes were not actually delivering software to customers at all. The simple expedient of showing the customer stuff on a regular basis was deemed to be a huge breakthrough!
If you are not doing this with LabVIEW you really are in the wrong game!
6. Inappropriate Cost Saving
I discussed this in The Cost of Failure article. The only addition is that it applies to the people doing the work too: in any engineering discipline, paying peanuts really does mean employing monkeys.
You could be happily working away on your project and management change everything. We've also had projects that were designed for a nefarious purpose (in our case to extract processes from the shop floor so the factory could be closed and shipped abroad).
Saying all is going well when it isn't is dishonest; similarly, agreeing to unattainable delivery dates to win a job. Customers generally have fairly simple expectations with regard to projects: they expect you to deliver what you say you're going to deliver, when you said it would be done. Getting money for a project is usually a one-shot thing; going back for more money is incredibly difficult for all involved and is bad for customer relationships.
9. Offloading Responsibility
Quite often we find the customer's expectation to be that not only do we design the software, we are expected to generate the requirements and specify the equipment, and we have even been asked to redesign the customer's hardware. In this type of job you need to be very sure of your ground, because you are going to be taking all the blame.
The healthy situation is that the customer has the domain expertise and can communicate these as part of their requirements and feedback. Jobs are far less risky in this environment.
I've added Pascals comment into the main text as I think it more eloquently describes one of the points I am trying to make.
I also see this phenomenon in projects, and I think that if we generate the requirements the customer is less involved/impressed/excited about the work that is done. Because he did not have to think the requirements through, he will change requirements if problems arise, just to move on with the project. Which leaves a bad taste in my mouth.
10. Too Much Belief in Silver Bullets
This is a bit like self-help books that try to convince you that you're the most important person in the world and sheer force of positive thinking will make everything OK. It won't and probably the worst thing you can do to a complex project is add new processes or tools to it.
11. Inappropriate Design
If it doesn't fit the problem it's probably inappropriate. A large proportion of the jobs we rescue have had unsupportable or just strange architectures applied to them. Similarly, if the solution is radically more complex than the problem, it is a sure sign of poor architectural decision making.
12. Trying New Things
An important, risky and time-constrained project is probably not the best time to try out that new process or architecture. Essentially, self-learning shouldn't be done on a customer's paycheck (unless declared), and it also increases risk. A better way is to allow enough resource to attempt new stuff on internal projects, or discuss it with your customer; sometimes they are willing to take the risk if there is a tangible benefit.
One reason I like fixed price work is that it forces you to analyse risk well and to then push these risks to the front.
I'll leave with a motivational poster to set you up for the new year.
We left our screen with a nice little undock button and the code to instantiate a graphscreen vi.
This article will concentrate on the graph screen display. If you skip to 6 minutes into the following video I demonstrate what I am aiming for.
The behavior I'm expecting here is for the graph to fill the available space without encroaching on any other objects on the screen. The only issues I've found are that the palettes, titles etc seem to have a bit of a life of their own, so give them a bit of space.
List Box for Channels
The behavior we expect from a list box is that it keeps its width but extends in height; this can be achieved by adding a splitter and locking it.
As Sam Sharp says splitters are cool!
Measurements and Cursors
I wanted the measurements and cursors to be off-screen unless required, to stay the same size in a repeatable position, and to become visible when selected. To achieve this we have a button that keeps to the edge of the screen and, when selected, comes onto the screen allowing interaction with the tabs.
This little tip was from Tim Hurst's presentation at the CLD summit, with a couple of modifications by me.
We don't want this part of the display to re-size, so I made the tab a strict type-def. When not required I push it to the side.
I've called the tab control settingsCarrier and the code to bring it in is as follows
The subvi just sets the position left or right depending on the SettingsHidden boolean (I have a separate button to push it back out of sight) and this vi is also used on a return button or resize event.
The only issue left was that if you had the tab visible it looked really ugly when you resized the panel. My trick is to have a tab within a tab and make the carrier tab transparent; I could then flip it to a transparent window when it is hidden.
I've been a bit sneaky by embedding a graphic of a tab in the tab selector (it would be transparent too otherwise).
The only other tip I would give is that the simplest way to handle these screens is to move as much functionality as possible into menus and the window's title bar, so you don't have to worry about them.
I hope this has been useful and if you have any questions post them in the comments (or pm me)
LabVIEW can be quite cumbersome when dealing with screen sizes, our stock way of dealing with this is to restrict the screen size of our applications. This hasn't been too much of an issue until now. My current project needs to be able to run on different resolutions and it also needs to undock graphs, these also need to be resizeable. I thought it may be useful to list out the techniques and challenges.
First lets think about what we want to happen when a screen size is changed.
We want some controls to change in proportion to the screen size, graphs for example.
We want some controls to stay the same size but be anchored to a position on the screen, buttons and drop down controls.
We want some controls to stay in proportion to one axis, so a list box to adjust vertically, a status string to adjust horizontally.
Some controls can be removed (tip: the easiest to manage control is one you don't need to manage!).
Some controls can be restricted.
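The rules above can be modelled as plain geometry. A sketch (in Python rather than LabVIEW, since the diagrams are images; the helper names and margin are my own illustration of the idea, not the article's code):

```python
# Sketch of the resize rules above as bound transforms on (x, y, w, h).
# LabVIEW achieves these via panel fit/scale settings, splitters and
# Panel Resize events; the helpers here just model the intent.

def scale_both(bounds, sx, sy):
    """Proportional control (e.g. a graph): scale position and size."""
    x, y, w, h = bounds
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

def anchor_top_right(bounds, new_panel_w, margin=10):
    """Fixed-size control anchored to the top right (e.g. a button)."""
    x, y, w, h = bounds
    return (new_panel_w - w - margin, y, w, h)

def stretch_vertical(bounds, sy):
    """One-axis control (e.g. a list box): keep width, scale height."""
    x, y, w, h = bounds
    return (x, round(y * sy), w, round(h * sy))

print(scale_both((50, 50, 400, 300), 1.5, 1.5))  # (75, 75, 600, 450)
```

Every control on the panel falls into one of these buckets (or is removed or restricted), which is what makes the resize behaviour tractable.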
First let's deal with undocking.....
The first challenge here is to make a button that keeps its proportions and is locked to a position on the graph. I want it always to be in the top right-hand corner of the graph. Aesthetically I want it to be unobtrusive (i.e. use transparency to make sure it doesn't get in the way of the graph). I made a line drawing in LibreOffice Impress and cut and pasted this into a vertical button (from the UI Control Suite System Controls palette). The easiest way to restrict its size is to set it as a strict type def. Finally we need to lock it into the top right-hand corner of the graph; sadly the only way to do this is by writing code!
x1Undock is the button, so we take the width and move it relative to the Right of the Plot Bounds of the graph.
This is code worth noting as you'll be using it a fair bit, due to the fact that LabVIEW is natively a little random when left to its own devices!
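The property write described above boils down to simple arithmetic: the button's left edge is the plot area's right edge minus the button's width. A sketch (Python stand-in for the LabVIEW property nodes; the margin is my own illustrative choice):

```python
# Sketch of anchoring the undock button to the top right of the plot
# area. In LabVIEW this reads the graph's Plot Area Bounds and writes
# the button's Position; the numbers and margin here are illustrative.

def undock_button_position(plot_left, plot_top, plot_width,
                           button_width, margin=5):
    """Top-right corner of the plot area, inset by a small margin."""
    x = plot_left + plot_width - button_width - margin
    y = plot_top + margin
    return (x, y)

print(undock_button_position(60, 20, 500, 24))  # (531, 25)
```

Running this on every Panel Resize event is what keeps the button pinned, since LabVIEW won't maintain the relationship on its own.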
Pressing this button will instantiate a graphscreen.vi instance, pre-loaded with the settings from the docked graph.
The code to instantiate the graph is as follows
Keeping control of all the dynamic references in this manner has worked out nicely; it allows us easy housekeeping and access to control and indicator values.
As a multi-media project I've been recording myself designing a screen with a view of condensing the entire process into one small, sped-up video.
I was so pleased with the results I'm going to post it incomplete, because it made me happy!
Right back to work for me
I suppose I should give some details of the job.......
The idea here is that my esteemed colleague Adrian (Tark on here) is doing all the hard stuff to do with deciphering comms, designing servers and depositing data. In MVC terms he is handling the Model and the Controller. As is our norm we have decoupled the screen (essentially a separate loop with a queue component) and he has sub-contracted that aspect to me (I like UI stuff). To keep it simple I don't have access to the code repository; I just give him my updates to incorporate.
So while not a complete MVC we can see the advantages of splitting off the UI.
For validation reasons we are attempting to keep the screens similar to those that are currently part of the system.
Lots of Love
Update 09-Feb-2016: Here's a picture of the system, just after its Lloyd's Register factory test was completed.
And I left it with the PXI Trigger Routing set up in MAX. I wanted to do this in software, because I don't like systems that need lots of different programs to work.
We use the PXI trigger lines to route signals along the rack's backplane to connect all the cards. An 18-slot chassis is split into 3 buses, so we need to connect these buses to transfer signals along the backplane.
We have allocated PXI_Trig0 as the Start Trigger line; this tells the cards to start acquiring data. It is generated from the trigger card (Card2), so needs routing from bus 1 along 2 and 3.
We also use PXI_Trig2 for the Sync Pulse. This comes from PFI2 of the 6674T in slot 10, so it will need routing outwards from bus 2.
PXI_Trig1 and PXI_Trig3 are left-overs from development and can be ignored (but I'm at a fragile point in my testing so they stay in!)
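The bus-bridging logic above can be modelled as data: given the segment a trigger is sourced on, work out which segment boundaries it must cross. This is only a sketch of the routing plan; the actual routes are made with VISA trigger-mapping calls (or in MAX):

```python
# Sketch of the segment-bridging plan for an 18-slot PXI chassis whose
# trigger backplane is split into 3 bus segments. A trigger sourced on
# one segment must be bridged across boundaries to reach the others.

def bridges_needed(source_bus, n_buses=3):
    """Return the (from_bus, to_bus) hops needed to propagate a trigger
    from its source segment to every other segment."""
    hops = []
    for b in range(source_bus, n_buses):   # outward hops, e.g. 1->2, 2->3
        hops.append((b, b + 1))
    for b in range(source_bus, 1, -1):     # backward hops, e.g. 2->1
        hops.append((b, b - 1))
    return hops

# PXI_Trig0 (start trigger) is sourced on bus 1 -> route along 2 and 3:
print(bridges_needed(1))  # [(1, 2), (2, 3)]
# PXI_Trig2 (sync pulse) comes from slot 10 (bus 2) -> route outwards:
print(bridges_needed(2))  # [(2, 3), (2, 1)]
```

Keeping the plan as data like this makes it easy to loop over the hops when issuing the actual mapping calls, and to unmap the same set on clean-up.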
So to do it in MAX you open up the rack like this.
In my opinion it has a pretty weird API and quite a few caveats, so stay with me.
First of all I wanted my software interface to look similar to the one provided by MAX.
It's good practice to clear all of the trigger assignments before allocating new ones, because if a routing has already been assigned it will throw an error.
Here's a breakdown of the commands
VISA Search for something with BACKPLANE in its reference and that's our kiddy. Stick her into a shift register for further use.
Away From Bus 1
The VISA Unmap Trigger.vi is probably overkill, but for now it stays in until it's fully tested (it's a pretty early release yet).
Away From Bus 2
Away From Bus 3
Without throwing errors, just run through all the combinations to clear up any bad housekeeping.
Finally close the reference when done.
Now the hard work starts in adding all of this to my server architecture.
Edit: This is version D1:04 (I'm working on D1:06 currently), but it shows the sync nicely.
WARNING: This is one boring-ass blog post! but it may just help a couple of people out if anyone could actually find it.
I'm lucky enough to be working on a multiple rack, high channel count oscilloscope project at the moment and I thought I would tell the story of how we got the various chassis synchronised to <1ns.
The requirement is pretty standard IMO, I want all the cards on all the chassis to trigger at the same time. The trigger signal can go to each of the chassis but they need to be synchronised and in phase.
When I saw the hardware list (8x 18 slot PXI chassis, full of oscilloscopes and each having a timing and sync card) I thought this is the equipment for the job.
Right tools for the job and a standard requirement, what could possibly go wrong? This was my 1st mistake. It was really hard!
In the end I had to lean on Sacha Emery, one of the patient and knowledgeable Systems Engineers employed by NI, and he sweated blood on this job too; there just didn't seem to be much information out there.
Anyways, here is the solution and I hope it saves someone some time.
To test the system I send a pulse and read it on channels across the backplane of the PXI rack. We need to use the OCXO (oven-controlled crystal oscillator) as our clock and export this clock to all chassis; we also need to ensure the scopes start at the same time and are linked to this clock on the backplane of the rack. Additionally we need to make sure the clock etc. go across all of the racks' backplanes (18-slot chassis are split into 3 for flexibility).
Step 1 Route Master Clock
Route the OCXO from the PXIe-6674T on the master chassis to the CLKOut connector on the PXIe-6674T.
Gotcha: Dev1 is the 6674T in my system and Oscillator is the OCXO. I didn't find any documentation that stated that this is so, but it is!.... apparently.
Step 2 Master Initial NI-Sync Settings
Turn on the PLL circuit and set the frequency to lock. To cope with the splitting of PFI signals we need to disable ClkIn attenuation and reduce the threshold to 0.5V (or lower).
Step 3 Master Route Signals
To remove propagation delays due to cabling we bring the clock back in again from the CLKOut terminal via a matching length lead (matching to the slaves). This is routed to the 10MHz backplane clock.
The start trigger is routed out to PFI0, this is the signal to the other chassis to tell them to start logging data (for pre-sampling). This does not need to be matched in length.
We then use the Global Software Trigger as our Sync Pulse. Because this needs to compensate for lead length propagation delays it is output on PFI4 and routed back on PFI2.
Step 4 Set-up Slaves
The slaves should be set up as follows, and should be waiting for the PLL to lock prior to the master.
Step 5 Master and Slave Wait for PLL Lock
Pause and loop, waiting for PLL to lock on both master and slaves.
Step 6 Set-up Master Acquire
Card2 in each rack is designated as the Master Card, by which we mean the card that creates the trigger for the rack. The start trigger is routed here to the PXI trigger line on the PXI backplane and then out of PFI0. The Sync Pulse is also routed from PXI_Trig2 so that NI-TClk is synchronised across the backplane for all cards in the rack.
For cards that are not the trigger card the following code is run.
Step 7 Set-up Slave Acquire
Here the Start Trigger is sourced from VAL_RTSI_0; this is yet another name for the PXI backplane (PXI_Trig0). The Sync Pulse is routed from the backplane PXI_Trig2, as on the master chassis.
The other cards are set up as follows.
Step 8 Master Wait for Send Sync
All the NI-TClk references are collected, the properties commonly required for TClk synchronisation of device sessions are configured, and the Sync Pulse Sender is prepared for synchronisation. It is then held up waiting for the slave to be in a state ready for the next stage.
The slave is set in exactly the same way except it has to be done after the Master.
Step 9 Slave Ready for Trigger
Step 10 Master Wait for Send Start Trigger
The first VI sends a pulse on the global software trigger terminal. Then we finalise the NI-TClk synchronisation and wait to send the start trigger. When the button is pressed the start trigger is sent and the whole system is ready to acquire data.
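The ordering constraints across steps 8–10 (master prepares first, slave follows, slave signals ready, only then does the master fire the start trigger) can be sketched as plain sequencing. The function names here are hypothetical placeholders for the corresponding driver calls, not a real API:

```python
log = []

# Hypothetical stand-ins for the NI-TClk / trigger driver calls,
# recording the order in which each stage runs.
def master_prepare_sync():
    log.append("master_prepare")   # step 8: master configures TClk props

def slave_prepare_sync():
    log.append("slave_prepare")    # slave configured the same way, after master

def slave_signal_ready():
    log.append("slave_ready")      # step 9: slave reports ready for trigger

def master_send_start():
    log.append("start_trigger")    # step 10: master fires the start trigger

def run_sequence():
    master_prepare_sync()
    slave_prepare_sync()
    slave_signal_ready()
    master_send_start()
    return log
```

The point of the sketch is only the ordering: if the slave prepares before the master, or the start trigger fires before the slave reports ready, the synchronisation is not valid.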
Step 11 Acquire
The test was to send a pulse to Chan 0 on the first and last cards in 2 racks. The first channel on each rack will be used as the trigger source for the rack and the racks will be synchronised. The cabling has been carefully balanced to minimise propagation delays in the leads.
The test was done at 1.25GSamples/sec and 125000 samples were taken for each channel.
We then combined the graphs and compared the readings.
From this we can see a delay caused by the triggering system (essentially the detecting of the trigger and passing it to the backplane). This delay is <1 ns, applies only to the trigger channel, and can be post-processed out if required.
The synchronisation of Master Card 18 Chan 0 and Slave Card 14 Chan 0 is representative of all the other cards and channels in the system and should scale up to 8 racks (the trigger signals will be reduced in amplitude by splitting them). The measured skew here is 16 ps, which is astonishing!
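To put the 16 ps figure in context, some quick arithmetic from the test numbers above:

```python
sample_rate = 1.25e9      # samples/s, from the test above
samples = 125_000         # samples taken per channel
skew_s = 16e-12           # measured inter-rack skew

sample_period_s = 1 / sample_rate          # 800 ps per sample
capture_window_s = samples / sample_rate   # 100 microseconds of data
skew_fraction = skew_s / sample_period_s   # skew as a fraction of one sample

# So the inter-rack skew is just 2% of a single 800 ps sample interval.
```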
The code and drawing are attached.
When I have completed the system I will probably post the code online, because I think there should be a ready-made LabVIEW system available for this type of application (personally, if I was buying $500k of hardware, I would expect something to be available).
There is no way in hell I would have worked this out without the help of system engineering!
I think systems engineering needs better access to high-value systems, because it felt like we were treading virgin territory here.
The support I received has been exceptional and should be something NI is proud of.
I promised I would write all this up as a response to a query on the forums.
From this point I need to package all this up as part of my Client-Server architecture and the jobs should be a good-un.
I'm updating my presentation on Immediacy for the CLD Summit on September 9th in Newbury. Part of this exercise is to right a wrong that I committed during the original presentation to no less a person than Jeff Kodosky (ooooooops). You'll have to check out the presentation video for the full confession, or wait for somebody to snitch on me in the comments, but during my research I have dug out this little gem!
(I've linked the image to the document)
It's a really interesting read because at this early version some of the design reasons are writ large.
And this segues nicely into 2 nice little bits of new functionality in LabVIEW and just to show I'm a new tech sort of guy I've made a video instead of typing it. ooooooooooooh multimedia baby!
I've seen a couple of demos and the filter hasn't been mentioned (and it's brilliant!), and the hyperlinks shown were only http; linking to local files is way more useful for me. Ideally I would like to link to files relative to the containing VI, but I imagine that would cause all sorts of issues.
Hope you all had a spiffing NIWeek if you attended. It looked splendid from my sad and lonely office!
I have reached perfect Twitter balance: I have as many followers as people I follow (it is on purpose, and a bit of self-analysis uncovered that it has to do with egalitarian OCD!). Someone I follow is James MacNally, and he tweeted the following.
It linked to the following excellent article, which mentioned the Nirvana Fallacy: a rather wonderful term for the subject of this article. (I was originally going to title it "People Ruin Everything".)
The nirvana fallacy was given its name by economist Harold Demsetz in 1969 and refers to the informal fallacy of comparing actual things with unrealistic, idealized alternatives. In software it is characterised by requiring a "perfect" solution in areas where it is clearly unfeasible or even impossible.
Software can be flexible OR easy to use. It is rarely, if ever, both.
Deliver it in 2 weeks AND be completely robust. Radically shortened timescales will affect the robustness.
We don't have any requirements but we expect a low fixed price and delivery date.
In many cases the drive for impossible perfection actually inhibits usable improvements being made. It's way better to deliver something useful and improve it. Similarly it is important to manage stakeholder expectations.
Pareto had life sussed when in 1896 he noticed that 20% of the peapods in his garden contained 80% of the peas. (I know that's not accurate, but it's way better than saying that he published a paper etc etc.) The Pareto Principle, or the Law of the Vital Few, helps to concentrate effort into the areas of maximum benefit. If you can offer 80% of the functionality in 20% of the time you really are onto a winner.
So the article was originally titled "People Ruin Everything" and it was going to be a diatribe on how you can put in all these fantastic processes, APIs, frameworks, methods etc and they stay fantastic until people come stomping in, with their great fat feet, and ball everything up. In the Nirvana Fallacy I see an explanation for this: as providers of these things, we expect the user to be as perfect at using them as the designer is. Well, they won't be!
I had a customer who would break everything I gave her; it was uncanny. But rather than get grumpy, I made sure she was the first person to see any software I wrote: she was the world's best software tester.
The moral of this most rambly of ramblings is: don't expect perfection, or you will only receive misery and frustration for your efforts. Improvement is a good expectation. Expect improvement!
So the year is shooting past at breakneck speed and here comes NIWeek 2015; sadly I blew my Jolly Fund in Rome and I won't be crossing the Pond this year. So this is a bit like me in a restaurant picking the meat dishes for my carnivore friends (I'm a veggie). Here's what I would go and see if I was going....
As well as the keynotes, these are my picks (bear in mind I don't do many test systems any more so it's heavily biased on the software side of things)
TS6421 - Don’t Panic: LabVIEW Developer’s Guide to TestStand - Chris Roebuck
TS5740 - Computer Science for the G Programmer, Year 2 - Stephen Mercer and Jon McBee
TS7238 - Curing Cancer: Using a Modular Approach to Ensure the Safe and Repeatable Production of Medicine - Duncan MacIntosh, Fabiola De la Cueva
HOL7309 - Hands-On: Introduction to Software Engineering and Source Code Control - Chris Cilino
TS6142 - Hidden Costs of Data Mismanagement - Pablo Giner, Stephanie Amrite
TS6408 - NI Linux Real-Time: RTOS Smackdown - Joshua Hernstrom
TS6720 - Effective Project Management of LabVIEW Projects - Ryan Smith, Paul Herrmann
TS6977 - 10 Differences Between LabVIEW FPGA and LabVIEW for Windows/Real-Time Programming - Erin Bray
TS6303 - LabVIEW FPGA Programming Best Practices - Zachary Hawkins
TS5898 - Inspecting Your LabVIEW Code With the VI Analyzer - Darren Nattinger
TS7477 - Using LabVIEW in Your Quality Management System - Maciej Kolosko
TS5698 - Augmenting Right-Click Menus in LabVIEW 2015 - Darren Nattinger, Stephen Loftus-Mercer
HOL5977 - Hands-On: Code Review Best Practices - Brian Powell
TS6420 - 5 Tips to Modularize, Reuse, and Organize Your Development Chaos - Fabiola De la Cueva
My interests here are software engineering, FPGA, and an increasing interest in data management. So I've chosen 13 hours of presentations + keynotes. Hopefully they will be video'd!
Talking of videos I've been busy tidying up some of our CSLUG presentations (more to come!)
If we manage to organise it, we're doing something a little different for the next CSLUG meeting on the 17th of September: we're going to pick a subject and have a series of 4-slide presentations with multiple presenters. We should then get a spread of concepts for databases, error handling, customer sign-off etc etc. Each subject will be about 30 mins. We'll see how it works out!
Workwise: SSDC Maritime Division is beginning to blossom; we're currently waiting on orders that are worth 150% of last year's turnover! I'm also working on a distributed oscilloscope program that seems to be generating some interest (I'm actually quite proud of it so far), and then we are waiting for the go-ahead on a data repository design and reporting system. Any of these jobs may have a profound effect on SSDC and I may finally get the company hovercraft I always wanted. The downside is I will probably be slower on my blog output!
If you are travelling to Austin, travel safe; if you are presenting, present well!