In my opinion the LabVIEW community is just full of lovely people!!
I missed going to the eCLA Summit as I'm presenting at NIWeek, and I was very jealous, sat in my office trying to get ODS documents to work (all sorted now, hopefully).
So imagine my surprise when I was presented with this jacket all autographed up.
Some will see Jeff K but all I can see is Jeff X.... Well I love you too, Mr Kodosky.
I have one complaint about the beer!
It was a little dry! (I blame Mr Roebuck)
I hear it was Oli that organised it, so many thanks my friends, I was very touched!
Look here's a picture.
Hello My Sweets,
Way back when, I started a project to make an Open Document Tool; some words can be found here.
It came grinding to a halt due to memory leaks in the XPath tool used for navigating content.xml, and the pressure of work.
Well .... I've only gone and cracked it.
So attached to this article is a project with the following example.
And the block diagram looks like this.
And what it does is
There's also a couple of methods for replacing and inserting sheets. This is restricted to strings at the mo', but there's no reason we can't add formatting/polymorphism.
This should be all we need to create spreadsheets from LabVIEW (obviously we can be adding more methods).
I would like to add some word processor functionality in the coming months as another class.
It's been tested with LibreOffice, OpenOffice and one converted Excel file with the help of Jarobit, but this is a pretty early release so expect surprises.
I've used the OpenG ZLIB tools, but these are embedded in my project and namespace-protected in an LVLIB, so it should sit in a project and mind its own business.
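For anyone curious what the ZLIB tools are actually wrangling: an ODS file is just a ZIP archive whose spreadsheet lives in content.xml. Here's a minimal sketch in Python; note that a compliant document also wants a META-INF/manifest.xml and styles, which I've left out, so treat this as illustration rather than a proper writer.

```python
import zipfile
from xml.etree import ElementTree

# Bare-bones content.xml for a one-cell spreadsheet (illustrative only).
CONTENT = """<?xml version="1.0" encoding="UTF-8"?>
<office:document-content
  xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
  xmlns:table="urn:oasis:names:tc:opendocument:xmlns:table:1.0"
  xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0">
 <office:body><office:spreadsheet>
  <table:table table:name="Sheet1">
   <table:table-row>
    <table:table-cell><text:p>Hello</text:p></table:table-cell>
   </table:table-row>
  </table:table>
 </office:spreadsheet></office:body>
</office:document-content>"""

def write_ods(path):
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must be first and stored uncompressed.
        z.writestr("mimetype",
                   "application/vnd.oasis.opendocument.spreadsheet",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("content.xml", CONTENT,
                   compress_type=zipfile.ZIP_DEFLATED)

def read_sheet_names(path):
    # Navigating content.xml is just namespaced XML parsing.
    ns = "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}"
    with zipfile.ZipFile(path) as z:
        root = ElementTree.fromstring(z.read("content.xml"))
    return [t.get(ns + "name") for t in root.iter(ns + "table")]
```

Replacing or inserting a sheet is then just editing the XML tree and re-zipping, which is essentially what the toolkit's methods do in G.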
Now I need to sleep.
night night Lovelies
Late night issue fix (always the recipe for awesome software), issues 1 and 2 sorted SW 09-04-2016
12-04-2016 I think I've sussed the Excel issues, so I've included a Template.ods with some data in it too. (There is a minor untidiness with some of the repeated rows of emptiness.)
It's now 4M because I've added all the ZLIB stuff, my thinking being that we can trim it back down when we're happy.
13-04-2016 Disappointing result on the rebuilding of the ODS file: ZLIB is unzipping empty directories as zero-byte files. This is causing me some angst as I'm struggling to get the file correct for both LibreOffice and Excel.
ODS files are slightly simpler when created by Excel, so it all works OK using the template.ods provided. <-- Well, well, well, it appears there's an issue with ZLIB Read Compressed File__ogtk.vi where it is extracting an empty directory.
15-04-2016 I've tidied the project and improved the example.
If you went to Austin for the CLA summit I trust you had fun and returned home safe, sound and brainy.
If you picked up the tone of my previous blog posts correctly you'll appreciate that I don't really like acronyms. I apologise for subverting one of the better ones for the title!
One of the keys to simplicity in my opinion is to standardise, by this I mean employing techniques that are accepted as the norm. The theory behind this is pretty well researched.
In The Design of Everyday Things, Don Norman talks about
New situations become more manageable when existing pattern knowledge can be applied to understanding how things work. Consistency is the key to helping users recognize and apply patterns.
It saves a lot of extra brain-work and the skills are more transferable.
The additional benefits: if there is a vibrant user-base you know that it will be supported, added to, and will have an active eco-system. Within LabVIEW you know its base purpose will be worked on and improved. If you are employing some peculiar edge-case you are risking no improvement and quite possibly deprecation.
There are loads of ways to store your results, but databases are the only one worth mastering. There's a great deal of support online, and MySQL and SQLite have thriving communities with huge amounts of tools and help available. Type MySQL into Google and you'll get over 100 million hits; I think that's enough material for anyone.
It's all very well using a LabVIEW toolkit to plug data directly from a cluster into a table, but if you want a skill you can build on, have a bash at SQL. It will end up as one of the most useful tools in your toolkit. Again, a search on YouTube for SQL Tutorial resulted in over 500k hits.
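To give a flavour of how far a few lines of SQL will take you, here's a hypothetical results table using Python's built-in sqlite3 (the table and column names are mine, purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
conn.execute("CREATE TABLE results (dut TEXT, test TEXT, value REAL)")
rows = [("SN001", "Vout", 5.01),
        ("SN001", "Iq",   0.012),
        ("SN002", "Vout", 4.87)]
conn.executemany("INSERT INTO results VALUES (?, ?, ?)", rows)

# The pay-off: questions a flattened cluster can't easily answer.
for dut, avg in conn.execute(
        "SELECT dut, AVG(value) FROM results "
        "WHERE test='Vout' GROUP BY dut ORDER BY dut"):
    print(dut, avg)
```

Swap `:memory:` for a file path and you have persistent, queryable results with no toolkit in sight.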
Inter-system communications is a rich area for toolkits, add-on technologies, APIs etc. etc. The biggest successes I have had are home-grown low-level protocols just chatting out of UDP (either broadcast, direct or a combination of both). It's robust, simple, low-level and there are tools out there to sniff it. People get obsessed with security, but all of our systems are on secure networks and security can be added if you need it, whereas it can make debugging and programming difficult when you don't. Google: 58 million hits.
One of the issues with UDP is that it is so basic, there's no handshaking and it can't handle large lumps of data. If you need this capability then you can use the TCP API. It's only marginally more difficult to use and we use UDP for inter-system messaging and TCP for transferring data.
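The division of labour described above (UDP for messages, TCP for bulk data) is easy to demo outside LabVIEW too; here's a single UDP datagram on loopback in Python, with a made-up tag=value message format:

```python
import socket

# One UDP datagram on loopback: no connection, no handshake,
# and the text payload stays sniffable in Wireshark.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
rx.settimeout(2)
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"STATUS=RUNNING", ("127.0.0.1", port))

msg, addr = rx.recvfrom(1024)      # one datagram in, job done
print(msg.decode())
tx.close()
rx.close()
```

The TCP version is only a few lines longer (connect, then stream), which is why stepping up to it for large transfers is such a small leap.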
In Industry pretty much everyone uses OPC for parking centralised data in distributed systems, it will be beneficial for us just to do the same. A write tag to OPC vi is not such a difficult abstraction to get my head around that I need to use a Network Shared Variable. 33 million hits on Google for OPC.
One day I'll find time to finish my ODF toolkit and when I have done that we should use ODF for all documents, spreadsheets and reporting.
Events are really handy for passing data about; they have a great attribute in that they are strongly typed, and yet I still have reservations (Chris Roebuck gives an extremely sound argument for the counter view and writes some lovely code based on this). For me Event Structures are for events; any other use is against the standard and therefore an additional burden.
Back in the days of yore (BQ - Before Queues) and before Mr Mercer provided the wonderful queue API in LabVIEW we rolled our own. As we know we're now in the AQ period of history.
That was circa 2000 and we still have something similar in today's code.
This is how much we love Queues!
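For readers outside LabVIEW, the pattern the queue API enables is the classic producer/consumer; here's a minimal sketch with Python's standard queue module, using a sentinel message in place of destroying the queue:

```python
import queue
import threading

# One producer loop feeding messages to one consumer loop,
# the same shape as a LabVIEW queued message handler.
q = queue.Queue()
results = []

def consumer():
    while True:
        msg = q.get()
        if msg == "EXIT":          # sentinel plays the role of queue destroy
            break
        results.append(msg.upper())

t = threading.Thread(target=consumer)
t.start()
for m in ("init", "acquire", "log"):
    q.put(m)                       # producer: fire and forget
q.put("EXIT")
t.join()                           # wait for the consumer to drain
print(results)
```

The strength of the pattern is the same in either language: the producer never blocks on the consumer, and message order is preserved.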
As I've discussed many times, global data needs viewing with suspicion; however, if you need it you'll find the most efficient and easy-to-understand method is just to use a global.
I know they're a bit peculiar, named wrong and have a weird interface!
But compared to the previous technique of using shift registers in a while-loop (i.e. an accidental side-effect), I think it has a much clearer purpose.
So my friends, the take-away from this article is that it is better to use some technique or tool that is standard and popular even if it is not the "best".
Lots of Love
I'm pretty sure I did a presentation on this some time ago, sadly I seem to have lost it so I can't just regurgitate it here!!
A few years back there was a marketing campaign based around how learning LabVIEW could lead you into different skill areas and work. Normally these fly right over my head, but this one made an impact.
So this article will be pretty self-indulgent, but the point is that the link between LabVIEW and the hardware it can drive can push your career into new and interesting areas.
Below is a slide from my presentation on immediacy, it maps the way SSDC has changed from a mainly Test based company to much more of a Distributed Machine control company.
All of our experience has stemmed from designing test and production equipment from the factory floor (to those of you that have never seen a factory, they are what industrial estates used to be full of back in the olden days).
A few years back we saw the test market becoming increasingly commoditised and so decided to look for work in other areas (a decision also based on the fact we were bored with designing test systems). RT came along and we embraced it for control work, both in Factory Control and Machine Control.
The first factory control job was for McLaren, where we designed the composite moulding plant for the body and chassis (world's first!) for this beauty!
Another job in 2005 was focusing a Blu-ray mastering system to 40nm on a glass platter spinning at 5000rpm. Pretty damn challenging, but excellent fun.
Databases have always been central to our skill-set and we found ourselves doing Laboratory Management Systems; these are essentially large UI applications connected to a database, with lots of screens to manage reporting, scheduling etc. etc.
If RT was good for us, cRIO was brilliant!!
The weird thing was that we weren't using it where we expected to. A large percentage of our work was actually using the FPGA to hack and convert old systems communications. The ability to do this fast (1 load/unload of the buffer) has been really useful. The best thing about this type of work is that our customers have never heard of NI or any of their technologies. It's fantastic to break an entirely new market.
So this week we will be fitting a system on this ship.
It's a proper engineering geekfest. The next picture is the ship balanced on its keel in the drydock!!
This job came from Adrian cold-calling the ferry company and telling them that we could hack the ship's monitoring system. Kudos and employee of the year to him, as there are a lot of ships bobbing about that need this work!
So who knows where we will be in 10 years time, all we know now is that the skills that currently pay the bills are....
Databases (MySQL, SQLite)
Communications - Serial, CAN, UDP, TCP
So looking at the original slide, a large percentage of our work is now in the bottom half of the triangle and who knows what will be added to the bottom.
Lots of Love
Howdy Doody my precious programmers
I'm often asked "How do you keep your project so shiny, clean and smelling so minty fresh?"
Well first let's have a look at a clean project
This is our General Application template and you'll notice that the main VI (GenAppMain.vi) sits atop the hierarchy, and that's because he is the king and everything should be relative to him. This is how LabVIEW likes things organised; it will always look below itself in the hierarchy first as standard behaviour.
You'll also notice that we use auto-populating folders showing the entire hierarchy; the way we structure our projects lends itself to having no virtual folders (IMO they're an abstraction too far).
Dlls, config database file, startup VIs are also at this top level. All of this stops LabVIEW going searching when you move the project around. Keeping everything at this level also helps when you build an executable. It's much easier just to search in your own directory.
Like Unit Testing? Well, look in the TestVIs directory (we are trying to use TestStand if we need to do Unit Testing; this is quite new to us as we far prefer lots of functional testing).
Next we need to make sure there are no conflicts. Click on the exclamation mark and solve the problems.
What we are aiming for is a portable project. Ideally I want to dump my project directory on a new computer and for it to work. At the very least I want to be able to move the project around on the computer I'm developing on.
All this preparatory work is to allow us to upload the project into our repository (SVN). If the project is portable we can then download the project onto any SSDC machine or even across customers machines.
We take an uncompromising view on dependencies and in truth anything outside of SCC (Source Code Control) is something that can change how your software works in unpredictable ways. See here for a discussion on this. With this in mind we can see that the Dependencies section is pretty light and we aim to keep it that way.
Over time scrappy bits of old project can get caught in the gaps, and these need cleaning, because they are indicative of a lack of control and understanding. One common area that needs regular flossing is the Dependencies.
In this example we can see that although there are no conflicts, there is still some ugliness in the Dependencies section.
Cleaning these out is fairly straight-forward, you just need to find the unused VI in the project that is referring to these dependencies. But if you use dynamic VIs you should employ the following trick first.
Sticking any dynamic VIs in a non-called case of the main calling VI will not affect the running of the VI (LabVIEW will compile them out), but it will allow the IDE to tell you if they are broken. It will also allow LabVIEW searches to ascertain if the searched-for VI is in the hierarchy.
Now you are ready to floss....
Right-click on the offending Dependency and select Find>>Callers. Because the main calling VI is not broken and all dynamic VIs are on the main calling VI's block diagram, we can be pretty sure that there is a broken, unused VI in the project. Find it and delete it (using the SCC delete function!).
Also notice the \user.lib folder; for us this is a sure sign of something untoward in our project structure, because we NEVER use user.lib. Also view instr.lib with suspicion: anything that is not part of the standard LabVIEW install should be moved out and put under the project (use lvlibs to protect the namespace).
Having a tidy project makes cross-linking and unpredictable loading a rare thing and it may even prevent gum disease.
Lots of Love
I'm listening to King Gizzard and The Lizard Wizard as I write this and rather jolly they are too!
I'm stupidly busy at the minute so forgive my lack of output!
This particular article is about numbers: the numbers we charge and what they mean. I'm hoping it will help people starting out in the trade, and help when a customer is considering hiring a company.
SSDC are a LabVIEW consultancy and as such we charge people for our services. There are two ways you can charge for your services: hourly (time and materials) or fixed price.
For hourly the customer takes on the risk (although there is always a limit to what your customer will pay), for fixed price the risk is on the supplier. As such the price you use to calculate a fixed price job should always be higher.
How much higher is the important question?
I'll drag up a story from the past to give some context.
When we started out our rate for fixed price work was £35 ($50)/hour and we struggled! Every job was hand-to-mouth, we couldn't offer the customer service we wanted and it was generally hard. If you can't offer decent levels of customer service you won't get the repeat business that makes life so much easier. Things were bad so I did some maths!
For my company I needed to know how efficient we were. The easiest calculation is hourly rate * number of hours worked in a year = £67,200/person/year. If I got an hourly-paid job at my hourly rate I'd earn £67k ($100k). Trouble is we were doing fixed price work, and this dropped us down to a net turnover of £40k ($57k); take out 20% tax, business expenses, insurance and transport and it all begins to look a little sad!
It dropped to £40k because we were 60% efficient; this means we were spending time on unpaid work: sales, marketing, quotes, accounts etc. etc. So even if we were fully employed we were still barely earning enough to pay the mortgage.
And this 60% efficiency shouldn't be regarded as a bad thing! You should spend time on all of those things; you just need to account for them.
The other assumption here is that you'll always win on fixed price work. I would suggest that even if you were good at software estimation you'd still only win 50% of the time (and I'm still waiting to meet THAT person!).
Lastly on fixed price: what happens if it all goes wrong? You don't get paid!
So when calculating a rate to apply to a fixed price job you need to take the hourly rate you would be happy to earn if you had a contract for the year, and factor in your efficiency. You can then apply this rate to the estimation of hours and add in a contingency (we'll come back to this).
So for us we should have been aiming for an hourly rate of £58.33 ($83.33) when calculating fixed price work.
Disclaimer: these numbers are circa 2000 and we are considerably more expensive now!
Coming back round to contingency, we use this to help us cope with uncertainty. So if, for example, there are some requirements that are poorly defined, you could bully your customer into divulging all the details or you could just add some contingency. This is where experience really helps.
So the final fixed price equation should be at least
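Putting the numbers from the story into one place, the shape of the calculation looks like this (the job size and contingency figures here are invented purely for illustration):

```python
# Reconstruction of the rate calculation from the story above,
# using the circa-2000 figures quoted in the text.
target_hourly = 35.0     # £/hour you'd be happy with on a year's contract
efficiency = 0.60        # fraction of the year actually spent on paid work

fixed_price_rate = target_hourly / efficiency
print(round(fixed_price_rate, 2))   # matches the fixed-price rate quoted above

# Hypothetical job: apply the rate to the estimate, then add contingency.
estimated_hours = 200    # invented job size
contingency = 0.15       # judgement call for poorly defined requirements

quote = fixed_price_rate * estimated_hours * (1 + contingency)
print(round(quote, 2))
```

The point of the division by efficiency is that every paid hour has to carry the unpaid ones; skip that step and you're quoting at a rate you'll never actually earn.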
Hopefully by using this equation you can avoid paying your mortgage with your credit card!
So it looks as if I'm coming to NIWeek 2016, I have submitted a presentation or two (more details to follow, but they will both be sensible!)
I do have a T-shirt design in mind and if there is any interest I'll get some made properly.
Here's a rough early concept.
Lots of Love
What-ho G-Programmers of the world!
If like me you find programming the tree control to be a pain in the bum you may be interested in this little bit of code.
With it you can take a database table and create a tree hierarchy
So the table above will create a tree that looks like this when you use the query SELECT * FROM experimentTree
Going back the other way we want to generate SQL INSERT statements (this is SQLite flavour) or a table of data to parse back.
The code to generate this is included in the teststub. As a separate statement or as part of the main statement you should also empty the table.
The front panel of the teststub looks like this
To get a tree from the database the block diagram looks like this.
To plant a tree in your database, the block diagram looks like this.
For simplicity I have contrived the tags. For my purposes I will be linking the selection to an experiment; obviously your requirements may differ, but in essence the experiment field will be a foreign key into another table.
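The parent-tag idea behind the tree is language-neutral; here's a sketch of the flattened table and the tree rebuild in Python with sqlite3 (my column names are illustrative and won't match the attached LabVIEW code exactly):

```python
import sqlite3

# Hypothetical stand-in for the experimentTree table described above:
# each row has a tag, the tag of its parent ('' for top level) and a name.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE experimentTree (tag TEXT, parentTag TEXT, name TEXT)")
conn.executemany("INSERT INTO experimentTree VALUES (?, ?, ?)", [
    ("1",   "",  "Experiments"),
    ("1.1", "1", "Thermal"),
    ("1.2", "1", "Vibration"),
])

def build_tree(parent="", depth=0):
    """Walk the table recursively, indenting each level like a tree control."""
    lines = []
    for tag, name in conn.execute(
            "SELECT tag, name FROM experimentTree "
            "WHERE parentTag=? ORDER BY tag", (parent,)):
        lines.append("  " * depth + name)
        lines.extend(build_tree(tag, depth + 1))
    return lines

print("\n".join(build_tree()))
```

Going the other way is just the reverse walk: visit each tree item, emit one INSERT per row with its tag and parent tag.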
It's in LabVIEW 2014, but I can down-version if required.
Perhaps this will help make tree-huggers of us all
Towards the end of last year I did an article on 12 things that make a project fail, so for balance here are some things that will help a project to succeed.
These are in no particular order.....
Have a process, declare it, develop to it and improve it.
That way your customer knows how you operate and your developers know what you expect from them. If you go into a project without a process it will be haphazard, and the more complicated projects will really struggle. If you are employing contractors, you should ensure that they understand your processes. We have seen several projects suffer because developers were poorly selected and just thrown at a project with little else but an (incomplete) list of requirements.
For me, the hardest part of any project is finishing it off. But this is actually the most important thing, I know it sounds ridiculous but we've made a pretty good living recovering abandoned projects. One key personal attribute for this is tenacity.
All of this is a lot harder than you would think, but it is important if you're in this for the duration. Because abandoning projects is very, very bad for customer relations!
Not all projects go well, we are in the business of prototyping and bespoke software is difficult. It's why we charge the $$$$. Your customer should know if you are putting in the hard yards and often they appreciate it.
You will as likely as not suffer a failed project by assigning 6 newly qualified CLAs, straight out of university with no prior project experience, to anything complex.
If you assign a team of engineers that have successfully completed other projects, it will probably succeed.
Managers generally struggle with this concept and I have seen quite a few new LabVIEW engineers put under unbearable pressure because their company has paid for the training and LabVIEW is easy! I've also seen multi-million dollar projects nearly fail, because the organisation has just employed anyone with the word LabVIEW on their C.V. (actually it would probably be typed Labview).
There is too much discussion about how one methodology or another is better. The truth is that any methodology is better than none, and a methodology your developers are comfortable with is an extremely valuable thing.
One thing I would add is that your methodology needs to be able to cope with changes at the end of the project.
This has the added bonus of making your life less stressful!
A healthy social life involves surrounding yourself with people that make you feel better and jettisoning people who make you feel crap. Business is usually a social activity and the same rules apply.
A good rule of thumb is that if the Accounts department are pleasant and pay you when they promise, then the company is usually a good company.
This relationship is conducive to project success.
Hardware costs are usually a very small part of the life-time costs of the system, as I discussed here. Hardware that can easily satisfy the requirements at a project's inception and has the ability to be expanded will vastly improve the chances of project success. Second-hand, under-powered and under-specc'd equipment will add a significant risk of failure.
Talking of risk, your process should always push risk to the front. Always, always, always. This can be uncomfortable, and natural human instinct is to get instant gratification by doing the easy stuff first. So if you suspect that the user's requirements are not being expressed, then supply prototypes. If hardware issues need solving, solve them first.
A company fairly local to me had fully sorted its CI method (Continuous Integration), bought loads of PXI racks fully loaded with gear, and then failed to deliver a single working test system, because no-one had actually built a prototype and run a test. Even after all my years in the business this still made my jaw hit the floor.
These points are pretty obvious, but it's sometimes good to have them in one place.
If you can think of any more, chuck them in the comments.
Lots of love
Hello Lovelies and Happy New Year,
Always happy to jump on a bandwagon here's my review of the year....Random Style..
So many to choose from, it's been an interesting year.
Flying to Amsterdam to fault-find CANbus on a flight simulator (didn't get a flight tho'!), finding a great bar in Rome and being fed lots of locally brewed beer, or getting our ISO9001 certificate.
Challenge the Champions at NIDays UK (I might have been a little drunk, although not the drunkest!), my contribution was to re-arrange the bottles on the table.
Me, James Powell, Chris Roebuck, Adrian Brixton and Jon Conway all looking relaxed and happy, I like this photo.
Such a good dog!
Really loving the cRIOs with RT-Linux (we're using the 9035), such a big improvement on an already good product.
Best Non-NI Toy
IOSafe 214, waterproof and fireproof NAS. Expensive but really nice!
Decided to go the micro-services route for post-processing of acquired data; that way I'm not locked into LabVIEW if customers have Matlab or other ways of doing it. The post-processor is then called by a command line with the TDMS file fed as an input.
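The calling side of that micro-service idea is just a process launch with the data file path as an argument; here's a hedged sketch in Python, where a trivial inline script stands in for the real post-processor (which could equally be LabVIEW, Matlab or anything else):

```python
import pathlib
import subprocess
import sys
import tempfile

# Stand-in post-processor: any executable that takes the data file
# path on the command line. Here it's an inline Python one-liner
# that just reports the file size.
processor = "import sys; print(len(open(sys.argv[1],'rb').read()), 'bytes')"

# Pretend TDMS payload written to a scratch directory.
data = pathlib.Path(tempfile.mkdtemp()) / "run1.tdms"
data.write_bytes(b"\x00" * 128)

# The acquisition side only needs to know the command line contract.
result = subprocess.run([sys.executable, "-c", processor, str(data)],
                        capture_output=True, text=True)
print(result.stdout.strip())
```

The decoupling is the point: as long as both sides honour the "file path in, results out" contract, either side can be swapped without touching the other.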
Oh and it has a funky progress bar.
For consistently good technical content it has to be ....
This year has been great for presentations, the presentations are more clinical and have a wide variety of subjects. Personally I love the show and tells and the project management ones......
All of them! Anyone who has the nads to stand up deserves this prize.
CSLUG won best advanced user group and had >50% of the European CLA Summit presentations.
The entire thread on the Singleton article. I think it became that rare thing where we were all learning off each other.
May your 2016 have no broken wires at all!
I've done enough language stuff for the time being, it's time to look a bit closer at a subject that is far more interesting to me.
LabVIEW projects are getting bigger and more complex; generally SSDC is now working on distributed systems, large database enterprise systems or control systems. NI wants to sell system-level hardware rather than single boards. All this means project failure can now impact people's jobs and company profitability.
I have referenced the Standish CHAOS study in presentations before, and a quick look at the graphs will tell us that any improvement observed from 1994 to 2010 is marginal; statistically I would say that any improvement is within the general variance.
This is interesting because I am constantly inundated with propaganda promising me a brave new world of software development.
The take-outs from this are that LabVIEW projects are getting more complex, and I would expect these complex projects to begin to follow the findings of the Standish CHAOS study.
If we now look at these two videos by Steve McConnell (If you don't know who he is then look him up!)
This webinar looks at case studies of 4 projects ranging from embarrassing failure to great success, and the 4 judgement points are a nice simple way of evaluating a project's chances of success.
If you've not bothered to watch the video, the 4 factors are Size, Uncertainty, Defects and Human Factors, and how you manage each of them affects project success.
This next video is in a similar vein but looks at some common graphs associated with the software development process.
40 minutes in, StevieMac (as he loves to be called) discusses human variations and it is a stunning bit of study.
Again, if you couldn't be arsed to watch the vid, the spoiler is that human variations affect a project by factors way greater than methods or processes. So, as an example, filling your Agile team with idiots is likely to lead to failure.
Sometimes a project is JUST DIFFICULT! The likelihood of failure greatly increases as you add function points to a project. If you have trouble defining a project, or you just can't picture the design in your head, I reckon it's a sure sign you have to put some hard miles in up-front.
As was shown in the 2nd video sometimes a project will fail just by intelligent people doing stupid things, in the case of the various medical failures it was bad business as much as bad software engineering. It's well worth understanding the financials of a project before you begin.
If you are fairly professional you will have some very busy times (being professional is a really attractive sales technique). It can be very tempting at those times to take a bit of a punt on the estimate, and often you will be caught out in spectacular fashion.
The general rule in our business is that we very rarely make what we expect to make on a project, and I would hazard a guess that we're not alone in this. What is the percentage? Well, taking the CHAOS report terminology, I would say that the majority of our projects could be classed as challenged; very few fail (I think we've had one cancelled, and this was essentially political). How were they challenged? The majority would be late for a variety of reasons; generally it's over-optimism! Very few would lose functionality. This is all indicative of poor or lazy estimation. For us it is mitigated by sticking rigidly to fixed price, absolute openness and early release of useful code.
I touched on this subject in Damn! My Silver Bullet Appears To Be Made of Crap. The graph showing how experienced teams have vastly improved productivity over inexperienced teams in the second video backs up my feelings here. That doesn't mean that a team should only contain gnarly old gits (SSDC defined), but often employers are less than discriminating when employing LabVIEW programmers. That poor decision is often the foundation of a failed project.
Experience gives you the flexibility and resilience to change that most projects need.
One of the best bits of the Agile Manifesto is this:
"Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software."
We need to understand that the industries that most benefited from Agile processes were not actually delivering software to customers at all. The simple expedient of showing the customer stuff on a regular basis was deemed to be a huge breakthrough!
If you are not doing this with LabVIEW you really are in the wrong game!
I discussed this in The Cost of Failure Article. The only addition could be applied to the people doing the work too, in any engineering discipline paying peanuts really does mean employing monkeys.
You could be happily working away on your project and management change everything. We've also had projects that were designed for a nefarious purpose (in our case to extract processes from the shop floor so the factory could be closed and shipped abroad).
Saying all is going well when it isn't is dishonest; similarly, agreeing to unattainable delivery dates to win a job. Customers generally have fairly simple requirements with regards to projects: they expect you to deliver what you say you're going to deliver, when you said it would be done. Getting money for a project is usually a one-shot thing; going back for more money is incredibly difficult for all involved and is bad for customer relationships.
Quite often we find the customer expectation to be that not only do we design the software, we are expected to generate the requirements and specify the equipment, and we have even been asked to redesign the customer's hardware. In this type of job you need to be very sure of your ground because you are going to be taking all the blame.
The healthy situation is that the customer has the domain expertise and can communicate these as part of their requirements and feedback. Jobs are far less risky in this environment.
I've added Pascal's comment into the main text as I think it more eloquently describes one of the points I am trying to make.
I also see this phenomenon in projects, and I think that if we generate the requirements the customer is less involved/impressed/excited about the work that is done. Because he doesn't have to think the requirements through, he will change requirements if problems arise, just to move on with the project. Which leaves a bad taste in my mouth.
This is a bit like self-help books that try to convince you that you're the most important person in the world and sheer force of positive thinking will make everything OK. It won't and probably the worst thing you can do to a complex project is add new processes or tools to it.
If it doesn't fit the problem it's probably inappropriate. A large proportion of the jobs we rescue have had unsupportable or just strange architectures applied to them. Similarly, if the solution is radically more complex than the problem, it is a sure sign of poor architectural decision making.
An important, risky and time-constrained project is probably not the best time to try out that new process or architecture. Essentially self-learning shouldn't be done on a customer's paycheck (unless declared), and it also increases risk. A better way is to allow enough resource to attempt new stuff on internal projects, or discuss it with your customer; sometimes they are willing to take the risk if there is a tangible benefit.
One reason I like fixed price work is that it forces you to analyse risk well and to then push these risks to the front.
I'll leave you with a motivational poster to set you up for the new year.
See you at CSLUG on the 17th if you're attending.
Very Much Love
Hello My Darlings
This article continues the discussion about making re-sizable screens in LabVIEW; Part 1 is here. Check out the links and videos courtesy of Sam Sharp and Jeff Habets.
While on the subject of links check out this description of the LabVIEW dimensional model. Extremely useful!
We left our screen with a nice little undock button and the code to instantiate a graphscreen vi.
This article will concentrate on the graph screen display. If you skip to 6 minutes into the following video, I demonstrate what I am aiming for.
The behaviour I'm expecting here is for the graph to fill the available space without encroaching on any other objects on the screen. The only issues I've found are that the palettes, titles etc. seem to have a bit of a life of their own, so give them a bit of space.
The behavior we expect from a list box is that it keeps its width but extends in height. This can be achieved by adding a splitter and locking it.
As Sam Sharp says splitters are cool!
I wanted the measurements and cursors to be off-screen unless required. They also needed to stay the same size, in a repeatable position, and become visible when selected. To achieve this we have a button that keeps to the edge of the screen and, when selected, comes onto the screen allowing interaction with the tabs.
This little tip was from Tim Hurst's presentation at the CLD summit, with a couple of modifications by me.
We don't want this part of the display to re-size, so I made the tab a strict type-def. When not required I push it to the side.
I've called the tab control settingsCarrier and the code to bring it in is as follows
The subvi just sets the position left or right depending on the SettingsHidden boolean (I have a separate button to push it back out of sight) and this vi is also used on a return button or resize event.
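Since a LabVIEW diagram doesn't paste into a blog well, here's the gist of that subvi's logic as a Python sketch. The names and coordinate scheme are my assumptions, not the actual VI; in LabVIEW this is just a write to the settingsCarrier tab control's Position property.

```python
def carrier_left(panel_width, carrier_width, hidden):
    """Return the Left coordinate for the settingsCarrier tab control.

    When SettingsHidden is true the tab is parked just past the right
    edge of the panel (out of sight); when false it is pulled fully
    on-screen, flush with the right edge. The same calculation runs
    from the settings button, the return button and the resize event.
    """
    if hidden:
        # Push the carrier off the right-hand edge of the panel.
        return panel_width
    # Bring it fully on-screen against the right edge.
    return panel_width - carrier_width
```

Because the position is recalculated on every resize event, the carrier always stays parked (or docked) no matter what the user does to the window.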
The only issue left was that if you had the tab visible it looked really ugly when you resized the panel. My trick is to have a tab within a tab and make the carrier tab transparent; I could then flip it to a transparent window when it is hidden.
I've been a bit sneaky by embedding a graphic of a tab in the tab selector (it would be transparent too otherwise).
The only other tip I would give: the simplest way to handle these screens is to move as much functionality as possible into menus and the window's title bar, so you then don't have to worry about it resizing.
I hope this has been useful and if you have any questions post them in the comments (or pm me)
Lots of Love Steve
Hope your wiring is going well!
LabVIEW can be quite cumbersome when dealing with screen sizes, and our stock way of dealing with this is to restrict the screen size of our applications. This hasn't been too much of an issue until now: my current project needs to run at different resolutions and it also needs to undock graphs, which also need to be resizeable. I thought it might be useful to list out the techniques and challenges.
First let's think about what we want to happen when a screen size is changed.
First let's deal with undocking.....
The first challenge here is to make a button that keeps its proportions and is locked to a position on the graph. I want it always to be in the top right-hand corner of the graph. Aesthetically I want it to be unobtrusive (i.e. use transparency to make sure it doesn't get in the way of the graph). I made a line drawing in LibreOffice Impress and pasted this into a vertical button (from the UI Control Suite/System Controls palette). The easiest way to restrict its size is to set it as a Strict Type Def. Finally we need to lock it into the top right-hand corner of the graph; sadly the only way to do this is by writing code!
x1Undock is the button, so we take the width and move it relative to the Right of the Plot Bounds of the graph.
This is code worth noting, as you'll be using it a fair bit, because LabVIEW is natively a little random when left to its own devices!
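The arithmetic behind that positioning amounts to something like the sketch below. The names are made up for illustration; the real code reads the graph's Plot Bounds property and writes the button's Position property.

```python
def undock_button_position(plot_right, plot_top, button_width, margin=2):
    """Pin the x1Undock button to the top right-hand corner of the plot.

    plot_right / plot_top come from the graph's Plot Bounds; the button's
    Left is its own width back from the right edge of the plot area, so
    it hugs the corner whatever size the graph is resized to.
    """
    left = plot_right - button_width - margin
    top = plot_top + margin
    return left, top
```

Run this from the panel-resize event and the button stays glued to the corner as the graph grows and shrinks.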
Pressing this button will instantiate a graphscreen.vi instance, pre-loaded with the settings from the docked graph.
The code to instantiate the graph is as follows
Keeping control of all the dynamic references in this manner has worked out nicely; it allows us easy housekeeping and access to control and indicator values.
The next article will discuss the undocked graph and give some more design pointers.
Lots of Love
As a multi-media project I've been recording myself designing a screen with a view of condensing the entire process into one small, sped-up video.
I was so pleased with the results I'm going to post it incomplete, because it made me happy!
Right back to work for me
I suppose I should give some details of the job.......
The idea here is that my esteemed colleague Adrian (Tark on here) is doing all the hard stuff to do with deciphering comms, designing servers and depositing data. In MVC terms he is handling the Model and the Controller. As is our norm we have decoupled the screen (essentially a separate loop with a queue component) and he has sub-contracted that aspect to me (I like UI stuff). To keep it simple I don't have access to the code repository; I just give him my updates to incorporate.
So while not a complete MVC we can see the advantages of splitting off the UI.
For validation reasons we are attempting to keep the screens similar to those that are currently part of the system.
Lots of Love
Update 09-Feb-2016: Here's a picture of the system, just after its Lloyds Register factory test was completed.
I have some more LabVIEW features from the olden days for all you young G-Stringers today.
I presented this feature in my CLD summit presentation here, but thought it worthy of special mention and better quality video.
So here it is mentioned in the wonderful LabVIEW 1.2 manual (1989).
And here it is in LabVIEW 2014 under Operate>>Data Logging
Very useful for unit testing, I would think, and this should draw a line under my penance to Jeff Kodosky!
My next article will be on undocking and resizing user interfaces and some useful techniques to make them a bit easier.
Lots of Love
And I left it with the PXI Trigger Routing set up in MAX. I wanted to do this in software, because I don't like systems that need lots of different programs to work.
We use the PXI trigger lines to route signals along the rack's backplane to connect all the cards. An 18-slot chassis is split into 3 buses, so we need to connect these buses to transfer signals along the backplane.
We have allocated PXI_Trig0 for the Start Trigger Line and this tells the cards to start acquiring data. This is generated from the trigger card (Card2) so needs routing from bus 1 along 2 and 3.
We also use PXI_Trig2 for the Sync Pulse. This comes from PFI2 of the 6674T in slot 10. So will need routing outwards from bus 2.
PXI_Trig1 and PXI_Trig3 are left-overs from development and can be ignored (but I'm at a fragile point of my testing so they stay in!)
So to do it in MAX you open up the rack like this.
In my opinion it has a pretty weird API and quite a few caveats, so stay with me.
First of all I wanted my software interface to look similar to the one provided by MAX.
It's good practice to clear all of the trigger assignments before allocating new ones, because if a routing has already been assigned it will throw an error.
Here's a breakdown of the commands
VISA Search for something with BACKPLANE in its reference and that's our kiddy. Stick her into a shift register for further use.
The VISA Unmap Trigger.vi is probably overkill, but for now it stays in until it's fully tested (it's a pretty early release yet).
Without throwing errors, just run through all combinations to clear up any bad housekeeping.
Finally close the reference when done.
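To make the housekeeping step concrete, here's the "clear everything, ignore the errors" idea as a Python sketch. `visa_unmap_trigger` is a stand-in I've invented for the VISA Unmap Trigger call, not a real API, and the bus/line lists reflect an 18-slot chassis split into 3 buses.

```python
# Trigger lines in play and the three backplane bus segments.
TRIG_LINES = ["PXI_Trig0", "PXI_Trig1", "PXI_Trig2", "PXI_Trig3"]
BUSES = [1, 2, 3]

def clear_all_routings(visa_unmap_trigger):
    """Try to unmap every source/destination bus pair for every line.

    Routings that never existed raise an error when unmapped, so we
    swallow those and carry on; the point is simply to guarantee a
    clean slate before allocating new routings.
    """
    cleared = 0
    for line in TRIG_LINES:
        for src in BUSES:
            for dest in BUSES:
                if src == dest:
                    continue
                try:
                    visa_unmap_trigger(line, src, dest)
                    cleared += 1
                except RuntimeError:
                    # No such routing existed; that's fine.
                    pass
    return cleared
```

In the LabVIEW version this is the same nested loop, with the error wire deliberately ignored on the unmap call.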
Now the hard work starts in adding all of this to my server architecture.
Edit: This is version D1:04 (I'm working on D1:06 currently), but it shows the sync nicely.
Hope it's been of use to someone.
Lots of Love
WARNING: This is one boring-ass blog post! But it may just help a couple of people out, if anyone can actually find it.
I'm lucky enough to be working on a multiple rack, high channel count oscilloscope project at the moment and I thought I would tell the story of how we got the various chassis synchronised to <1ns.
The requirement is pretty standard IMO, I want all the cards on all the chassis to trigger at the same time. The trigger signal can go to each of the chassis but they need to be synchronised and in phase.
When I saw the hardware list (8x 18 slot PXI chassis, full of oscilloscopes and each having a timing and sync card) I thought this is the equipment for the job.
Right tools for the job and a standard requirement, what could possibly go wrong? This was my 1st mistake. It was really hard!
In the end I had to lean on Sacha Emery, one of the patient and knowledgeable Systems Engineers employed by NI, and he sweated blood on this job too; there just didn't seem to be much information out there.
Anyways, here is the solution and I hope it saves someone some time.
To test the system I send a pulse and read it on channels across the backplane of the PXI rack. We need to use the OCXO (oven-controlled crystal oscillator) as our clock and export this clock to all chassis; we also need to ensure the scopes start at the same time and are linked to this clock on the backplane of the rack. Additionally we need to make sure the clock signals go across all of the racks' backplanes (18-slotters are split in 3 for flexibility).
Route the OCXO from the PXIe-6674T on the master chassis to the CLKOut connector on the PXIe-6674T.
Gotcha: Dev1 is the 6674T in my system and Oscillator is OCXO, I didn't find any documentation that stated that this is so, but it is!.... apparently.
Turn on the PLL circuit and set the frequency to lock. To cope with the splitting of PFI signals we need to disable ClkIn Attenuation and reduce their threshold to 0.5V (or lower).
To remove propagation delays due to cabling we bring the clock back in again from the CLKOut terminal via a matching length lead (matching to the slaves). This is routed to the 10MHz backplane clock.
The start trigger is routed out to PFI0, this is the signal to the other chassis to tell them to start logging data (for pre-sampling). This does not need to be matched in length.
We then use the Global Software Trigger as our Sync Pulse. Because this needs to compensate for lead length propagation delays it is output on PFI4 and routed back on PFI2.
The slaves should be set-up as follows and should be waiting for the PLL to lock prior to the master.
Pause and loop, waiting for PLL to lock on both master and slaves.
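That pause-and-loop step is just a polling loop. Sketched in Python, with a hypothetical `is_locked` callable standing in for reading the 6674T's PLL lock status:

```python
import time

def wait_for_pll_lock(is_locked, timeout_s=10.0, poll_s=0.1):
    """Poll the PLL status until it reports lock, or give up.

    is_locked: callable returning True once the PLL has locked
    (a stand-in for the 6674T status query; run one of these per
    chassis, slaves before the master).
    """
    deadline = time.monotonic() + timeout_s
    while not is_locked():
        if time.monotonic() >= deadline:
            raise TimeoutError("PLL failed to lock within %.1fs" % timeout_s)
        time.sleep(poll_s)
```

The timeout matters: if a cable is missing the PLL will never lock, and you want a clear error rather than a hung startup sequence.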
Card2 in each rack is designated as the Master Card and by this we mean the card that creates the trigger for the rack. The start trigger is routed here to the PXI Trigger line on the PXI backplane and then out of PFI0. The Sync Pulse is also routed from PXI_Trig2 so that NI-Tclk is synchronised across the backplane for all cards in the rack.
For cards that are not the trigger card the following code is run.
Here the Start Trigger is sourced from VAL_RTSI_0, which is yet another name for the PXI backplane (PXI_Trig0). The sync pulse is routed from the backplane PXI_Trig2, as on the master chassis.
The other cards are set up as follows.
All the NI-TClk references are collected; we then configure the properties commonly required for TClk synchronization of device sessions and prepare the Sync Pulse Sender for synchronization. It's then held up waiting for the slave to be in a state ready for the next stage.
The slave is set in exactly the same way except it has to be done after the Master.
The first VI sends a pulse on the global software trigger terminal. Then we finalise the NI-TClk synchronising and wait to send the start trigger. When the button is pressed the start trigger is sent and the whole system is ready to acquire data.
The test was to send a pulse to Chan 0 on the first and last cards in 2 racks. The first channel on each rack will be used as the trigger source for the rack and the racks will be synchronised. The cabling has been carefully balanced to minimise propagation delays in the leads.
The test was done at 1.25GSamples/sec and 125000 samples were taken for each channel.
We then combined the graphs and compared the readings.
From this we can see a delay caused by the triggering system (essentially the detecting of the trigger and passing it to the backplane). This is <1ns and only applies to the trigger channel and can be post-processed out if required.
The synchronisation of Master Card 18 Chan 0 and Slave Card 14 Chan 0 is representative of all the other cards and channels in the system and should scale up to 8 racks (the trigger signals will be reduced by splitting them). The results here are 16ps which is astonishing!
The code and drawing is attached.
When I have completed the system I will probably post the code on-line, because I think there should be a ready made LabVIEW system available for this type of application (personally if I was buying $500k of hardware I would expect something to be available)
I promised I would write all this up as a response to a query on the forums.
From this point I need to package all this up as part of my Client-Server architecture and the job should be a good-un.
Hope this helps somebody in a similar situation
lots of love
Added Backplane routing stuff here
Hello My Lovelies,
I've posted an uncut version of my Immediacy presentation on youtube.
Where I'm allowed I will also be posting NER CLD Summit 2015 vids on the CSLUG youtube channel and there's some nice ones!
Enjoy and sorry about my problems with the equipments (I blame age!)
Lots of Love
Happy Friday My Lovelies
I'm updating my presentation on Immediacy for the CLD Summit on September 9th in Newbury. Part of this exercise is to right a wrong that I committed during the original presentation to no lesser a person than Jeff Kodosky (ooooooops). You'll have to check out the presentation video for the full confession or wait for somebody to snitch on me in the comments, but during my research I have dug out this little gem!
(I've linked the image to the document)
It's a really interesting read because at this early version some of the design reasons are writ large.
And this segues nicely into 2 nice little bits of new functionality in LabVIEW and just to show I'm a new tech sort of guy I've made a video instead of typing it. ooooooooooooh multimedia baby!
I've seen a couple of demos and the filter hasn't been mentioned (and it's brilliant!), and the hyperlinks were only http; linking to local files is way more useful for me. Ideally I would like to link to files relative to the containing VI, but I imagine this would cause all sorts of issues.
See you at the CLD summit if you are going
Bonjour Mon Amis,
Hope you all had a spiffing NIWeek if you attended. It looked splendid from my sad and lonely office!
I have reached perfect Twitter balance in that I have as many followers as people I follow on Twitter (it is on purpose and a bit of self analysis uncovered that it has to do with egalitarian OCD!). Someone I follow is James MacNally and he tweeted the following.
And it linked to the following excellent article and that article mentioned the Nirvana Fallacy which is a rather wonderful term for the subject of this article. (I was originally going to title it "People Ruin Everything").
The nirvana fallacy was given its name by economist Harold Demsetz in 1969 and refers to the informal fallacy of comparing actual things with unrealistic, idealized alternatives. In software it is characterised by requiring a "perfect" solution in areas where it is clearly unfeasible or even impossible.
Software can be flexible OR easy to use. It is rarely, if ever, both.
Deliver it in 2 weeks AND be completely robust. Radically shortened timescales will affect the robustness.
We don't have any requirements but we expect a low fixed price and delivery date.
In many cases the drive for impossible perfection actually inhibits usable improvements being made. It's way better to deliver something useful and improve it. Similarly it is important to manage stakeholder expectations.
Pareto had life sussed when in 1896 he noticed that 20% of the peapods in his garden contained 80% of the peas. (I know that's not accurate, but it's way better than saying that he published a paper etc etc). The Pareto Principle, or the Law of the Vital Few, helps to concentrate effort into the areas of maximum benefit. If you can offer 80% of the functionality in 20% of the time you really are onto a winner.
So the article was originally titled "People Ruin Everything" and it was going to be a diatribe on how you can put in all these fantastic processes, APIs, frameworks, methods etc and they stay fantastic until people come stomping in, with their great fat feet and balls everything up. In the Nirvana Fallacy I see an explanation for this, as a provider of these things we expect the user to be as perfect at using it as the designer is. Well they won't be!
I had a customer who would break everything I gave her; it was uncanny. But rather than get grumpy I made sure she was the first person to see any software I wrote. She was the world's best software tester.
The moral of this most rambly of ramblings is don't expect perfection, you will only receive misery and frustration for your efforts. Improvement is a good expectation. Expect improvement!
Lots of Love
So the year is shooting past at a breakneck speed and here comes NIWeek 2015, sadly I blew my Jolly Fund in Rome and I won't be crossing the Pond this year. So this is a bit like me in a restaurant picking the meat dishes for my carnivore friends (I'm a veggie), here's what I would go and see if I was going....
As well as the keynotes, these are my picks (bear in mind I don't do many test systems any more so it's heavily biased on the software side of things)
TS6421 - Don’t Panic: LabVIEW Developer’s Guide to TestStand - Chris Roebuck
TS5740 - Computer Science for the G Programmer, Year 2 - Stephen Mercer and Jon McBee
TS7238 - Curing Cancer: Using a Modular Approach to Ensure the Safe and Repeatable Production of Medicine - Duncan MacIntosh, Fabiola De la Cueva
HOL7309 - Hands-On: Introduction to Software Engineering and Source Code Control - Chris Cilino
TS6142 - Hidden Costs of Data Mismanagement - Pablo Giner, Stephanie Amrite
TS6408 - NI Linux Real-Time: RTOS Smackdown - Joshua Hernstrom
TS6720 - Effective Project Management of LabVIEW Projects - Ryan Smith, Paul Herrmann
TS6977 - 10 Differences Between LabVIEW FPGA and LabVIEW for Windows/Real-Time Programming - Erin Bray
TS6303 - LabVIEW FPGA Programming Best Practices - Zachary Hawkins
TS5898 - Inspecting Your LabVIEW Code With the VI Analyzer - Darren Nattinger
TS7477 - Using LabVIEW in Your Quality Management System - Maciej Kolosko
TS5698 - Augmenting Right-Click Menus in LabVIEW 2015 - Darren Nattinger, Stephen Loftus-Mercer
HOL5977 - Hands-On: Code Review Best Practices - Brian Powell
TS6420 - 5 Tips to Modularize, Reuse, and Organize Your Development Chaos - Fabiola De la Cueva
My interests here are software engineering, FPGA, and an increasing interest in data management. So I've chosen 13 hours of presentations + keynotes. Hopefully they will be video'd!
Talking of videos I've been busy tidying up some of our CSLUG presentations (more to come!)
These can be found here https://decibel.ni.com/content/thread/32010
I think it will make a nice archive over time.
If we manage to organise it, we're doing something a little different for the next CSLUG meeting on the 17th September, we're going to pick a subject and have a series of 4 slide presentations with multiple presenters. We should then get a spread of concepts for databases, error handling, customer sign-off etc etc. Each subject will be about 30 mins. We'll see how it works out!
Finally I'll be presenting on immediacy at the CLD summit in Newbury on 9 or 10th of September, so I'm busy updating my Rome presentation, I'm actually adding more technical content and taking some of the jokes out! I've been applying some of the techniques and can show some of these off now.
Workwise: SSDC Maritime Division is beginning to blossom, we're currently waiting on orders that are worth 150% of last years turnover! I'm also working on a distributed oscilloscope program that seems to be generating some interest (I'm actually quite proud of it so far) and then we are waiting for the go-ahead on a data repository design and reporting system. Any of these jobs may have a profound effect on SSDC and I may finally get the company hovercraft I always wanted. The downside is I will probably be slower on my blog output!
If you are travelling to Austin; travel safe, if you are presenting; present well!
Lots of Love
I'm going to be very candid in this article and once again no LabVIEW so bonus random points for me.
There will be no meditation, sobriety or jogging in this article so rest easy.
I'm pretty unemotional and shallow. In fact below are photos of me with my various moods on display
And I have an iron constitution, so essentially for 40 odd years my body was just for moving my brain from point A to point B and for filling with beer.
My twitter feed has a few comic book writers on it and these guys are doing a job they are intensely enthusiastic about, they are also doing a job they can carry around with them. Most are slightly younger than me and have been doing it for years.
It sounds idyllic and not dissimilar to my situation.
So it really interested me when one of these writers complained about feeling anxious and asked what he could do about it (it was anxiety to the point of not being able to work, not just a bit fretful).
And then came the flood of rubbish advice.
And I really wanted to chip in, but not on twitter.
A few years back the same thing happened to me. Here's the details.
I love writing software, just love it. I'd do my 9-5 work and go home and have some food, say hello to my family. I'd then sit down and while they watched rubbish TV I would start programming again and this would then go on until 10ish and then bed. Every gap I had I would fill with software.
This was fine until one day I was driving home and I felt most odd, like I was having a heart attack. Bear in mind my body had never spoken to me for 45 years so I wasn't very good at listening.
Trip to the Docs, Heart going too fast, asked me if I was stressed, working too hard, told me to stop being stressed and working too hard.
So I looked at my life and thought, surely doing something I loved wouldn't have this effect on me, would it? Well, actually it did.
The issue was I wasn't differentiating my work time and home time. Essentially I was never finishing my working day.
I just separated the things I think as work (i.e. programming), with the things I don't regard as work. The things I do as work I do 9-5, the other stuff I do when it entertains me.
Luckily I don't regard research, writing blogs, making presentations as work so I could still manage to pack in some extra hours, which is useful when running your own business.
Did it work? Almost instantly, and following these rules has kept me very happy for the last few years.
Just look at my little face....
So my friends be nice to your brain, it's kind of useful!
Lots of Love
I'm very enthusiastic about the subject of the last article and will attempt to flesh out some details in the months to come (sorry but my brain does not work in a linear fashion and neither do my enthusiasms).
And these grand statements may be part of another problem, a problem that we have contributed to in our innocence.
In fact we kind of predicted it by using the following quote in our book.
"If this is true, building software will always be hard. There is inherently no silver bullet"
- Frederick P. Brooks, Jr 1987 Computer Vol 20
And I really want everyone to understand this. Please, please understand it!
We presented LCOD as a method and carefully explained modular design, thinking that by describing the whys and wherefores the job was done. We then went back to our day jobs and didn't really have time or energy to follow it up.
I see all the frameworks, methodologies, processes, techniques and theories enthusiastically evangelized (ours included) and I want to shout "a badly implemented Silver Bullet will still be crap"
What do I mean by badly implemented? Taking LCOD as an example we can separate the techniques of implementation (what we place on the block diagram) and the hard work of design. So we could quite easily create a god component that does everything we want our program to do, but this is obviously a bad design decision. With LVOOP we could similarly have god objects and I expect we could have a god actor (Morgan Freeman).
Someone said the following to me at the CLA summit "OOP is the best way to write software!", it elicited the following slightly snarky response... "Define BEST". Statements like that de-value all the great software written that isn't OOP and doesn't really add anything to the conversation. OOP is a great and valuable tool to be added to your toolbox and used where appropriate. You do not become a great carpenter by using great tools, they help but it's not the full story.
And with all our best intentions you can put the silver bullet in a dirty weapon and have it back-fire on you (as Fab put it). Now I have some bad news for buyers of self-help books, I think good software design comes with hard work, study and experience, and pretty hard won experience at that! Also good software design does not depend on the methodology or process you favor, it depends on using a technique appropriate to the requirements.
"Rules, guidelines, and principles are gems of distilled experience that should be studied and respected. But they’re never a substitute for thinking critically about your work."
Jeff Atwood The Ferengi Programmer
I found this quote from a spat between Joel Spolsky and Uncle Bob Martin, 2 people who are very loud in the software design community and have very differing views on Test Driven Design. Both can't be right, but their opinions are espoused as absolute truth.
And here is an uncomfortable truth: you won't create decent designers by teaching techniques, offering frameworks and imposing rules. Good designers understand their tools, understand design and can assess what they have been told and reject what doesn't work for them. Also they generally get better with each job they do!
Software is a practical subject the best way to learn how to design is to design stuff. (<---Mr Mercer has made me re-evaluate this statement!)
Software is a practical subject the best way to learn how to design is to design stuff, while being taught how to design stuff. (<-- this is how I learnt and it served me pretty well, but I needed the experience of writing lots of code prior to being taught to really hammer home the lessons)
As one of the loudest people in our little community I have one message for you all to take away
Curmudgeonly yours with love.
I had a go at SMORES a while back in this article
(actually it was really an attack on mindlessly following acronyms)
and like most things it's not quite as black and white as I originally put it (time, conversation and thought have added colour and shade).
The conversation on this subject has mostly been with AristosQueue and I think we have stumbled on a point of convergence (I'm only speaking for myself here). Link to one part of the conversation is here
The breakthrough statement is as follows.
To back up my theory I've split a project into 3 stages
In reverse order let's look at the Low Level API/Drivers part. So this would be hardware drivers, reporting, data handling, communications.
Next we come to the framework; there are plenty available (Actor Framework, Delacor, JKI, James Powell's, TLB, Aloha to name a few), and this should handle all the common tasks associated with most applications.
Finally we have the Customer Facing/Final Application software, the bit that brings together all of the other parts and makes you money.
Here's a table of important considerations for good software design. And as an exercise I've selected 5 of the most important for each stage.
A bit of Acronym/Anagram fun gives us
Low Level API/Driver = STORE
Framework = FRERC
Final Application = FRUMF
It's important to note that the considerations I picked are the ones that are important to me, my company and my customers. They may not be important to you, so pick your bloody own!
So if the requirements at each stage vary so much it stands to reason that we should apply different design tactics, even different designers.
There is one extra consideration that I would like to put in for frameworks (with thanks to Thoric) and that's Inobtrusive. I want a framework to be visible but not get in the way. I would like to see what it is doing but I don't want it masking the solution to problem I'm solving.
The other aspect to think about is the size of each section, so if you only ever tackle one type of job you may find that the customer facing part is small compared to a big ol' framework. In short they are variable in size.
More questions than answers as usual, but it's food for thought.
And to quote Leslie Lamport for the second time this week "Thinking does not guarantee that you will not make mistakes. But not thinking guarantees that you will."
Lots of Love
I'm such an adult! but the title made me laugh so it stays.
AristosQueue presented "Various Tangents Around The Circle of Design Patterns" and as usual it was a well thought out, extremely well presented and original piece of work. Sadly it wasn't recorded so you will have to take my word for it.
Now I've said all these nice things I'm going to discuss one of the ideas presented and my starting position is that I'm not entirely sure a component as defined in LCOD is actually a singleton pattern at all....
I have a very early edition of the GoF book and I actually was pretty keen on patterns when I was researching our book. Sadly my edition of Design Patterns is in almost perfect condition. This tells me only one thing: I didn't find the book very useful or applicable to LabVIEW. This is where AQ's talk of idioms comes into play, I think. My opinion is that the patterns described are idiomatic to C++/Java, and they help deal with a lot of the heartache and nause related to those languages. I strongly believe LabVIEW has its own idioms.
In software engineering, the singleton pattern is a design pattern that restricts the instantiation of a class to one object. This is useful when exactly one object is needed to coordinate actions across the system. The concept is sometimes generalized to systems that operate more efficiently when only one object exists, or that restrict the instantiation to a certain number of objects. The term comes from the mathematical concept of a singleton.
There is criticism of the use of the singleton pattern, as some consider it an anti-pattern, judging that it is overused, introduces unnecessary restrictions in situations where a sole instance of a class is not actually required, and introduces global state into an application
and here's a pretty good definition..
"Ensure a class has one instance, and provide a global point of access to it." (Note: I was too lazy to read the book again so swiped it from this excellent article http://gameprogrammingpatterns.com/singleton.html)
For us a component/action engine is purely there as a high-level construct to restrict access to related modules in a program. This is a design construct aimed purely at simplifying the block diagram. So for example, here is a component as defined by us: this is an oscilloscope, and the customer requires the ability to have either NI racks or Agilent, with a variety of oscilloscopes.
Note: I'm still wrestling with the Acqiris Cards, they will hopefully be an AcqirisScope subclass as per the NIScope class.
As we can see there is little relationship to a singleton pattern here, rather the component is a restricted interface to a group of related objects. So what is the point of this? I think this is where AQ and I are converging (this is not a static situation, because we are analyzing designs at greater depth, publishing our ideas and discussing them).
I will talk about this convergence in greater depth in my next article; for a preview, sneak a peek at the conversation here
Lots of Love
Well Hello programming chums
At the European CLA 2015 summit Sacha Emery presented "Actor and benefits of different frameworks", where he described his journey through various design choices when applied to a standard set of requirements. This segued nicely into various topics that were being discussed throughout the summit, and I also selfishly hijacked it to promote one of my interests.
I'm after finding some way of measuring the complexity of a system; ideally I would like this to be something tangible, like function points, which count items such as
Internal logical files
External interface files
The complex problem is how to identify these items within a LabVIEW Block Diagram (I haven't given up on this).
One of the advantages of function points is that you can get them from requirements and use them for estimation. My primary interest is not estimation tho', I want a common measure of the complexity of a project so we can better assess and discuss our design choices. The disadvantage is that it is hard work and takes a lot of time to undertake a study.
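For anyone who hasn't met them, the arithmetic behind an unadjusted function point count is just a weighted sum; the sketch below uses the standard IFPUG average complexity weights, with the counts themselves invented for illustration:

```python
# IFPUG average complexity weights for the five component types
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    """Weighted sum of component counts = unadjusted function points."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Invented example: a small system
ufp = unadjusted_function_points({
    "external_inputs": 3,
    "external_outputs": 2,
    "external_inquiries": 1,
    "internal_logical_files": 2,
    "external_interface_files": 1,
})
print(ufp)  # 3*4 + 2*5 + 1*4 + 2*10 + 1*7 = 53
```

The hard part for us, as noted above, isn't this sum; it's deciding what on a LabVIEW block diagram counts as each component type.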
So how does Sacha's presentation link to this? Like all good test engineers I like my processes to be calibrated, and I think this might be the perfect tool to do it. Here's how we think it will work (and I say we because this is based on conversations with Jack Dunaway, Sacha and various other luminaries).
We have a common set of requirements.
We apply a framework/design/architecture to provide a solution.
We tag all the project items that are original items.
We analyze all the items that are in the problem domain (i.e. separate the solution from the architecture).
We apply a change/addition in requirements.
We analyze the additional items that have been generated to satisfy the change.
This should give us several numbers:
Items in Basic Architecture (A)
Items to Solve the Problem (B)
Items to Make an Addition (C)*
And this is what we call the Emery scale.
The preconception is that generally the code to solve the problem should be pretty repeatable/similar.
So dumping Function Points for the time being and going back to a more simple idea (care of Mr Dunaway), we should tag the architectural parts and count the nodes using block diagram scripting. As Chris Relf has stated there is some good data to be had from LOC (lines of code) comparisons in similar domains.
And what use is all this work? For one, it should give us an estimate of the number of VIs our design will have on a given target (the ratio of B:C(#requirements) should give us a size). This is an essential bit of design information.
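To make the arithmetic concrete, here's a hedged sketch of how the A/B/C numbers might feed an estimate; the function and all the counts are my invention, only the A/B/C breakdown comes from the list above:

```python
def emery_estimate(A, B, C, extra_requirements):
    """A = items in the basic architecture (a fixed cost),
    B = items to solve the baseline problem,
    C = items per additional requirement."""
    return A + B + C * extra_requirements

# Invented calibration numbers: 120 architectural items, 80 problem
# items, ~10 items per requirement change, 5 new requirements.
print(emery_estimate(120, 80, 10, 5))  # -> 250
```

The point of the calibration exercise is to replace the invented numbers with measured ones per framework, so the same sum becomes a real size estimate.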
These are early stages and it's a long-term project. It could be part of my CS thesis if I was doing one...
This is only one potential benefit of the kind of baselining that Sacha has started (from an educational perspective, seeing the same problem solved with different frameworks will be invaluable).
*A late addition to give us some concept of how a change affects the project
This year's event was in Rome and, in my opinion, the quality of the presentations was higher than ever. It was held in the same hotel where most people were staying, and this improved the networking greatly. One of the things I look forward to most is meeting old friends and new.
To summarise :-
Presentations: Highest quality
Access and involvement of NI People: Marvelous
Networking: What a nice bunch of people the LabVIEW community is, I didn't meet a single person I didn't like (apart from a certain Mr Mercer, who I have gone right off! I got my revenge tho', oh yes...)
I think I have garnered enough material for 6 or more articles which is a real bonus.
So to summarise the presentations (and this is all from my perspective)
You are in a rich place indeed if you are in the market for a framework thanks to the efforts of these folks.
James Powell - A Dynamically Reconfigurable Actor System
Fabiola De La Cueva & Chris Roebuck - Delacor QMH (mentioned in their presentations)
Jim Kring - State Machine Objects with Events
It was interesting to me as well because it got me to thinking about what makes a good framework.
Dmitry Sagatelyan - Going Agile, Applying Agile OO Design Principles
Dmitry was showing off a vintage boat refurbishment (Al Capone's) and it was very cool. Beautiful code on a gorgeous ship.
Richard Thomas - Code Reveal – Swish, a UI toolkit for Windows 8 type touchscreens
Dr Thomas does the BEST UI stuff!
I really liked the discursive nature of a lot of these presentations.
Fabiola de la Cueva - To Model or not to Model
As always a great presentation from Fab, I completely agree about using models to clarify your designs.
My conclusion, not Fab's: use modelling to help your own brain and your customers' brains, but don't have models as your controlled documentation if at all possible, as it's another thing to maintain.
Stephen Loftus-Mercer - LabVIEW and Classic Design Patterns
Oh he got me hook, line and sinker. Reeled me right in. Once again this was a brilliantly researched and fantastically presented piece. If you ever get a chance to see him present you should do it!
Sacha Emery - Actor System Benefits of Different Frameworks
Sacha has done important work here by baselining a common set of requirements into various frameworks and techniques. This has important ramifications that I'm really excited about. We just need Sacha to stop being so self-effacing, this could be the dawning of the Emery Scale!
Arnoud de Kuijper - An Architectural debate: How to Abstract Hardware?
Always good, talking about when and where to abstract. No conclusions, but a well-led discussion; Arnoud is a star presenter IMO and another one to just go and see if you can.
Malcolm Myers - The Trusted Advisor
Well presented and extremely useful, I feel we should see more of this type of presentation in future events.
Chris Roebuck - Architectural Design Processes and Methodologies For LabVIEW
Light and interesting talk on how Chris and Fab (for indeed it was them) tried to reconcile very different ways of developing code.
Jarobit Piña - Revealing The Secrets Of The Packed Project Libraries
A nicely clinical review on an area most of us hadn't really deep-dived.
Peter Horn - Assertions in LV
This is based on a section in Code Complete and is a really good piece of work and damn useful. Good software engineering!
James McNally - LabVIEW and the Web
Useful to know and very well presented. Now I need to review his slides to see what to get up to speed on!
Sam Sharp - WebSockets - Bringing LabVIEW to the Web
Extremely well designed tool-kit, one I will be using.
And of course the peerless Jeff K showed us how to visualise asynchronous data between loops; I thought it was particularly effective with producer/consumer loops set side by side.
Any techniques presented were usually shown with advantages and disadvantages, I really applaud this clinical approach.
After that there were the discussions that fleshed out the ideas and formulated new ones.
This is a valuable event and I am so lucky to be able to attend, if you get the chance give it a go! It's also great fun.
All I can say is thanks Nancy Jones, you done good!
Lots of Love
Right then, clever clogs!
Here's a blast from the past and I was quite pleased with it. I'll show the code later...
I have the following characters (0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F,G,H,I,J,K,L,M,N,O), 25 in all.
I want to transmit 20 characters but only have 14 bytes to do it.
How is it done?
PS. As a technique this would be useful in the FPGA world I think.
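For anyone wanting a head start before the code appears, here's one possible approach (a guess on my part, not necessarily the original trick): treat the 25 symbols as digits in base 25 and pack the resulting big number into bytes. 25^20 needs only 93 bits, so it fits comfortably in 14 bytes (112 bits).

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNO"  # the 25 allowed symbols

def pack(message):
    """Pack a 20-character base-25 message into 14 bytes."""
    n = 0
    for ch in message:
        n = n * 25 + ALPHABET.index(ch)  # accumulate as a base-25 number
    return n.to_bytes(14, "big")         # 25**20 < 2**112, so it fits

def unpack(data):
    """Recover the 20 characters by peeling off base-25 digits."""
    n = int.from_bytes(data, "big")
    chars = []
    for _ in range(20):
        n, digit = divmod(n, 25)
        chars.append(ALPHABET[digit])
    return "".join(reversed(chars))

msg = "0123456789ABCDEFGHIJ"
assert unpack(pack(msg)) == msg and len(pack(msg)) == 14
```

On an FPGA you'd more likely do the equivalent with fixed-width multiply/accumulate logic rather than big integers, but the idea is the same.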
If you're coming to Rome next week, come and say hello, and travel safely.
Sorry I've not been very talkative lately, I've been busy putting together my presentation for the CLA Summit in Rome and it's been a considerable effort (although you'll find it hard to believe when you see it!).
So all the slides are complete (sort of), and now I'm just running through the story I want to tell.
All in all I'm fairly pleased with it, although I'm suffering the usual pre-presentation angst.
In my strange mind if I wasn't asking the above questions I guess I wouldn't bother doing the presentation in the first place.
My research was hampered by there being very little computer science on graphical programming, this is as surprising as it is interesting. Why is LabVIEW ignored by academia?
I joined the ACM to help with the research and I have to say it's pretty good value at $99/year, although it does seem full of articles like
"META II: Digital Vellum in the Digital Scriptorium - Revisiting Schorre's 1962 compiler-compiler "
A fascinating read, I highly recommend it!
The webcasts are brilliant tho', involving industry giants like Steve McConnell et al.
Business-wise we've been really busy this year, and the decision to step away from industrial test systems seems to be paying off; we're now breaking into maritime systems and we hope/believe this will be a major source of work for us. The best thing about it is that it is pretty much virgin territory with regard to NI. Keep a look out for the SSDC emergency response yacht. Thanks cRIO for giving us the tools to do the job!
Finally a shout out to the LabVIEW community, you are so generous and helpful. I needed a problem solved quickly and at little cost (a demo we're not being paid for), and on the off-chance someone had done something similar I posted it on the forums; within 48 hours I had a working VI in my grubby mitt. Kudos to all! I don't post enough on the forums (my weird little niche interests in LabVIEW design aren't really relevant to a lot of the specific questions being asked), but I need to pay back my debt so will now try and answer at least 3 questions.
I will do at least one article based on my presentation after the summit and I also have one on Agile in the pipeline, I've been watching some quite interesting videos on it lately.
Here's a sneak preview of a howl of anguish in powerpoint form
Travel safe anyone coming to Rome, I look forward to seeing you there.
Lots of Love
Tomorrow the CLA Summit in Austin starts and I really wish I was there. Ah well I'll have fun in Roma.
So just to bring everyone up to speed.....
CLA = Certified LabVIEW Architect <--- highest LabVIEW certification.
Every year there are 2 CLA summits, one in Austin and one in Europe. A subject is picked (which I usually studiously ignore for my presentation), and 2 days of presentations and a day of round table discussions follow.
It's full on! I have to admit I begin to flag after day 2, and while I'm in confession I'll also admit that sometimes the discussions go right over my head, but that's actually a really good thing. I want to be challenged by the discussions, it's brilliant. For me the main education I now get is from the community.
The software process is coming to the fore, and currently there's a focus on Agile; LabVIEW fits very nicely with the Agile Manifesto. In fact Kent Beck would explode with joy if he saw the inclusive way most LabVIEW projects are developed. The only thing I would like is for us to push our own manifesto; I strongly believe that LabVIEW is sufficiently different and rapid that it bends standard processes somewhat.
The focus in Europe seems to be more on looking into architectural issues and pulling them apart. Also, 6 of the presenters come from our user group CSLUG, which is something I'm pretty proud of.
I'll also doff my cap to the new CLD summits, I attended the latter part of the inaugural summit in Newbury and it was excellent.
Austin in March smells nice to my European nose, is it the Cedar?
I'm enjoying preparing my presentation this year; it's something a bit different again, and hopefully it will sit in the back of people's minds and niggle away at them. It's been a long time in gestation and explores some thoughts I've been mulling over for years now. The formalising of the concepts started here and in the LabVIEW Rocks section of our book. I really want to get a handle on why I found LabVIEW to be different from the other languages I've used.
So if you need an excuse to get certified, it's hard to get and easy to keep and you get to meet up with clever people in nice places, as a bonus you get your brain filled up with ideas.
So for everyone who is attending travel carefully and enjoy!
Lots of Love