
Random Ramblings on LabVIEW Design


Function Point Analysis - The Emery Scale


Well Hello programming chums

At the 2015 European CLA Summit, Sacha Emery presented "Actor and benefits of different frameworks", where he described his journey through various design choices when applied to a standard set of requirements. This segued nicely into various topics being discussed throughout the summit, and I also selfishly hijacked it to promote one of my interests.

I'm after finding some way of measuring the complexity of a system; ideally I would like this to be something tangible, like function points. Function Point Analysis breaks a system down into five component types:

Data Functions:

Internal Logical Files

External Interface Files

Transactional Functions:

External Inputs

External Outputs

External Inquiries

[Figure: FPA.png]
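
For anyone who hasn't bumped into Function Point Analysis before, here's a minimal Python sketch of the standard unadjusted count. The weights are the commonly published IFPUG average-complexity values; the example counts are pure invention for illustration.

# Unadjusted Function Point count from the five component types.
# Weights are the published IFPUG "average complexity" values.
AVERAGE_WEIGHTS = {
    "internal_logical_files": 10,
    "external_interface_files": 7,
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
}

def unadjusted_function_points(counts):
    """Sum each component count multiplied by its weight."""
    return sum(AVERAGE_WEIGHTS[name] * n for name, n in counts.items())

# Invented example: a small measurement application.
example = {
    "internal_logical_files": 2,    # e.g. config store, results log
    "external_interface_files": 1,  # e.g. a shared instrument database
    "external_inputs": 5,
    "external_outputs": 4,
    "external_inquiries": 3,
}
print(unadjusted_function_points(example))  # 2*10 + 1*7 + 5*4 + 4*5 + 3*4 = 79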

The hard problem is how to identify these items within a LabVIEW Block Diagram (I haven't given up on this).

One of the advantages of function points is that you can get them from requirements and use them for estimation. My primary interest is not estimation tho'; I want a common measure of the complexity of a project so we can better assess and discuss our design choices. The disadvantage is that it is hard work, and a study takes a lot of time to undertake.

So how does Sacha's presentation link to this? Like all good test engineers, I like my processes to be calibrated, and I think this might be the perfect tool for the job. Here's how we think it will work (and I say we because this is based on conversations with Jack Dunaway, Sacha and various other luminaries).

We have a common set of requirements.

We apply a framework/design/architecture to provide a solution.

We tag all the project items that are original items.

We analyze all the items that are in the problem domain (i.e. separate the solution from the architecture).

We apply a change/addition in requirements.

We analyze the additional items that have been generated to satisfy the change.

This should give us several numbers:

Items in Basic Architecture (A)

Items to Solve the Problem (B)

Items to Make an Addition (C)*

And this is what we call the Emery scale.
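
To make the arithmetic concrete, here's a hypothetical Python sketch comparing A, B and C between two imaginary frameworks. The item counts are invented, and the two derived ratios are only a suggestion for how the numbers might be read, not a settled definition.

from dataclasses import dataclass

@dataclass
class EmeryScale:
    architecture_items: int  # A: items in the basic architecture
    solution_items: int      # B: items added to solve the problem
    change_items: int        # C: items added for the requirements change

    def overhead_ratio(self):
        """Scaffolding the framework demands per unit of solution (A/B)."""
        return self.architecture_items / self.solution_items

    def change_cost_ratio(self):
        """Cost of the change relative to the original solution (C/B)."""
        return self.change_items / self.solution_items

# Invented numbers for two imaginary solutions to the same requirements.
framework_x = EmeryScale(architecture_items=60, solution_items=90, change_items=12)
framework_y = EmeryScale(architecture_items=25, solution_items=110, change_items=30)

for name, s in [("Framework X", framework_x), ("Framework Y", framework_y)]:
    print(f"{name}: A/B = {s.overhead_ratio():.2f}, C/B = {s.change_cost_ratio():.2f}")

Read this way, Framework X pays more in up-front scaffolding but absorbs the change more cheaply, which is exactly the kind of trade-off the scale is meant to expose.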

The preconception is that, generally, the code to solve the problem should be pretty repeatable/similar from framework to framework.

So, dumping function points for the time being and going back to a simpler idea (care of Mr Dunaway): we tag the architectural parts and count the nodes using block diagram scripting. As Chris Relf has stated, there is some good data to be had from LOC (lines of code) comparisons in similar domains.
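
VI Scripting is graphical, so text can only gesture at the counting pass, but assuming it had already exported a per-VI node count (node_counts.csv) and a list of the VIs we tagged as architecture (architecture_vis.txt), both hypothetical formats, the tally might look something like this in Python:

import csv

def split_node_counts(counts_csv, architecture_tags):
    """Split total node counts into architecture vs. solution buckets."""
    with open(architecture_tags) as f:
        tagged = {line.strip() for line in f if line.strip()}

    architecture_nodes = solution_nodes = 0
    with open(counts_csv, newline="") as f:
        for vi_name, node_count in csv.reader(f):
            if vi_name in tagged:
                architecture_nodes += int(node_count)
            else:
                solution_nodes += int(node_count)
    return architecture_nodes, solution_nodes

a, b = split_node_counts("node_counts.csv", "architecture_vis.txt")
print(f"architecture: {a} nodes, solution: {b} nodes")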

And what use is all this work? For one, it should give us an estimate of the number of VIs our design will have on a given target (the ratio of B to C, against the number of requirements, should give us a size). This is an essential bit of design information.
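
Purely as an illustration, with invented numbers: if B came out at 120 items for a 10-requirement specification, that's roughly 12 items per requirement, so a comparable 25-requirement project should land somewhere near 300 items, on top of whatever the chosen architecture costs as a baseline (A).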

These are early stages and it's a long-term project. It could be part of my CS thesis, if I were doing one...

This is only one potential benefit of the kind of baselining that Sacha has started (from an educational perspective, seeing the same problem solved with different frameworks will be invaluable).

Thoughts appreciated

Love Steve

*A late addition to give us some concept of how a change affects the project
