The main purpose of Software Design is to break a large problem into smaller, more manageable problems. Up until now we've concentrated on established Computer Science design techniques and metrics like Coupling, Cohesion and various other principles. We also apply heuristics like "globals are bad" or "a block diagram should fit on a single page"...
I think there is a concept specific to graphical programming, and it's to do with block diagram time.
If you look at an SSDC design, it will consist of several loops, most of which do their work in a very short period of time.
1. Initialisation structure
2. Event Loop
3. UI Loop
4. State Machine Loop
5. Error Handling Loop
SSDC General Template
The only one that is ever busy is the state machine loop. This means that three of the loops can pretty much be ignored... The UI Queue loop only updates the display; if that works, it can be ignored. The event loop updates a display or fires off a transition. The error loop is only called on an error, which is then displayed on the front panel. One of the reasons they can be ignored is that what they are doing is so trivial...
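For readers more comfortable with text languages, here's a rough sketch of that multi-loop shape, with Python threads and queues standing in for LabVIEW's parallel loops and wires. All names are illustrative, and the event loop and state machine loop are collapsed into one for brevity; the point is that the UI and error loops are so trivial they can be ignored while debugging.

```python
import queue
import threading

# Queues stand in for the communication between loops (names are made up).
event_q = queue.Queue()   # feeds the state machine loop
ui_q = queue.Queue()      # feeds the UI loop
error_q = queue.Queue()   # feeds the error loop
display = []              # stands in for front-panel indicators

def ui_loop():
    # Trivial by design: pull a message, update the display, done.
    while (msg := ui_q.get()) is not None:
        display.append(msg)

def error_loop():
    # Only ever runs when an error arrives; it just reports it.
    while (err := error_q.get()) is not None:
        display.append(f"ERROR: {err}")

def state_machine_loop():
    # The only loop that is ever busy: everything interesting happens here.
    while (event := event_q.get()) is not None:
        if event == "fail":
            error_q.put("something went wrong")
        else:
            ui_q.put(f"handled {event}")
    ui_q.put(None)        # cascade the shutdown to the trivial loops
    error_q.put(None)

threads = [threading.Thread(target=f)
           for f in (ui_loop, error_loop, state_machine_loop)]
for t in threads:
    t.start()
for event in ("start", "fail", "stop"):
    event_q.put(event)
event_q.put(None)         # request shutdown
for t in threads:
    t.join()
```

Each of the helper loops handles one message in microseconds and then goes back to sleep on its queue, which is exactly why they drop out of the debugging picture.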
Straight away this simplifies debugging.
This can even be applied to the state machine part - breaking it into meaningful states that each do one thing in a predictable time period (i.e. don't put your massive diagram in a single running state and then do everything in there!).
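A minimal text-language sketch of that idea: each state is a small function that does one thing quickly and names the next state, so no single state becomes the "massive diagram". The state names and the pretend hardware read are assumptions for illustration.

```python
# Each state does one small, predictable thing and returns the next state.
def acquire(ctx):
    ctx["reading"] = 42            # pretend hardware read
    return "process"

def process(ctx):
    ctx["result"] = ctx["reading"] * 2
    return "report"

def report(ctx):
    print(ctx["result"])           # pretend front-panel update
    return "done"

STATES = {"acquire": acquire, "process": process, "report": report}

def run(start="acquire"):
    ctx, state = {}, start
    while state != "done":
        state = STATES[state](ctx)  # each call is short and predictable
    return ctx
```

Because each state runs in a bounded, short time, you can reason about any one of them in isolation instead of holding the whole machine in your head.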
One heuristic we always break is to do with local variables; LabVIEW programmers wire themselves in knots to avoid using them, which is a shame, because they are a tidy and descriptive way to get access to local data. We can break this rule because of these short time periods. The difficulty with accessing a local variable is knowing that the data you are acting on is relevant and timely. If you keep your accesses to a local variable within a short period of block diagram time, you make them more predictable and therefore easier to use.
This is where the concept of block diagram time equates nicely with the concept of cohesion; we've all seen the large block diagram that's doing lots of stuff. Here's a link where you can find plenty of examples.
They become complex because we cannot fit everything that's going on in the diagram in our heads. If a diagram encapsulates seconds of computer time, knowing what is happening at one point relative to another is very difficult. Breaking our block diagram into defined sections that run in microseconds allows our brains to keep up, and also simplifies certain operations to the point where we can ignore them.
I'm at a very early stage of thinking about this, but I think it may really help us understand why a design gets complex. The next part I need to get my head around is applying time-based interactions across an entire system.