I came across this document, and have adapted the design for JKI SMO and LabVIEW interfaces. This work was done as a proof-of-concept to test out some new features and tech.
The application is a simple Audio Controller, something like you might see on MS Teams where you can select your active Speaker and active Microphone in a video call. Given how much I'm using that screen nowadays, I thought it would make for a good real-world example to try to mimic.
I tried to keep this as SOLID as possible; let's see how I did...
Goals for this design:
The JKI SMO library, with some modifications to support messaging without having to subclass the parent SMO. This needed to be built into a PPL so that every dependent class shares the same parents (e.g. if I hadn't built this into a library, plugins loaded at runtime wouldn't have the same parent class as the SMOs built into the main executable).
This library provides the definition (abstract class) and interfaces for an audio device (as defined in my application; your definition of an audio device may differ, as this is just a contrived example...)
The abstract class Audio Plugin holds the data that is common to all audio devices (Name, Volume, and whether the device is muted). The write accessors use protected events to notify any subclasses (child SMOs) that the values have changed.
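Since LabVIEW code is graphical, here's a rough Python sketch of that idea (class and field names mirror the description above, but the callback mechanism is a stand-in for LabVIEW's protected user events): write accessors fire a change notification so child SMOs can react.

```python
from abc import ABC


class AudioDevice(ABC):
    """Sketch of the abstract Audio Plugin class: data common to all
    audio devices, plus change notification on the write accessors."""

    def __init__(self, name: str):
        self._name = name
        self._volume = 50
        self._muted = False
        self._listeners = []  # child SMOs subscribe here

    def subscribe(self, callback):
        # In the LabVIEW design this would be registering for a
        # protected user event; here it's a plain callback list.
        self._listeners.append(callback)

    def _notify(self, field, value):
        for cb in self._listeners:
            cb(field, value)

    @property
    def volume(self):
        return self._volume

    @volume.setter
    def volume(self, value):
        self._volume = value
        self._notify("volume", value)  # write accessor fires the event

    @property
    def muted(self):
        return self._muted

    @muted.setter
    def muted(self, value):
        self._muted = value
        self._notify("muted", value)
```

A subclass that subscribes will see every write go by, which is the behaviour the child SMOs rely on.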
In most places, this interface is used instead of the concrete class to isolate callers from the Audio Device.
This interface extends the basic Audio Interface and adds functionality specific to Audio In/Out devices (e.g. playing a stream). These interfaces also have a method with a default implementation that retrieves the Audio Device given an interface, which is necessary when launching/stopping the Device SMO. The default implementation simply type casts the interface to the device. It can be overridden to support the Composite Device, which is not an Audio Device itself but composes two audio devices.
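The "default implementation that casts to the device" pattern translates roughly like this in Python (names are illustrative; in Python the "cast" is just returning `self` after a type check):

```python
from abc import ABC


class AudioDevice(ABC):
    """Minimal stand-in for the abstract audio device class."""

    def __init__(self, name: str):
        self.name = name


class AudioOut:
    """Sketch of the Audio Out interface. The default implementation of
    audio_out_device assumes the implementer *is* an AudioDevice and
    simply type casts (here, checks the type and returns self)."""

    def audio_out_device(self) -> AudioDevice:
        assert isinstance(self, AudioDevice), "implementer is not an AudioDevice"
        return self

    def play_stream(self, stream) -> None:
        raise NotImplementedError


class Speaker(AudioDevice, AudioOut):
    """An ordinary device: the default cast works unchanged."""

    def play_stream(self, stream) -> None:
        print(f"{self.name} playing {len(stream)} samples")
```

For a plain device like `Speaker`, calling `audio_out_device()` just hands back the same object, so the controller can launch/stop the Device SMO through the interface alone.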
This library provides the definitions (abstract class) for some basic audio devices, like a speaker and a microphone.
The Microphone and Speaker are basic abstract classes that don't do much on their own, other than provide a layer of isolation between the Abstract Audio Device and the specific device implementations. They serve mainly as definitions and as a common place to anchor interfaces.
The Composite device is the interesting case here. It isn't actually an Audio Device on its own, but it does contain two devices inside it. The 'Audio Out Device' and 'Audio In Device' methods from the Audio In and Audio Out interfaces are overridden here and call the accessor methods 'Read Speaker' and 'Read Microphone' to retrieve the actual audio devices.
This library extends the basic devices and provides a simple simulated version with a UI that you can interact with to demonstrate that data can flow from the main application to the devices, and vice versa.
These are implemented as SMOs with a User Interface. When run, the UI shows a speaker and a microphone with some buttons to adjust the level and mute status.
The Simulated Headset overrides the Read Speaker and Read Microphone accessor functions and hard-wires the simulated speaker and simulated microphone. This one class satisfies both the Audio In and Audio Out interfaces, so it appears as two devices to the client application.
This library contains an interface that specifies all the functions an Audio Controller must implement, and provides the messages that call these functions to support asynchronous communication from devices to the controller.
The Main Controller implements all the functions of the Audio Controller interface, so it can act as an audio controller. Its UI allows a user to load devices and control the volume/mute status. The Main Controller allows one output and one input device to be active at a time.
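The "one active output, one active input" rule can be sketched as below. This is a hypothetical reduction of the controller's behaviour, not the real messaging API, and `FakeDevice` is a stand-in for a launched device SMO:

```python
class FakeDevice:
    """Illustrative stand-in for a running audio device SMO."""

    def __init__(self, name: str):
        self.name = name
        self.volume = 50


class AudioController:
    """Sketch of the Main Controller: at most one active output and one
    active input device at a time; commands target the active device."""

    def __init__(self):
        self.active_out = None
        self.active_in = None

    def select_output(self, device):
        # In the real design this would stop the previous device SMO and
        # launch the new one; here we just swap the reference.
        self.active_out = device

    def select_input(self, device):
        self.active_in = device

    def set_output_volume(self, volume):
        if self.active_out is not None:
            self.active_out.volume = volume
```

Selecting a new output simply replaces the active device, so volume/mute commands always land on whichever device is currently selected.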
Each library is built in its own project, and each project is added to a solution that chains the PPL builds together: