On Measuring Software Design

Good design leads to good engineering quality and to products that are useful, fun to use, simple, efficient and safe. But what is a good design? Is it purely subjective and a matter of opinion, or are there specific attributes of a design that lend themselves to inspection and measurement?

Why measure?

Measuring helps us quantify something at a point in time. Measuring a design should let us quantify design quality against the key challenges we want that design to solve. Quite a few design quality assessments evaluate attributes such as usability, accessibility, safety and even maintainability to validate a design, or to compare and select from a pool of design choices.

Continuous Measurement

Measuring and evaluating something regularly gives us a view of change over time, so we can look for trends and evolve and adapt better. Regular, continuous evaluation of software design, especially for integrated software, is a good practice: it provides insight into trends in the design's inputs and constraints, leading to proactive change rather than reactive change.

For example, regular reviews can reveal a rapidly changing ecosystem (number of users, volume of data, responsiveness of dependent services, etc.) so that the design can be tweaked to meet the new demands and the software adapted proactively. Learning this from errors or customer complaint tickets would be reactive and provides a negative customer experience.
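As a minimal sketch of that idea, the metrics captured at each review can be turned into a simple trend check. The metric names, numbers and threshold below are all hypothetical, purely for illustration:

```python
# Illustrative sketch: detect a trend in ecosystem metrics captured at each
# design review, so adaptation can be proactive rather than reactive.
# Metric names, values and the 25% threshold are hypothetical.

reviews = [
    {"quarter": "Q1", "users": 10_000, "p99_latency_ms": 120},
    {"quarter": "Q2", "users": 14_000, "p99_latency_ms": 135},
    {"quarter": "Q3", "users": 21_000, "p99_latency_ms": 160},
]

def growth_rate(series):
    """Average per-period relative growth of a metric."""
    rates = [(b - a) / a for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

user_growth = growth_rate([r["users"] for r in reviews])
if user_growth > 0.25:  # arbitrary threshold for this sketch
    print(f"Users growing ~{user_growth:.0%} per quarter - revisit capacity assumptions")
```

A review that flags this trend lets the team rework the design before the growth shows up as incidents.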

Attributes to measure

Given the desire to measure design, I started with personal experience and noted down what I looked for in a good design. My initial list took shape as I put myself into a product owner’s shoes and thought about software products in the long term. Here is my list as I noted them down over the years:

  1. Functional: Does what it was asked to do
  2. Accessible: Users or systems can access and use it without requiring additional tools
  3. Reliable: Does what it needs to do continuously and predictably
  4. Reasonable Resource Usage: Achieves the desired outcome using a reasonable amount of resources
  5. Simple to Understand: Design is simple to comprehend without requiring additional details
  6. Unequivocal: There is no ambiguity, so the design is not interpreted differently in changing contexts
  7. Safe: Does no harm to the consumer, the provider or others in the ecosystem
  8. Sustainable: Does not cause long-term harm to its ecosystem, so that it remains strategically viable
  9. Implementable: The resources and materials we have can be used to implement this design in a reasonable amount of time
  10. Maintainable and Operable: The solution from the design is easy to operate and maintain
  11. Interoperable: This is key for distributed software, as we want faster integration, and interoperability is key to good interface design
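One lightweight way to apply this list is to score each attribute on a fixed scale at every review and track the result over time. The sketch below assumes a 1–5 scale and hypothetical scores; neither comes from any formal method:

```python
# Sketch: score a design against the attributes above on a 1-5 scale.
# The scale and the example scores are hypothetical illustrations.

ATTRIBUTES = [
    "Functional", "Accessible", "Reliable", "Reasonable Resource Usage",
    "Simple to Understand", "Unequivocal", "Safe", "Sustainable",
    "Implementable", "Maintainable and Operable", "Interoperable",
]

def score_design(scores: dict) -> float:
    """Return the mean score across all attributes (missing ones count as 0)."""
    return sum(scores.get(attr, 0) for attr in ATTRIBUTES) / len(ATTRIBUTES)

example = {attr: 4 for attr in ATTRIBUTES}
example["Interoperable"] = 2  # e.g. the interface design still needs work
print(f"Design score: {score_design(example):.2f} / 5")
```

Keeping the per-attribute scores (rather than only the mean) makes it obvious which quality is dragging the design down between reviews.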

What about industry standards? ISO/IEC 25010 [1] is the standard for Software Product Quality and defines a similar set of quality characteristics.

Figure: the product quality model (source: ISO/IEC 25010)

Process of measuring

The how of measuring can be achieved through self-introspection after designing, or through open channels such as assessment via pull request (you version-control your design, right?), internal surveys or, in some cases, committees.

The pull-request (PR) and survey processes are great because they help us socialise the design with the team (engineering, operations, business analysts, etc.) and seek input to evolve it early. We just need to be careful to time-box the exercise.

Measuring attributes using a survey

Formal committees can use formats such as the one in [2], which assesses the design attributes against program complexity.

Measure the constraints and environment with the design

The attributes above consider only the design, not the problem itself. If we keep a snapshot of our needs over time and compare the design against it, could we perhaps paint a more accurate picture of the evolving ecosystem?


Measuring the environment is key when the design remains static, so that we can proactively evaluate potential opportunities or threats.
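One way to make that concrete is to record the design's assumed constraints alongside the design itself, then compare each environment snapshot against them. This is only a sketch, and the field names, limits and headroom threshold are all hypothetical:

```python
# Sketch: record the problem constraints alongside the design so that
# environment drift is visible even when the design has not changed.
# Field names, limits and the 80% headroom threshold are hypothetical.

design_assumptions = {"max_users": 50_000, "max_req_per_sec": 200}

def check_drift(observed: dict, assumptions: dict, headroom: float = 0.8):
    """Flag any observed value that has consumed `headroom` of its design limit."""
    alerts = []
    for key, limit in assumptions.items():
        value = observed.get(key, 0)
        if value > headroom * limit:
            alerts.append(f"{key}: observed {value} vs design limit {limit}")
    return alerts

for alert in check_drift({"max_users": 46_000, "max_req_per_sec": 90},
                         design_assumptions):
    print(alert)
```

Running this at each review turns "the design hasn't changed" into a checked claim rather than an assumption.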


Measuring design is possible if you start with a set of functional and non-functional attributes and score your design against them. Measuring continuously, and measuring the ambient environment, is key to knowing the efficacy of a design.

We have not talked about measuring distributed systems designs, as these contain inherent complexity due to the graph of connections, the operations per connection (commands, queries and events), the sent/received attributes and the data mapping.

In a future post we will cover measuring distributed systems design.


[1] ISO/IEC 25010

[2] Blundell, J. K. et al., “The Measurement of Software Design Quality”

[3] https://link.springer.com/article/10.1023/A:1018914711050

[4] Liskov, B. and Guttag, J., “Program Development in Java: Abstraction,
Specification and Object-Oriented Design”, 2000, Addison-Wesley.

[5] van Vliet, H. “Software Engineering: Principles and Practice (2nd Edition)”
Wiley, 1999

[6] Budgen, D. “Software Design”, 1994

[7] Pirsig, R. M., “Zen and the Art of Motorcycle Maintenance : An Inquiry
into Values”, 1974, William Morrow & Company
