Predictive Models of Development Teams and the Systems They Build

Rob Smallshire from Good With Computers

In 1968 Melvin Conway pointed out a seemingly inevitable symmetry between organisations and the software systems they construct. Organisations today are more fluid than 40 years ago, with short developer tenure, and frequent migration of individuals between projects and employers. In this article we’ll examine data on the tenure and productivity of programmers and use this to gain insight into codebases, by simulating their growth with simple stochastic models. From such models, we can make important predictions about the maintainability and long-term viability of software systems, with implications for how we approach software design, documentation and how we assemble teams.

Legacy systems

I've always been interested in legacy software systems, primarily because legacy software systems are those which have proven to be valuable over time. The reason they become legacy – and get old – is because they continue to be useful.

I have an interest in that as a software engineer, having worked on legacy systems as an employee for various software product companies, and more recently as a consultant with Sixty North, helping out with the problems that such systems inevitably raise.

I call myself a "software engineer", although I use the term somewhat loosely. To call what many developers do "engineering" is a bit of a stretch. Engineer or not, my academic training was as a scientist, which is perhaps reflected in the content of this article. Most readers will be familiar with the structure of the scientific method: We ask questions. We formulate hypotheses which propose answers to those questions. We design experiments to test our hypotheses. We collect data from the experiments. And we draw conclusions from the data. This done, having learned something about how our world works, we go round again.

I would like to be able to apply this powerful tool to what we do as "software engineers" or developers. Unfortunately for our industry, it's very difficult to do experimental science - still less randomised controlled trials – on the process of software development, for a whole host of reasons: Developers don't like to be watched. We can't eliminate extraneous factors. The toy problems we use in experiments aren't realistic. No two projects are the same. The subjects are often students who have little experience.

Kids in a science experiment.

Even on the rare occasions we do perform experiments, there are many threats to the validity of such experiments, so the results tend not to be taken very seriously. Addressing the weaknesses of the experimental design would be prohibitively expensive, if possible at all.

The role of models

Fortunately, there's another way of doing science, which doesn't rely on the version of the scientific method just mentioned. It's the same type of science we do in astronomy or geology, where we can't run experiments because we don't have enough time, we don't have enough money, or we just don't have anywhere big enough to perform the experiment. Experimentally colliding galaxies, or experimenting with the initiation of plate tectonics, are simply in the realms of science fiction on the money, time and space axes.

In such cases, we have to switch to a slightly different version of the scientific method, which looks like this: We make a prediction about how the universe works, where our 'universe' could be galactic collisions, or the more prosaic world of software development. We then make a model of that situation either through physical analogy or in a computer. By executing this model we can predict the outcome based on the details of a specific scenario. Lastly, we compare the results from the model with reality and either reject the model completely if it just doesn't work, or tune the model by refining, updating or tweaking it, until we have a model that is a good match for reality.

The aim here is to come up with a useful model. Models are by their very nature simplifications or abstractions of reality. So long as we bear this in mind, even a simple (although not simplistic) model can have predictive power.

Essentially, all models are wrong, but some are useful

—George E. P. Box [Box, G. E. P., and Draper, N. R., (1987), Empirical Model Building and Response Surfaces, John Wiley & Sons, New York, NY.]

A model of software development

A key factor in modelling the development of legacy software systems is the fact that although such systems may endure for decades - we developers tend not to endure them for decades. In other words, the tenure of software developers is typically much shorter than the life span of software systems.

But how much shorter?

Whenever I speak publicly on this topic with an audience of developers, I like to perform a simple experiment with my audience. My assumption is that the turnover of developers can be modelled as if developers have a half-life within organizations. The related concept of residence time [1] is probably a better approach, but most people have a grasp of half-life, and it avoids a tedious digression into explaining something that is ultimately tangential to the main discussion. In any case, a catchy hook is important when you're going for audience participation, so half-life it is.

I start by asking everyone in the audience who has moved from working on one codebase – for example a product – to another (including the transition to working on their first codebase), at any time in the preceding 32 years, to raise their hands. This starting point is intended to catch the vast majority of typical tech conference audience members, and indeed it does, although arguably in the spirit of inclusiveness I should start with 64 years. At this point, everyone in the audience has a hand raised.

Next I ask the audience to keep their hands raised if this is still true for 16 years: Most of the hands are still raised. Now eight years: Some hands are going down. Now four years: A small majority of hands are still raised. Then two years: At this point, a large minority still have raised hands, but we have crossed the half-way threshold and established that the 'half-life' of developers is somewhere between two and four years. This result has been consistent on over 90% of the occasions I've performed this experiment. The only notable deviation was a large Swedish software consultancy where we established the half-life was around six months!

In fact, what little research there has been into developer tenure indicates that the half-life across the industry is about 3.2 years, which fits nicely with what I see out in the field.

One way to think about this result concretely is as follows: If you work on a team that numbers ten developers in total, you can expect half of them - five in this case - to leave at some point in the next 3.2 years. Obviously, if the team size is to remain constant, they will need to be replaced.

Note that saying that turnover of half of a population of developers will take 3.2 years is not the same as claiming that the average tenure of a developer is 3.2 years. In fact, mean tenure will be \(3.2 / \ln 2\), which is about 4.6 years. You might want to compare that figure against your own career so far.

If you're concerned that developers don't behave very much like radionuclides, then rest assured that the notion of half-life follows directly from an assumption that the decay of a particle (or the departure of a developer) follows exponential decay, which in turn follows from the notion of a constant probability density with respect to time. All we're saying is that in a given time interval there is a fixed probability that a particle will decay (or a developer will depart), so it is actually a very simple model.
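
For the simulation-minded, here is a minimal sketch of that constant-probability model in Python, using the 3.2 year half-life discussed above; the use of calendar days and the cohort size are assumptions made purely for illustration:

    import random

    HALF_LIFE_YEARS = 3.2          # industry figure discussed above
    DAYS_PER_YEAR = 365            # calendar days; an assumption for this sketch
    HALF_LIFE_DAYS = HALF_LIFE_YEARS * DAYS_PER_YEAR

    # Choose a constant per-day departure probability p so that the chance of
    # still being present after the half-life is exactly one half:
    # (1 - p) ** HALF_LIFE_DAYS == 0.5
    p_leave = 1 - 0.5 ** (1 / HALF_LIFE_DAYS)

    def tenure_in_days():
        """Flip the same biased coin every day until the developer departs."""
        days = 0
        while random.random() >= p_leave:
            days += 1
        return days

    tenures = [tenure_in_days() for _ in range(10_000)]
    remaining = sum(1 for t in tenures if t >= HALF_LIFE_DAYS) / len(tenures)
    mean_years = sum(tenures) / len(tenures) / DAYS_PER_YEAR

    print(f"still present after {HALF_LIFE_YEARS} years: {remaining:.0%}")  # about 50%
    print(f"mean tenure: {mean_years:.1f} years")  # about 3.2 / ln 2, i.e. 4.6 years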

Notably, the half-life of developers is shorter than the half-life of almost anything else in our industry, including CEOs, lines of code, megacorps or classes.

Half-lives of various entities

Half-lives in years of various entities in and around the software industry. Developers are one of the most short-lived.

Productivity

If we're going to have a stab at modelling software developers as part of the software development process, we're going to need some measure of productivity. I'm going to use - and you can save your outrage for later - lines of code. To repurpose a famous phrase by Winston Churchill: "Lines of code is the worst programmer productivity metric, except for all the others". Now I know, as well as you do, that what ultimately matters to the customers of software systems is value for money, return on investment, and all those other good things. The problem is that it's notoriously hard to tie any of those things back in a rigorous way to what individual developers do on a day-to-day basis, which should be designing and coding software systems based on an understanding of the problem at hand. On the other hand, I think I'm on fairly safe ground in assuming that software systems with zero lines of code deliver no value, and that proportionally more complex problems can be solved (and hence more value delivered) by larger software systems.

Furthermore, there's some evidence that the number of lines of code cut by a particular developer per day is fairly constant irrespective of which programming language they're working in. So five lines of F# might do as much 'work' as 10 lines of Python or 20 lines of C++. This is an alternative phrasing of the notion of 'expressiveness' in programming languages, and it is why we tend to feel that expressiveness - or semantic density - is important: we can often deliver as much value with 5 lines of F# as with 20 lines of C++, yet it will take a quarter of the effort to put together.

Now, however you choose to measure productivity, not all developers are equally productive on the same code base, and the same developer will demonstrate different productivity on different code bases, even if they are in the same programming language. In fact, as I'm sure many of us have experienced, the principal control on our productivity is simply the size of the code base at hand. Writing a hundred lines of code for a small script is usually much less work than adding a hundred lines to a one million line system.

We can capture this variance by looking to what little literature there is on the topic [2], and using this albeit sparse data to build some simple developer productivity distributions.

For example, we know that on a small 10,000 line code base, the least productive developer will produce about 2000 lines of debugged and working code in a year, the most productive developer will produce about 29,000 lines of code in a year, and the typical (or modal) developer will produce about 3200 lines of code in a year. Notice that the distribution is highly skewed toward the low productivity end, and that the multiple between the typical and most productive developers corresponds to the fabled 10x programmer.

Given only these three numbers, and in the absence of any more information on the shape of the distribution, we'll follow a well-trodden path and use them to erect a triangular probability density function (PDF) characterised by the minimum, modal and maximum productivity values. From this PDF it's straightforward to compute the corresponding cumulative distribution function (CDF), which we can use to construct simulated "teams" of developers by transforming uniformly distributed samples on the cumulative probability axis into samples on the productivity axis. In a real simulation where we wanted to generate many typical teams, we would generate uniform random numbers between zero and one and transform them into productivity values using the CDF, although for clarity in the illustration that follows, I've used evenly distributed samples from which to generate the productivity values.

Programmer productivity

Programmer productivity in lines of code per year for a team of ten developers on a 10,000 line project.

As you can see, the resulting productivity values for a team of ten developers cluster around the modal productivity value, with comparatively few developers of very high productivity.
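
Here is a short sketch, in Python, of that inverse-transform step; the minimum, mode and maximum are the figures quoted above for a 10,000 line code base, and the random draws stand in for the evenly spaced samples used in the illustration:

    import math
    import random

    # Productivity on a 10,000 line code base, in debugged lines of code per year:
    # the minimum, mode and maximum of the triangular PDF described above.
    LOW, MODE, HIGH = 2_000, 3_200, 29_000

    def productivity_from_quantile(u, low=LOW, mode=MODE, high=HIGH):
        """Inverse CDF of the triangular distribution: map a cumulative
        probability u in [0, 1] to a productivity value."""
        pivot = (mode - low) / (high - low)
        if u < pivot:
            return low + math.sqrt(u * (high - low) * (mode - low))
        return high - math.sqrt((1.0 - u) * (high - low) * (high - mode))

    # Draw a simulated team of ten by pushing uniform samples through the CDF.
    # (The standard library's random.triangular(LOW, HIGH, MODE) does the same job.)
    team = [productivity_from_quantile(random.random()) for _ in range(10)]
    print(sorted(round(p) for p in team))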

Perhaps more intuitively, a software development team comprising ten developers looks like this:

Programmer productivity as circles

A typical team of ten developers would look like this, if their contributions in lines of code were represented as circular areas.

This typical team has only a couple of people responsible for the majority of the output. Again, it might be interesting to compare this to your own situation. At the very least, it shows how the 'right' team of two developers can be competitive with a much larger team; a phenomenon you may have witnessed for yourselves.

Overall, this team produces about 90,000 lines of code in a year.

Incorporating growth of software

Of course, the story doesn't end there. Once our team has written 90,000 lines of code, they're no longer working on a 10,000 line code base, they're working on a 100,000 line code base! This causes their productivity to drop, so we now have a modified description of their productivities and a new distribution from which to draw a productivity if somebody new joins the team. But more of that in a moment. We don't have much in the way of published data for productivity on different sizes of code base, but we can interpolate between and extrapolate from the data we do have, without any of the assumptions involved in such extrapolation looking too outlandish. As you can see, we can put three straight lines through the minimums, modes and maximums respectively to facilitate determination of a productivity distribution for any code size up to about 10 million lines of code. (Note that we shouldn't infer from these straight lines anything about the mathematical nature of how productivity declines with increasing code size - there be dragons! [3])

Productivity against codebase size

Productivity is a function of codebase size. Developers are dramatically less productive on larger bodies of code.
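
One way to turn those three straight lines into a usable distribution for any code base size is to interpolate the minimum, mode and maximum in log-log space. In the sketch below, the 10,000 line figures are the ones quoted earlier, while the 10 million line figures are hypothetical stand-ins for the published data, included only to show the mechanics:

    import math

    # (codebase size, (min, mode, max) lines of code per year)
    SMALL = (10_000, (2_000, 3_200, 29_000))      # figures quoted earlier
    LARGE = (10_000_000, (200, 600, 5_000))       # hypothetical illustrative anchor

    def distribution_for(codebase_size):
        """Interpolate (min, mode, max) productivity for a given codebase size
        by fitting straight lines in log-log space between the two anchors."""
        (s0, p0), (s1, p1) = SMALL, LARGE
        size = min(max(codebase_size, s0), s1)     # stay between the anchors
        t = (math.log(size) - math.log(s0)) / (math.log(s1) - math.log(s0))
        return tuple(
            math.exp((1 - t) * math.log(a) + t * math.log(b))
            for a, b in zip(p0, p1)
        )

    # The distribution our team of ten will face after their first year of work:
    print([round(x) for x in distribution_for(100_000)])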

When performing a simulation of growth of software in the computer, we can get more accurate results by reducing the time-step on which we adjust programmer productivity downwards from once per year as in the example above, to just once per day: At the end of every simulated day, we know how much code we have, so we can predict the productivity of our developers on the following day, and so on.

Incorporating turnover

We've already stated our assumption that the probability of a developer departing is constant per unit time, together with our half-life figure of 3.2 years. Given this, it's straightforward to compute the probability of a developer leaving on any given day, which is about 0.001, or once in every thousand days. As we all know, when a particular developer leaves an organisation and is replaced by a new recruit, there's no guarantee that their replacement will have the same level of productivity. In this event, our simulation will draw a new developer at random from the distribution of developer productivity for the current code base size, so it's likely that a very low productivity developer will be replaced with a higher productivity developer and that a very high productivity developer will be replaced with a lower productivity developer; an example of regression to mediocrity. [4]
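
The arithmetic behind that daily figure is a one-liner; the exact value depends on whether you count calendar or working days, and a figure of roughly 0.001 is consistent with counting only working days (the 230 below is an assumption):

    HALF_LIFE_YEARS = 3.2
    WORKING_DAYS_PER_YEAR = 230    # an assumption; calendar days give a smaller daily rate

    half_life_days = HALF_LIFE_YEARS * WORKING_DAYS_PER_YEAR

    # Constant daily departure probability p such that (1 - p) ** half_life_days == 0.5
    p_leave_per_day = 1 - 0.5 ** (1 / half_life_days)
    print(f"{p_leave_per_day:.5f}")   # roughly one chance in a thousand each working day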

Simulating a project

With the variance in developer productivity, its relationship to code base size, and a simple model of developer turnover in hand, we're ready to run a simulation of a project. To do so, we initialize the model with the number of developers in the development team, and set it running. The simulator starts by randomly drawing a team of developers of the required size from the productivity distribution for a zero-size code base, and computes how much code they will have produced after one day. At the end of the time step, the developer productivities are updated to the new distribution; each developer's quantile within the distribution remains fixed, but the underlying CDF is updated to yield a new productivity value. The next time step for day two then begins, with each developer producing a little less code than on the previous day.

On each day, there is a fixed probability that a developer will leave the team. When this occurs, they are replaced the following day by a new hire whose productivity is drawn anew from the productivity distribution. For small teams, this can shift the overall team productivity significantly, and more often than not towards the mean.
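
Putting the pieces together, a stripped-down version of the whole simulation might look like the sketch below. The half-life and the small-code-base productivity figures are those given above; the working-day count and the large-code-base anchor are illustrative assumptions, so the absolute numbers it prints will not match the charts that follow:

    import math
    import random

    HALF_LIFE_YEARS = 3.2
    WORKING_DAYS_PER_YEAR = 230                                  # assumption
    P_LEAVE = 1 - 0.5 ** (1 / (HALF_LIFE_YEARS * WORKING_DAYS_PER_YEAR))

    # (codebase size, (min, mode, max) lines per year); the second anchor is hypothetical.
    ANCHORS = [(10_000, (2_000, 3_200, 29_000)), (10_000_000, (200, 600, 5_000))]

    def params_for(size):
        """Interpolate (min, mode, max) in log-log space between the anchors."""
        (s0, p0), (s1, p1) = ANCHORS
        size = min(max(size, s0), s1)
        t = (math.log(size) - math.log(s0)) / (math.log(s1) - math.log(s0))
        return [math.exp((1 - t) * math.log(a) + t * math.log(b)) for a, b in zip(p0, p1)]

    def productivity(quantile, size):
        """Inverse triangular CDF: a developer keeps their quantile as the code grows."""
        low, mode, high = params_for(size)
        pivot = (mode - low) / (high - low)
        if quantile < pivot:
            return low + math.sqrt(quantile * (high - low) * (mode - low))
        return high - math.sqrt((1 - quantile) * (high - low) * (high - mode))

    def simulate(team_size=7, years=5):
        quantiles = [random.random() for _ in range(team_size)]  # one fixed quantile each
        loc, contributors = 0.0, team_size
        for _ in range(int(years * WORKING_DAYS_PER_YEAR)):
            # Everyone adds one day's worth of code at today's codebase size.
            loc += sum(productivity(q, loc) / WORKING_DAYS_PER_YEAR for q in quantiles)
            # Each developer has the same small daily chance of leaving; a leaver is
            # replaced by a fresh draw from the distribution (regression to mediocrity).
            for i in range(team_size):
                if random.random() < P_LEAVE:
                    quantiles[i] = random.random()
                    contributors += 1
        return loc, contributors

    loc, contributors = simulate()
    print(f"{loc:,.0f} lines of code from {contributors} contributors over five years")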

Let's look at an example: If we configure a simulation with a team of seven developers, and let it run for five years, we get something like this:

Streamed code contributions

Streamed code contributions of a team of seven developers over five years. A total of 19 people contribute to this codebase.

This figure has time running from left to right, and the coloured streams show the growing contributions over time of individual developers. We start on the left with no code and the original seven developers in the team, from top to bottom sporting the colours brown, orange, green, red, blue, purple and yellow. The code base grows quickly at first, but soon slows. About 180 days into the project the purple developer quits, indicated by a black, vertical bar across their stream. From this point on, their contribution remains static and is shown in a lighter shade. Vertically below this terminator we see a new stream beginning, coloured pink, which represents the contribution of the recruit who is purple's replacement. As you can see, they are about three times more productive (measured in lines of code at least) than their predecessor, although pink sticks around for only about 200 days before moving on and being replaced by the upper blue stream.

In this particular scenario, at the end of the five year period, we find our team of seven has churned through a grand total of 19 developers. In fact, the majority of the extant code was written by people no longer with the organisation; only 37% of the code was programmed by people still present at the end. This is perhaps motivation for getting documentation in place as systems are developed, while the people doing the development are still around, rather than at the end of the effort - if at all - as is all too common.

Monte Carlo simulation

Being drawn randomly, each scenario such as the one outlined above is different, although in aggregate they vary in a predictable way according to the distributions we are using. The above scenario was typical, insofar as it produced, compared to all identically configured simulations, an average amount of code, although it did happen to get through a rather high number of developers. Of course, individual scenarios such as this, although interesting, can never be indicative of what will actually happen. For that, we need to turn to Monte Carlo modelling: run many thousands of simulations - all with configurations drawn randomly from identical distributions - and look at the results in aggregate, either graphically or using various statistical tools.

When we run 1000 simulations of a seven person project over three years, the following statistics emerge: we can expect our team of seven to see four people leave and be replaced during the project. In fact, the total number of contributors will be 11 ± 2 at one standard deviation (1σ). The total body of code produced in three years will be 157,000 ± 23,000 lines @ 1σ. The proportion of the code written by contributors present at the end will be 70% ± 14% @ 1σ.

Perhaps a more useful question to ask is "How long is it likely to take to produce 100,000 lines of code?" By answering this question for each simulation, we can build a histogram (actually we use a kernel density estimate here, to give a smooth, rather than binned, result).

Time to deliver 100,000 lines of code

How long does it take a team of seven to deliver one-hundred thousand lines of code?

Although this gives a good intuitive sense of when the team will reach the 100 k threshold, a more useful chart is the cumulative distribution of finishing time, which allows us to easily recognise that while there is a 20% probability of finishing within 330 days, for a much more secure 80% probability we should allow for 470 days - some 42% longer and correspondingly more costly.

Cumulative distribution of 100,000 LOC delivery times

Cumulative distribution function showing probability of delivery of one-hundred thousand lines of code before a particular day. Based on 10,000 simulations.
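
Reading such percentiles off an empirical distribution is straightforward once you have the per-run results: sort them and index in. The sketch below demonstrates the technique with a deliberately simplified per-run model (fixed productivities, no turnover, no slowdown with code base size), so its absolute numbers will not reproduce the 330 and 470 day figures above:

    import math
    import random
    import statistics

    def days_to_100k(team_size=7, working_days_per_year=230):
        """A crude stand-in for one full simulation run: draw a team with fixed
        productivities and ask how many days they need to reach 100,000 lines."""
        daily_total = sum(random.triangular(2_000, 29_000, 3_200) / working_days_per_year
                          for _ in range(team_size))
        return math.ceil(100_000 / daily_total)

    runs = sorted(days_to_100k() for _ in range(10_000))
    mean, sd = statistics.mean(runs), statistics.stdev(runs)

    # Percentiles come straight off the sorted results (the empirical CDF).
    p20, p80 = runs[int(0.2 * len(runs))], runs[int(0.8 * len(runs))]
    print(f"mean {mean:.0f} ± {sd:.0f} days; 20% chance by day {p20}, 80% chance by day {p80}")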

Finally, looking at the proportion of the code base that was, at any time, written by the current team, we see an exponential decline in this fraction, leaving us with a headline figure of 20% after 20 years.

Proportion of code written by current team

The proportion of code written by the current team. In other words, for how much of your code can you easily talk to the author? Blue and green lines show plus and minus one standard deviation around the mean. Based on 10,000 simulations.

That's right, on a 20 year old code base only one fifth of the code will have been created by the current team. This resonates with my own experience, and quantitatively explains why working on large legacy systems can be a lonely, disorienting and confusing experience.

A refinement of Conway's Law?

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure

—Melvin Conway [5]

This remark, which has become known as Conway's Law, was later interpreted, a little more playfully, by Eric Raymond as "If you have four groups working on a compiler, you'll get a 4-pass compiler". My own experience is that Conway's Law rings true, and I've often said that "Conway's Law is the one thing we know about software engineering that will still be true 1000 years from now".

However, over the long-running development efforts which lead to large legacy software systems, the structure and organisation of the system isn't necessarily congruent with the organisation as it stands at present. After all, we all know that reorganisations of people are all too frequent compared to major reorganisations of software! Rather, the state of a system reflects not only the organisation, but the organisational history and the flow of people through those organisations over the long term. What I mean is that the structure of the software reflects the organisational structure integrated over time.

Simulations such as those presented in this article allow us to get a sense of how the large software systems we see today are the fossilised footprints of developers past. Perhaps we can use this improved, and quantitative, understanding to improve the planning, costing and ongoing guidance of large software projects.

[1]Residence time article on Wikipedia.
[2]COCOMO II: COnstructive COst MOdel II
[3]Contrary to popular conception, straight lines on log-log plots don't necessarily indicate power-law relationships. See the excellent So You Think You Have a Power Law — Well Isn't That Special?
[4]Francis Galton (1886). "Regression towards mediocrity in hereditary stature". The Journal of the Anthropological Institute of Great Britain and Ireland (The Journal of the Anthropological Institute of Great Britain and Ireland, Vol. 15) 15: 246–263.
[5]Melvin Conway (1968) How do Committees Invent?

The Primacy of Testability: Modularity

Austin Bingham from Good With Computers

In the first post in this series I set the stage for a discussion of how testability can serve as a proxy or enabler for other, more directly desirable qualities in a software system. In this post I'd like to look at the first such quality, modularity.

Modularity is perhaps the most obvious quality to correlate with testability. Modularity is a bit like motherhood and apple pie, and we generally recognize it as a "good thing." Most good developers instinctively foster modularity and reject non-modular designs, both because they've been taught that modularity is good and because they've learned through experience that it's important for many, many reasons.

What do we mean by modularity? A simple definition of modularity is "breaking designs into pieces", but that obviously misses important points. We don't just break a system into arbitrary pieces; rather, we create modules with specific qualities in order to support and foster the goals of whatever it is we're building. A typical (and important) goal with modules is to have high internal cohesion and low external coupling. A highly cohesive module is one in which all of the module elements are related, work together, and are necessary for performing the work that the module does. A module with low coupling is one that doesn't rely unnecessarily or too extensively on other modules.

There are plenty of ways to discuss and even measure modularity in software designs [1], but for our purposes the basic concepts of cohesion and coupling suffice. So how does modular code - code structured as modules with high internal cohesion and low external coupling - correlate with testability? The short answer is that modular code is generally very testable, so let's see how and why.

One way to see the relationship between testability and modularity is to recognize that modular code, by virtue of its low coupling, is easier to instantiate and execute in isolation. The fewer things (classes, services, whatever) that a module needs, the less work it is to write tests for it. That is, you simply have to type less for each test. Likewise, tests for modules with fewer external dependencies often execute more quickly - at least marginally so - because there's less stuff to do for each test. The result is that developers can run the tests more often and will be more inclined to do so.

If you use mocks in your testing system, low coupling means that you generally have to design fewer mock objects (because your modules have fewer dependencies that need mocking). Moreover, your mocks will often be simpler because they won't have to simulate interactions between far-flung dependencies.
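
As a small, hypothetical illustration in Python (the names are invented for the example), a function that receives its single collaborator as an argument needs only one simple mock, and the assertions stay focused on the behaviour under test:

    from unittest.mock import Mock

    # A loosely coupled module: it depends on exactly one collaborator, passed in.
    def build_report(repository):
        orders = repository.fetch_orders()
        return {"count": len(orders), "total": sum(order["amount"] for order in orders)}

    def test_build_report():
        # Low coupling means a single small mock is enough to exercise the module.
        repository = Mock()
        repository.fetch_orders.return_value = [{"amount": 10}, {"amount": 32}]

        report = build_report(repository)

        assert report == {"count": 2, "total": 42}
        repository.fetch_orders.assert_called_once_with()

    test_build_report()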

The relationship between modularity and testability goes even deeper, though. Beyond the somewhat mechanical questions of instantiation times, test length, and so forth, there are serious cognitive problems associated with testing poorly-modularized code. The first of these problems is that your tests can end up testing the wrong thing. If a test for some bit of functionality has to invoke code in other modules, the success or failure of the test depends not just on the code under test but on all of those other modules. On some level this is unavoidable [2], but unnecessary dependencies unnecessarily increase the risk that the results you're seeing for your tests are caused by faults in other modules. These kinds of external failures take time to debug, can distort tests as developers try to work around them, or - in the case of incorrectly passing tests - can mask actual defects. [3] Ideally, the output from your tests should give you precise and unambiguous information about just the element under test, and reducing coupling helps developers approach that ideal.

A second way that poorly modularized code increases cognitive load for test developers is that it increases the state space with which a test developer needs to contend. For every external element that plays a role in a module's functionality, the state space that needs to be considered for testing grows to include the potential states of that element. For example, if a test relies on some global state managed by an external module - perhaps something like an application object managed by the GUI library - then test developers need to ensure that this external object is in the correct state when the test is executed. This means that no matter what other tests have been run, what order they've been run in, or their results, the test developer needs to ensure that this external state is acceptable. As external test dependencies grow, this management of external state can become a significant or even dominant aspect of test development. In other words, the more unnecessary dependencies a module has, the more time a developer will need to spend thinking about those dependencies rather than focusing on testing the functionality at hand. Babysitting unnecessary dependencies doesn't generate information or value. [4]

So we can see that poorly modularized code is generally harder to test, and that's good to know. But the goal of this series is to explore how testability reflects other qualities, not the other way around. How, then, can the testability of our code inform us about its modularity?

The important point is this: if you find that your code is easy to test, then that's a good indication that your code is modular! If you sit down to write some tests and you find that your fixtures are easy to write, the extent of the tests is easy to define, and you don't find that you're constantly needing to consider distant side-effects, then you're probably testing modular code. To carry on with the barometer metaphor, you've got high-pressure, and you can see blue skies and little white puffy clouds. It's time for a picnic.

On the other hand, if your testing efforts involve lots of ceremony - instantiating objects, wiring them together, and bootstrapping subsystems - or if you feel like you're really just shotgunning tests at the code rather than pinpointing the "obvious" testing points, then chances are that you've got some modularity issues.

Like I said, modularity is perhaps the most obvious quality that we can associate with testability. And the way to leverage testability in this case is to pay attention to what your testing efforts tell you. There's a lot of information in the work involved in developing tests, and drawing insight from this work is a valuable doubling-up of that effort.

A corollary to this insight is that, when you find yourself having difficulty writing some tests, you should step back and ask yourself if these problems stem from poor modularity. Testing can be a bit repetitive, though often with minor tweaks, and this is a wonderful environment in which to spot repeated bad patterns. And sometimes consideration of these patterns will show where you might have an opportunity for better modularization of your code.

[1]See for example some of Martin Fowler's thoughts on the topic, discussions about the interaction between type-systems and modularity, and even some information-theoretic approaches.
[2]Operating systems and standard libraries are "other modules" by most measures.
[3]Consider a test for a specific exception. If some dependency is throwing that exception and the function under test is, in fact, not throwing the exception, then the test is passing erroneously.
[4]And as Gerald Weinberg says, tests are about producing information.

The Primacy of Testability

Austin Bingham from Good With Computers

The job of a software architect [1] is difficult, just like almost every role in software development. They have to keep track of many subtly interacting quality attributes, often on multiple projects, any one of which may be too big or evolving too quickly to meaningfully keep in mental cache. To make matters worse, architects don't have near the level of tool support - compilers, static analysis tools, auto-completion - available to developers. They are much more reliant on experience, awareness, intuition, and heuristics.

In light of this, it's interesting and useful to consider what tools are available to help architects. In particular, I want to look at the role of testability in the architect's job, and to try to show how it can serve as a meaningful proxy for other, perhaps more important qualities in a software system. Testability is a quality that can promote the health of other desirable qualities, and it can serve as an indicator of whether these requirements are being met. The metaphor I like to use is that testability is a kind of barometer for software architects. A barometer only really tells you the air-pressure, but you can often use this to determine if there's going to be rain. Testability only really tells you how amenable your code is to useful testing, but you can often use this to help determine if your system is modular, organizationally scalable, and so forth.

What is testability anyway?

To meaningfully discuss testability as a tool, we need to establish some definition of what it means. Like "software architect", there is no perfect answer. On some level all software is testable in that you can test it. By hook or by crook you can write some code that verifies the behavior of pretty much anything with a specification. So clearly just "being testable" isn't a sufficient definition.

At the same time, it's also pretty clear that some software is simpler to test than other software. It may be easy to test for a number of reasons. Perhaps it's easy to understand, so that you have a clear understanding of how to test it thoroughly and properly. Perhaps the chunk of code is easy to instantiate without requiring a whole bunch of scaffolding and support objects. This not only saves on keystrokes but also has other big benefits: it isolates behavior, it may mean your tests are faster, and it generally means that your tests are easier to understand and thus maintain.

If you do a little poking around you'll find that people have hit upon certain code qualities that generally influence the testability of a piece of code. [2] But in the end we don't really have a "testability-o-meter" that we can point at a piece of code. There's no accepted way to assign a "testability rating" to software that tells you if code is more or less testable than other code, or even if it's "easy to test" or "hard to test". We can sometimes get these kinds of numbers for other qualities like modularity or complexity, and things like "scalability" also lend themselves to being measured, but testability isn't (yet) in that realm.

Instead, determining if something is testable is a decision that people need to make, and it's a decision that you can only make in an informed way if you understand code. And this is why my definition of "software architect" - from a practical standpoint - includes being able to understand code well at many levels. You have to be able to recognize when, say, dependency injection could replace local object construction to reduce coupling in a system. You need to be able to spot - or at least know to be on the lookout for - circular dependencies between modules. And in general you're going to need to be able to do this not only with code that you're writing but with code that you only see in reviews or maybe only see described in documents.
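
For instance, here is a hypothetical Python sketch of that dependency injection point: the version that constructs its collaborator internally drags a real, network-facing object into every test, while the version that accepts the collaborator can be tested with a trivial stub:

    class PriceFeed:
        """Stands in for a real component that talks to an external service."""
        def latest(self, symbol):
            raise RuntimeError("no network access in tests")

    class TightlyCoupledPortfolio:
        def __init__(self):
            self.feed = PriceFeed()              # local construction: hard-wired dependency

        def value(self, holdings):
            return sum(qty * self.feed.latest(sym) for sym, qty in holdings.items())

    class Portfolio:
        def __init__(self, feed):
            self.feed = feed                     # injected: anything with .latest() will do

        def value(self, holdings):
            return sum(qty * self.feed.latest(sym) for sym, qty in holdings.items())

    class StubFeed:
        def latest(self, symbol):
            return 100.0                         # a fixed price is plenty for a unit test

    def test_portfolio_value():
        assert Portfolio(StubFeed()).value({"ACME": 3}) == 300.0

    test_portfolio_value()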

Why testability?

So I've just told you that testability is hard to measure or even to define. In fact, I've told you that to make heads or tails of it you need to be an experienced programmer. On its face, then, it sounds like the cure is worse than the disease: yes, you've got complexity in your projects to deal with, but now I want you to do something even harder to make those problems go away.

On some level that's true! Gauging testability isn't simple and it's not perfect, but by targeting testability we get a couple of important benefits, because testability is special.

Testability represents your first customer, your first users: your tests! Tests are very often the first place your code is used outside of your head. This means that this is where you'll first spot difficult APIs or awkward relationships that slipped through your design.

Tests force us to use code, and they force us to consider it at many different zoom levels - from unit tests to functional tests to integration tests, we get to see it all. And tests can - and should - happen early and often in the development process. This is how you get maximum benefit from them.

If you're paying attention you'll notice that I just made a significant shift in terminology. I went from talking about "testability" to "tests", from "code that can be tested" to "code with tests". I guess it's arguable that you can have testable code without actually having tests, but that seems a bit academic to me. I've gone on and on about how difficult it is to measure testability, but one of the most effective and practical ways to assess testability is simply to test your code!

So for my purposes, testable code is also tested code. I won't quibble about precisely how much testing is enough, or at what level it should be done; there are plenty of other people who are happy to tell you that. [3]

But if your tests add value to your software system, then I'd wager that they exercise your code enough to highlight a lot of the software qualities for which testability is a barometer.

Qualities correlated with testability

This article lays the foundation for the rest of this series in which we'll look at various software qualities that correlate with testability. Some of these qualities, such as modularity, are directly reflected in the testability of a system. Other qualities, like performance, are supported or enabled by testability but aren't directly related to it.

[1]This is an ill-defined term, to be sure, but I'm essentially talking about the person tasked with shepherding the so-called non-functional requirements...whether their job title is "software architect" or not.
[2]Wikipedia's got a nice, non-controversial list of things like "observability", "heterogeneity", and "understandability", and all of these things certainly would influence how easy or hard it is to test a piece of code.
[3]See for example TDD, BDD, or James Coplien's thoughts.