I thought I’d open 2021 with a personal story of how I got where I am today (no, I’m not in San Francisco, although that is the Golden Gate in the picture).
I started programming when I was 12 (a ZX81, then a BBC) and went on to a very successful career into my 30s – including a spell in California. Increasingly I found the code was not the challenge; I could make the code do what I wanted. The problem was the way we were managed, or mismanaged: the things we were asked to do and the way we were organised.
So began my journey into “management”. Determined to be a better manager than those I had worked for I took myself back to school. During a year in business school I learned a lot of good stuff, I discovered “organisational learning” and I reconnected with my dyslexia.
“Agile” was just breaking at the time and in agile I saw the same ethos of learning I was getting so excited about. The reports of agile teams I read described the best aspects of the developments I had worked on. For me, managing software delivery and enhancing agile are the same thing.
My mission became to help my younger self: help technologists deliver successful products and enjoy satisfying work. Most of what I do falls under the “agile” banner but really it is about creating the processes and environments where people can learn, thrive and excel.
When people are getting satisfaction from their work delivering great products, businesses succeed and grow. And as software has come to underpin every digital initiative my work has expanded.
The great, unspoken, divide in agile is between those who believe the individual is all powerful and the centre of everything, and those who believe the individual is the product of the system.
Weinberg’s “law” is taken as unquestionable truth by most people in the agile community. Whatever the conversation, whatever the problem, a wise old sage will stand back and say “It’s always a people problem.” And in a way they are right.
People made the modern society and economy. People work in organisations, people make the rules, people enforce the rules, and ultimately someone in that organization made it the way it is. If only those people would act differently, make different rules, if only those people had greater foresight.
The problem is, once people have put all the pieces together the result is a system, not necessarily a technology system but a system of rules, norms, standards, accepted practices and “the way things work around here.” Which puts me in mind of Winston Churchill:
“We shape our buildings, and afterwards our buildings shape us.” Churchill, October 1943.
Yes, people shape our organisations, our processes and our culture, so it is always a people problem. But people are as much prisoners of those decisions as they are controllers of them. Changing those things means changing the system, and while that change needs to be made by people – and therefore is a people problem – the power to make that change is distributed.
For example, Donald Trump has tried to change the US system in so many ways during the last four years. Often he has been frustrated by the rules of the system. He’s made some changes, and some of his actions will create changes in future. Some of us are glad the system constrained him, others are unhappy. Despite being the most powerful man in the world Trump was constrained.
So while it is “always a people problem” – in that people created the system and operate the system – doing something about it isn’t just a case of asking the system operators to do things differently.
This is the great agile divide.
There are many, perhaps most, in the agile community who believe that every change, every improvement, is rooted in people. People are the centre of agile and all energies should be directed to help them do great work and create a system they can work within.
Then there are others who believe that it is the system which needs to be centre stage: because people work within a system.
Watch “Stockless Production” and ask yourself: in the first simulation, when the waste is piling up, is there anything the men on the production line can do to improve it? I don’t think so, because they are inside the system and the system is controlled by others.
I see the great divide again and again in the “agile” advice given out. The community doesn’t recognise the divide but every speaker, writer, consultant and coach is biased one way or the other. Actually, “coaches” tend to put people first while “consultants” work with the system. Regurgitating “it’s a people problem” hides the divide.
Generally I align myself with the second group – it’s one of the reasons I associate with the Kanban community. But the process group has a problem.
In the days before agile there was a widespread belief that one could define the process, turn the handle and everything would be alright. That logic led to much of what fell under ISO-9000, TickIT, CMM(I) and other “process improvement initiatives.” I suffered under some of those regimes and I see the scars on others.
The problem was that this thinking led to process experts who decided the process for others to follow, and process police who enforced it. So again, it is a “people problem”. But those process people are as much prisoners to the process as prison guards are. (I don’t want to be one of those people but I probably look like one of them on occasions.)
So, where does that leave us?
I no longer agree with Jerry Weinberg: people may create the problem, people may be key to fixing the problem but fixing a system is more than people.
I see my role, as an Agile Guide, as creating systems people can thrive in: places where people are not just cogs in a process, where people can do their best work, where people problems can be seen and addressed. The system needs changing to respect the people; equally, those people deserve respect and involvement while the system is being changed.
My old ACCU friend Derek Jones has been beavering away at his Evidence Based Software Engineering book for a few years now. Derek takes an almost uniquely hard-nosed, evidence-driven view of software engineering. He works with data. This can make the book hard going in places – and I admit I’ve only scratched the surface. Fortunately Derek also blogs, so I pick up many a good lead there.
At first this finding – that most software has a short life and dies young – worried me: so much of what I’ve been preaching about software living for a long time is potentially rubbish. But then I remembered: what I actually say, when I have time, when I’m using all the words, is “Successful software lives” – or survives, even is permanent. (Yes, it’s “temporary” at some level, but so are we; as Keynes said, “In the long run we are all dead.”)
My argument is: software which is successful lives for a long time. Unsuccessful software dies.
Successful software is software which is used, software which delivers benefit, software which fills a genuine need and continues filling that need – and, most importantly, software which delivers more benefit than it costs to keep alive. If it is used it will change, and that means people will work on it.
So actually, Derek’s observation and mine are almost the same thing. Derek’s finding is almost a corollary to my thesis: Most software isn’t successful and therefore dies. Software which isn’t used or doesn’t generate enough benefit is abandoned, modifications cease and it dies.
Actually, I think we can break Derek’s observation into two parts, a micro and a macro argument.
At the micro level are lines of code and functions. I read Derek’s analysis as saying: at the function level code changes a lot at certain times. An awful lot of that change happens at the start of the code’s life when it is first written, refactored, tested, fixed, refactored, and so on. Related parts of the wider system are in flux at the same time – being written and changed – and any given function will be impacted by those changes.
While many lines and functions come and go during the early life of software, eventually some code reaches a stable state. One might almost say Darwinian selection is at work here. There is a parallel with our own lives: during our first five years we change a lot; we start school and things slow down, but still, until about the age of 21 our lives change a lot; after 30 things slow down again. As we get older life becomes more stable.
Assuming software survives and reaches a stable state it can “rest” until such time as something changes and that part of the system needs rethinking. This is Kevlin Henney’s “Stable Intermediate Forms” pattern again (also in ACCU Overload).
At a macro level Derek’s observation applies to entire systems: some are written, used a few times and thrown away – think of a data migration tool. Derek’s data has little to say about whether software lifetimes correspond to expected lifetimes; that would be an interesting avenue to pursue but not today.
There is a question of cause and effect here: does software die young because we set it up to die young, or because it is not fit enough to survive? Undoubtedly both cases happen, but let me suggest that a lot of software dies early because it is created under the project model, and once the project ends there is no way for the software to grow and adapt. Thus it stops changing, usefulness declines and it is abandoned.
The other question to ponder is: what are the implications of Derek’s finding?
The first implication I see is simply: the software you are working on today probably won’t live very long. Sure you may want it to live for ever but statistically it is unlikely.
Which leads to the question: what practices help software live longer?
Or should we acknowledge that software doesn’t live long and dispense with practices intended to help it live a long time?
Following our engineering handbook one should: create a sound architecture, document the architecture, comment the code, reduce coupling, increase cohesion, and other good engineering practices. After all we don’t want the software to fall down.
But does software die because it fails technically? Does software stop being used because programmers can no longer understand the code? I don’t think so. The “Big Ball of Mud” pattern suggests poor-quality software is common.
When I was still coding I worked on lots of really crummy software that didn’t deserve to live, but it did because people found it useful. If software died because it wasn’t written for old age then one wouldn’t hear programmers complaining about “technical debt” (or technical liabilities, as I prefer).
Let me suggest: software dies because people no longer use it.
Thus, it doesn’t matter how many comments or architecture documents one writes, if software is useful it will survive, and people will demand changes irrespective of how well designed the code is. Sure it might be more expensive to maintain because that thinking wasn’t put in but…
For every system that survives to old age many more systems die young. Some of those systems are designed and documented “properly”.
I see adverse selection at work: systems which are built “properly” take longer and cost more, and in the early years of life those additional costs are a hindrance. Maybe engineering “properly” makes the system more likely to die early. Conversely, systems which forego those extra costs stand a better chance of demonstrating their usefulness early and breaking even in terms of cost-benefit.
Something like this happened with Multics and Unix. Multics was an ambitious effort to deliver a novel OS but failed commercially. Unix was less ambitious and was successful in ways nobody ever expected. (The CPL, BCPL, C story is similar.)
Finally, what about tests – is it worth investing in automated tests?
Arguably, writing tests so software will be easier to work on in future is waste, because the chances are your software will not live. However, at the unit test level, and even at the acceptance test level, that is not the primary aim of such tests. At this level tests are written so programmers create the correct result faster. Once someone is proficient, writing test-first unit tests is faster than debug-later coding.
To be clear: the primary driver for writing automated unit tests in a test first fashion is not a long term gain to test faster, it is delivering working code faster in the short term.
However, writing regression tests probably doesn’t make sense because the software is unlikely to be around long enough for them to pay back. Fortunately, if you write solid unit and acceptance tests these double as regression tests.
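The test-first point can be sketched with a tiny example. Everything here is hypothetical and mine, not from the post: a `parse_price` function and its expected behaviour, where the tests are written first to pin down what “correct” means, and the function is then written to make them pass.

```python
# A minimal test-first sketch using the standard unittest module.
# parse_price and its price format are illustrative assumptions.

import unittest


def parse_price(text):
    """Convert a price string like '£4.99' to pence.

    Written *after* the tests below, to make them pass - the tests
    defined the correct behaviour before any implementation existed.
    """
    return round(float(text.lstrip("£$")) * 100)


class TestParsePrice(unittest.TestCase):
    # These tests came first; a mistake in the implementation
    # surfaces immediately rather than in a later debug session.
    def test_pounds_and_pence(self):
        self.assertEqual(parse_price("£4.99"), 499)

    def test_whole_pounds(self):
        self.assertEqual(parse_price("£10"), 1000)
```

Run with `python -m unittest` in the directory holding the (hypothetical) test file. The same tests later double as the regression suite mentioned above, at no extra cost.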
From time to time I come across software platform teams – also called infrastructure teams. Such teams provide software which is used by other teams rather than end customers; as such they are one step, or even more, removed from customers.
Now I will admit part of me doesn’t want these teams to exist at all but let’s save that conversation for another day. I acknowledge that in creating these teams organisations act with the best intentions and there is a logic to the creation of such teams.
It is what happens with the Product Owners that concerns me today.
Frequently these teams struggle with product owners.
Sometimes the teams don’t have product owners at all: after all these teams don’t have normal customers, they exist to do work which will enhance the common elements and therefore benefit other teams who will benefit customers. So, the thinking goes, coders should just do what they think is right because they know the technology best.
Sometimes an architect is given the power of product ownership: again the thinking is that, as the team is delivering technology to technologists, someone who understands the technology is the best person to decide what will add value.
And sometimes a product owner exists but they are a developer, they may even still have development responsibilities and have to split their time between the two roles. Such people obtain their role not because of their marketing skills, their knowledge of customers or because they are good at analysing user needs. Again it is assumed that they will know what is needed because they know the technology.
In my book all three positions are wrong, very wrong.
A platform team absolutely needs a customer-focused product owner: a product owner who appreciates that the team has two tiers of customers – first other technology teams, and then, beyond them, actual paying customers. This makes understanding the benefit to be delivered more difficult, which should not be a reason to duck the issue but a reason to work harder at it.
If the platform team are to deliver product enhancements that allow other teams to deliver benefit to customers then it is not a case of “doing what the technology needs.” It is, more than ever, a case of doing things that will deliver customer benefit.
Therefore, platform teams need the strongest and best product owners who have the keenest sense of customer understanding and the best stakeholder management skills because understanding and prioritising the work of the platform team is a) more difficult and b) more important.
A platform team that is not delivering what other teams need does more damage to more teams and customers – in terms of benefit not delivered – than a regular team that just delivers to customers. Sure the PO will need to understand the technology and the platform but that is always the case.
So, to summarise and to be as clear as possible: Platform teams need the best Product Owners you have available; making a technical team member, one without marketing and/or product ownership experience, the product owner is a mistake.
“I’m frankly amazed at how far the #NoProjects throwaway Twitter comment travelled. But even today, in the bank where I work, the same problems caused by project-oriented approach to software are manifest as the problems I saw at xxxx xxx years ago.” Joshua Arnold
Once upon a time, 2 or 3 years back, #NoProjects was a hot topic – so hot it was frequently in flames on Twitter. For many of the #NoProjects critics it was little different from #NoEstimates. It sometimes felt that to mention either on Twitter was like pulling the pin and tossing a hand grenade into a room.
I never blocked anyone but I did mentally tune out several of those critics and ignore their messages. However, I should say thank you to them: in the early days they did help flesh out the argument, and in the later days they were a great source of publicity. If we wanted to publicise an event one only had to add #NoProjects to a tweet and stand back.
The hashtag still gets used, but far less often; the critics have fallen back and rarely give battle, and as I’ve said before, #NoProjects won. But, as a recent conversation on the old #NoProjects Slack channel asked: why do we still have projects? Why does nobody actively say they do #NoProjects?
In part that is because “No” doesn’t tell you what to do, it only tells you what not to do – so what do you do?
In retrospect we didn’t have the language to express what we were trying to say. Over time, with the idea floating around, we found that language: outcome oriented, teams over projects, products over projects, product centric, stable teams – these and similar expressions all convey the same idea. It’s not about doing a project, it’s not even about doing agile; it is about creating sustainable outcomes and business advantage.
The same thinking is embedded in AgendaShift, “The Spotify Model”, SAFe and other frameworks. These are continuity models rather than the stop-go project model. One might call all these ideas and models post-project thinking.
In many ways the hashtag died because we found better, and less confrontational, language to express ourselves.
There was a growing, if implicit, understanding that this is digital not IT, it is about digital business, and that means continuity. The project model of IT is dead.
Which begs the question: why aren’t these approaches more widespread?
The thinking is there, the argument has been made against projects and for alternative models, and you would be hard pressed to find a significant advocate of agile who would argue differently but companies are still, overwhelmingly, project oriented.
When I’m being cynical I’d say, like agile, it is a generational thing. The current generation of leaders – or at least those in positions of management authority – built their success on delivering IT projects. Only as this generation relinquishes leadership will things change.
Optimistically I remember what science fiction author William Gibson once said:
“The future is here, it’s just unevenly spread around”
For digital start-ups this isn’t an issue: they are born post-project, they create digital products, the business and technology are inseparable. The project model is counter to their DNA.
Some legacy companies have consciously gone post-project and are recognising the benefits: the capitalist model suggests these early movers – risk takers – will gain the most. Other legacy companies have adopted parts of the continuous model but cling to the project model too; some will make the full jump, and some – most? – will fall back.
Unfortunately Covid, the hangover of bail-outs from the 2007-8 financial crash and the failure to break up monopolies (Google, Facebook, Amazon specifically) mean capitalism is not exerting its usual Darwinian force.
Projects will exist for a long time yet, #NoProjects will continue small scale disruption but in the long term the post-project organizations will win out. Hopefully I’ll be alive to see it but I have no illusion, the rest of my career will be spent undoing the damage the project model does.
“Much of the writing I’ve seen assumes that software can be shipped directly into the hands of customers to create value (hence the “smaller packages, more often” approach). My experience has been that especially with new launches or major releases, there needs to be a threshold of minimum functionality that needs to be in place.”
Check your phone. Is it set to auto-update apps? Is your desktop OS set to auto-update? Or do you manually choose when to update?
Look at the update notes on phone apps from the likes of Uber, Slack, SkyScanner, the BBC and others. They say little more than “we update our apps regularly.”
Today people are used to technology auto-changing on them. They may not like it, but would they like one big change any better?
My guess is that most people don’t even notice those updates. When you batch up software releases users see lots of changes at once, when you release them as a regular stream of small updates then most go unnoticed.
Still, users will see some updates change things, and they will not like some of these. But how long do you want to hide these updates from your users?
The question that needs asking is: what is the cost of an update? The vast majority of updates are quick, easy, cheap and painless.
Of course people don’t like updates which introduce a new UI, a new payment model or which demand you uninstall an earlier app but when updates are easy and bring benefits – even benefits you don’t see – they happily accept them.
And remember, the alternative to 100 small updates is one big update where people are more likely to see changes.
If your updates are generally good why hold them back? And if your updates are going in the wrong direction shouldn’t you change something? If you run scared of giving your users changes then something is wrong.
Nor is it just apps. Most people (in Europe at least) use telco supplied handsets and when the telco calls up and says “Would you like a new handset at no additional cost?” people usually say Yes. That is how telcos keep their customers.
The question continues,
“there needs to be coordination across the company (e.g. training people from marketing, sales, channel partners, customer/ internal support, and so on). There is also the human element – the capacity to absorb these changes. As a user of tech, I’m not sure I could work (well) with a product where features were changing, new ones being added frequently (weekly or even monthly), etc.”
If every software update introduced a big change then these would be problems. But most updates don’t: most introduce teeny-tiny changes.
Of course sometimes things need to change. The companies which do this best invest time and energy in making these painless. For example, Google often offers a “try our new beta version” for months before an update. And for months afterwards they provide a “use the old interface option.”
The best companies invest in user experience design too. This can go a long way to removing the need for training.
Just because a new feature is released doesn’t mean people have to use it. For starters new changes can be released but disabled. Feature toggles are not only a way of managing source code branches but they also allow new features to be released silently and switched on when everyone is ready. This allows for releases to be de-risked without the customer seeing.
And when they are switched on they can be switched on for a few users at a time. Feedback can be gathered and improvements made before the next release.
That can be co-ordinated with training: make the feature toggle user switchable, everyone gets the new software and as they complete the training they can choose to switch it on.
Now marketing… yes, marketeers do like the big bang release – “look at us, we have something shiny and new!”
You could leave lots of features switched off until your marketeers are ready to make a big bang. That also reduces the problem of marketeers needing to know what will be ready when, so they know when to launch a campaign.
Or you could release updates without any fuss and market when you have the critical mass.
Or you could change your marketing approach: market a stream of constant improvements rather than occasional big changes.
Best of all market the capabilities of your thing without mentioning features: market what the app or system allows you to do.
For years I’ve been hearing “business people” bemoan developers who “talk technical”, but I see exactly the same thing with marketeers. Look at Sony televisions: what is the “picture processor X1”? And why should I care? I can’t remember when I last changed the contrast on my television, so the “Backlight master drive” (whatever that is) means nothing to me.
Or, look at Samsung mobile phones, 5G, 5G, 5G – what do I care about 5G? What does 5G allow me to do that I can’t with my current phone?
Drill down, look at the Samsung Galaxy lineup: CPU speed, CPU type, screen size, resolution, RAM, ROM – what do I care? How does any of that help me? – Stop throwing technical details at me!
Don’t market features, market solutions. Tell me what “job to be done” the product addresses, tell me how my life will be improved. Marketing a solution rather than features decouples marketing from the upgrade cycle.
So sure, people don’t like technology change – I’ll tell you a story in my next blog. But when technology change brings benefits are they still resistant?
Now, with modern technology, with agile and continuous delivery, technology can change faster than business functions like training and marketing. We can either choose to slow technology down or we can change those functions to work differently – not necessarily faster but differently in a way that is compatible with agile technology change.
These kinds of tensions are common in businesses which move across to agile-style working. A lot of companies think agile applies to the “software engine room” and the rest of the business can carry on as before. Unfortunately they have released the Agile Virus – agile working has introduced a set of tensions into the organization which must either be embraced or killed.
Once again technology is disruptive.
Perhaps, if the marketing or training department are insisting on big-bang releases, it is they who should be changing. Maybe, just maybe, they need to rethink their approach; maybe they could learn a thing or two about agile and work differently with technology teams.
“If you’re not embarrassed by the product when you launch, you’ve launched too late.” Reid Hoffman, founder LinkedIn
Years ago I worked for a software company supplying Vodafone, Verizon, Nokia, etc. The last thing those companies wanted was to update the software on their engineers’ PCs every month, let alone every week!
I was remembering this episode when I was drafting what will be my next post (“But our users don’t want to change”) and thought it was worth saying something about how regular releases change the risk-reward equation.
When you only release occasionally there is a big incentive to “get it right” – to do everything that might be needed and to remove every defect whether you think those changes are needed or not. When you release occasionally second chances don’t happen for weeks or months. So you err on the side of caution and that caution costs.
Regular releases change that equation. Now second chances come around often; additions and fixes are easy. Now you can err on the side of less, and that allows you to save time and money.
The ability to deliver regularly – every two weeks as a baseline, every day for high performing teams – is more important than the actual deliveries. Releasable is more important than released. The actual question of whether to release or not is ultimately a question for business representatives to decide.
But being releasable on a very regular basis is an indicator of the team’s technical ability and the innate quality of the thing being built. Teams which are always asking for “more time” may well have a low-quality product (lots of bugs to fix) or have something to hide.
The fact that a team can, and hopefully does, release (to live) massively reduces the risk involved. When software is only released at the end – and usually only tested before that end – then risk is tail-loaded. Having releasable – and especially released – software reduces risk because the risk is spread across the work.
Actually releasing early further reduces risk because every step in the process is exercised. There are no hidden deployment problems.
That offsets sunk costs and combats commitment escalation. Because the business stakeholders can say “game over” at any time and walk away with a working product, they are no longer held captive by the fear of sunk costs, suppliers and career-threatening failures.
It is also a nice side effect that releasing new functionality early – or just fixing bugs – increases the return on investment because benefits are delivered earlier and therefore start earning a return sooner.
Just because new functionality is completed, and even released early, does not mean users need to see it. Feature toggles allow features and changes to be hidden from users – or only enabled for specified users. Releasing changed software with no apparent change may look pointless, but it actually reduces risk because the changes are out there.
That also means testing is simplified. Rather than running tests against software with many changes, tests are run against software with few changes, which makes testing more efficient even if users see no difference. And it removes the “we can’t roll back one fix” problem that arises when one of ten changes doesn’t pass.
Back with the Vodafone engineers who didn’t want their laptops updated: that was then, in the days of CD installs. Today the cloud changes that – there is only one install to do, so it isn’t such an inconvenience. They could have the updates with disruptive changes hidden, while still getting the non-disruptive changes, e.g. bug fixes.
In a few cases regular deliveries may not be the right answer. The key thing though is to change the default answer from “we only deliver occasionally (or at the end)” to “we deliver regularly (unless otherwise requested).”