ResOrg 2.0.9.29 has been released

Products, the Universe and Everything from Products, the Universe and Everything

ResOrg 2.0.9.29 has now been released. This is a recommended maintenance update for ResOrg 2.0, and includes the following changes:

  • When installing/uninstalling the Visual Studio 2017/2019 extension (ResOrgPackage.vsix) the installer now invokes VSIXInstaller.exe silently rather than interactively, writing a logfile to %TEMP%.

  • Fixed a bug in the installer which could prevent the Visual Studio plug-in from being correctly installed to Visual Studio 2019.

  • Fixed a bug in the installer which could prevent the Visual Studio plug-in from being correctly uninstalled from Visual Studio 2017 or 2019.

  • Replaced the "invalid key entered" balloon tip in the Registration Key Dialog with an inline text field.

    The ResOrg plug-in included in this build is not compatible with Visual Studio 2022 Preview.

    Support for Visual Studio 2022 Preview is currently being worked on in the ResOrg development branch, and should become available in due course, but in the meantime ResOrgApp can edit VS2022 resource symbol files.

Download ResOrg 2.0.9.29

Actual implementation times are often round numbers

Derek Jones from The Shape of Code

To what extent do developers consciously influence the time taken to actually complete a task?

If the time estimated to complete a task is rather generous, a developer has the opportunity to follow Parkinson’s law (i.e., “work expands so as to fill the time available for its completion”); if the time is slightly less than appears to be required, they might work harder to finish within the estimated time (much as some marathon runners work to a target time).

The use of round numbers is a prominent pattern seen in task estimation times.

If round numbers appeared more often in the actual task completion time than would be expected by chance, it would suggest that developers are sometimes working to a target time. The following plot shows the number of tasks taking a given amount of actual time to complete, for project 615 in the CESAW dataset (similar patterns are present in the actual times of other projects; code+data):

Number of tasks taking a given amount of time to complete, for project 615.

The red lines are bi-exponential distributions fitted to the ‘spike’ points (i.e., round numbers, circled in grey) and to the non-spike points (spikes automatically selected, see code for details); the green and purple lines are the two components of the non-spike fit.
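
The author's spike selection and bi-exponential fitting is done in R (linked as code+data). Purely as an illustrative sketch, the following Python flags candidate round-number spikes by comparing the count at each multiple of 30 minutes with the counts at neighbouring minute values; the file name and column name are assumptions about how the per-task data might be laid out.

from collections import Counter
import csv

# Hypothetical CSV of per-task actual times; "actual_minutes" is an assumed column name.
with open("project_615_tasks.csv") as f:
    times = [round(float(row["actual_minutes"])) for row in csv.DictReader(f)]

counts = Counter(times)
for t in sorted(counts):
    if t > 0 and t % 30 == 0:                      # candidate round numbers
        neighbours = [counts.get(t + d, 0) for d in (-2, -1, 1, 2)]
        background = sum(neighbours) / len(neighbours)
        if counts[t] > 3 * max(background, 1):     # crude spike test
            print(f"{t:4d} min: {counts[t]} tasks vs ~{background:.1f} at nearby values")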

Tasks are not always started and completed in one continuous work session; work may be spread over multiple sessions, and the CESAW data includes the start/end time of every work session associated with each task (for project 615, 85% of tasks involve more than one work session). The following plots are based on work sessions, rather than tasks, for tasks worked on over two (left) and three (right) sessions; colored lines denote session ordering within a task (code+data):

Number of sessions taking a given amount of time to complete, for project 615.

Shorter sessions dominate for the last session of task implementation, and spikes in the counts indicate the use of round numbers in all session positions (e.g., 180 minutes, which may be half a day).
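
As a rough sketch (not the author's code), grouping the session records by task shows both of these properties: the share of tasks with more than one session, and the tendency for the final session to be the shortest. The file and column names are assumptions about the data layout, and rows are assumed to appear in session order within each task.

import csv
from collections import defaultdict

sessions = defaultdict(list)                       # task_id -> session lengths, in order
with open("project_615_sessions.csv") as f:        # hypothetical file and column names
    for row in csv.DictReader(f):
        sessions[row["task_id"]].append(float(row["session_minutes"]))

multi = [s for s in sessions.values() if len(s) > 1]
print(f"{len(multi) / len(sessions):.0%} of tasks involve more than one session")

mean_first = sum(s[0] for s in multi) / len(multi)
mean_last = sum(s[-1] for s in multi) / len(multi)
print(f"mean first-session length {mean_first:.0f} min, mean last-session length {mean_last:.0f} min")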

Perhaps round number work session times are a consequence of developers using round number wall-clock times to start and end work sessions. The plot below shows (left) the number of work sessions starting at a given number of minutes past the hour, and (right) the number of work sessions ending at a given number of minutes past the hour; both for project 615 (code+data):

Rose diagrams for minutes past the hour of work session wall clock start (left) and end (right).

The green arrow shows the direction of the mean, and the almost invisible interior line shows that its length is almost zero. The five-minute points have slightly more session starts/ends than the surrounding minute values, but they are more like bumps than spikes. The start of the hour and the 30-minute mark have prominent spikes, which might be caused by the start/end of the working day and the start/end of the lunch break.
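
The mean direction and its near-zero length in the rose diagrams are circular statistics; a minimal sketch of the calculation, with made-up minute values, is:

import math

def circular_mean(minutes):
    # Treat each minute-past-the-hour as an angle on a 60-minute clock face.
    angles = [2 * math.pi * m / 60 for m in minutes]
    x = sum(math.cos(a) for a in angles) / len(angles)
    y = sum(math.sin(a) for a in angles) / len(angles)
    direction = (math.atan2(y, x) % (2 * math.pi)) * 60 / (2 * math.pi)
    length = math.hypot(x, y)                      # 0 = no preferred minute, 1 = all identical
    return direction, length

print(circular_mean([0, 15, 30, 45]))              # evenly spread -> length ~0
print(circular_mean([0, 0, 5, 55]))                # clustered near the hour -> length near 1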

Five minutes is a convenient small rounding interval, either to expand implementation time or to target as a completion time. The following plot shows, for each of the 47 individuals working on project 615, the number of actual session times and the number exactly divisible by five. The green line shows the case where every actual is divisible by five, the purple line the case where 20% are divisible by five (as expected for unbiased timing), the dashed purple lines show one standard deviation, and the blue/green line is a fitted regression model, 0.4*Actual^(0.94±0.04) (code+data):

Number of sessions against number of sessions whose actual time is divisible by five, for 47 people working on project 615.

It appears that on average, five-minute session times occur twice as often as expected by chance; two individuals round all their actual session times (ok, it’s not that unlikely for the person with just two sessions).
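
A sketch of the underlying comparison (not the fitted regression model): count, per person, the sessions whose recorded length is an exact multiple of five minutes and compare with the 20% expected by chance. The file and column names are again assumptions about the data layout.

import csv
from collections import defaultdict

per_person = defaultdict(lambda: [0, 0])           # person_id -> [sessions, divisible by 5]
with open("project_615_sessions.csv") as f:        # hypothetical file and column names
    for row in csv.DictReader(f):
        minutes = round(float(row["session_minutes"]))
        per_person[row["person_id"]][0] += 1
        per_person[row["person_id"]][1] += (minutes % 5 == 0)

for person, (total, div5) in sorted(per_person.items()):
    print(f"{person}: {div5}/{total} divisible by 5 ({div5 / total:.0%} vs 20% by chance)")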

Does it matter that some developers have a preference for using round numbers when recording time worked?

The use of round numbers in the recording of actual work sessions will inflate the total actual time for most tasks (because most tasks involve more than one session, and assuming that most rounding is not caused by developers striving to meet a target). The amount of error introduced is probably a lot less than the time variability caused by other implementation factors (I have yet to do the calculation).

I see the use of round numbers as a means of unpicking developer work habits.

Given the difficulty of getting developers to record anything, requiring them to record to minute-level accuracy appears at best optimistic. Would you work for a manager who required this level of effort detail (I know there is existing practice in other kinds of jobs)?

Will They Blend? – a.k.

a.k. from thus spake a.k.

Last time we saw how we can create new random variables from sets of random variables with given probabilities of observation. To make an observation of such a random variable we randomly select one of its components, according to their probabilities, and make an observation of it. Furthermore, their associated probability density functions, or PDFs, cumulative distribution functions, or CDFs, and characteristic functions, or CFs, are simply sums of the component functions weighted by their probabilities of observation.

Now there is nothing about such distributions, known as mixture distributions, that requires that the components are univariate. Given that copulas are simply multivariate distributions with standard uniformly distributed marginals, being the distributions of each element considered independently of the others, we can use the same technique to create new copulas too.
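
As a quick illustration of that recipe (in Python rather than the series' own code): pick a component according to the mixing probabilities, then make an observation of it. The two normal components below are only placeholders for whatever component distributions are being mixed.

import random

def mixture_sample(components, probabilities):
    # Choose a component with the given probabilities, then observe it.
    component = random.choices(components, weights=probabilities, k=1)[0]
    return component()

components = [lambda: random.gauss(0.0, 1.0), lambda: random.gauss(5.0, 2.0)]
print(mixture_sample(components, [0.3, 0.7]))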

Importing/migrating from one peertube server to another

Andy Balaam from Andy Balaam's Blog

My Peertube server is shutting down, so I need to move my videos to another one. The official scripts don’t seem to cover this case very well, so here is what I did.

My script fetches videos and their details and uploads them to the new server via the Peertube API.

Contributions welcome: I was not able to copy video descriptions across, and I was too lazy so I hard-coded the number of tags. Also, I didn’t make a git repo for all this because I felt it needs more thought, but feel free to start one and I will happily contribute this.

This script copies all videos in a single Peertube channel to a different server. You must find the numeric ID of the channel on the new server to copy into, which I did by looking at the responses in the Network tab of Firefox’s developer tools when I clicked on its name in the web interface. It requires bash, curl, youtube-dl and jq.
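
If you would rather avoid the browser's Network tab, the channel's numeric ID can probably also be fetched from the PeerTube API. This is an untested sketch (the endpoint mirrors the video-channels path the script below already uses, and the values are placeholders in the same style as the script), so check it against your server's API documentation:

import json
import urllib.request

server = "INSERT NEW SERVER e.g. https://video.hardlimit.com"
channel = "INSERT CHANNEL URL-NAME on the new server"
with urllib.request.urlopen(f"{server}/api/v1/video-channels/{channel}") as response:
    print(json.load(response)["id"])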

Here’s peertube-migrate-channel.bash:

#!/bin/bash

set -u
set -e

# Modify these for your setup
OLD_SERVER="INSERT OLD SERVER e.g. https://peertube.social"
OLD_CHANNEL="INSERT CHANNEL URL-NAME e.g. trials_fusion"
NEW_SERVER="INSERT NEW SERVER e.g. https://video.hardlimit.com"
NEW_CHANNEL="INSERT NEW CHANNEL ID e.g. 4103"
USERNAME="INSERT_USERNAME for new server e.g. trials"
PASSWORD="INSERT PASSWORD for new server"
TAG1="INSERT_A_TAG e.g trials"
TAG2="INSERT_A_TAG e.g. gaming"
TAG3="INSERT_A_TAG e.g. gameing"

DIR=$(mktemp -d)
API_PATH="${NEW_SERVER}/api/v1"

# Find out how many videos are in the channel
curl -s \
    "${OLD_SERVER}/api/v1/video-channels/${OLD_CHANNEL}/videos/?count=1" \
    > "${DIR}/videos-total.json"

TOTAL=$(jq .total < "${DIR}/videos-total.json")
CURRENT=0
PAGE_SIZE=10

# Get a list of URLS for all the videos

echo -n "" > "${DIR}/urls.txt"

while (( CURRENT < TOTAL )); do
    FILE="${DIR}/videos-page-${CURRENT}.json"

    curl -s \
        "${OLD_SERVER}/api/v1/video-channels/${OLD_CHANNEL}/videos/?start=${CURRENT}&count=${PAGE_SIZE}&skipCount=true" \
        > "${FILE}"

    jq ".data | map(.uuid) | .[]" -r < "${FILE}" >> "${DIR}/urls.txt"

    CURRENT=$((CURRENT + PAGE_SIZE))
done

# Log in to the new server

client_id=$(curl -s "${API_PATH}/oauth-clients/local" | jq -r ".client_id")
client_secret=$(curl -s "${API_PATH}/oauth-clients/local" | jq -r ".client_secret")
token=$(curl -s "${API_PATH}/users/token" \
  --data client_id="${client_id}" \
  --data client_secret="${client_secret}" \
  --data grant_type=password \
  --data response_type=code \
  --data username="${USERNAME}" \
  --data password="${PASSWORD}" \
  | jq -r ".access_token")

# Download and upload each video

tac "${DIR}/urls.txt" \
    | while read ID; do
        URL="${OLD_SERVER}/api/v1/videos/${ID}"
        curl -s "${URL}" > "${DIR}/info-${ID}.json"
        NAME=$(jq -r .name < "${DIR}/info-${ID}.json")
        CATEGORY=$(jq -r .category.id < "${DIR}/info-${ID}.json")
        LICENCE=$(jq -r .licence.id < "${DIR}/info-${ID}.json")
        LANGUAGE=$(jq -r .language.id < "${DIR}/info-${ID}.json")
        PRIVACY=$(jq -r .privacy.id < "${DIR}/info-${ID}.json")

        OLD_VIDEO="https://peertube.social/videos/watch/${ID}"
	mkdir "${DIR}/dl-${ID}"
	youtube-dl "${OLD_VIDEO}" --output="${DIR}/dl-${ID}/dl.mp4"

        echo "Uploading ${OLD_VIDEO}"
        curl "${API_PATH}/videos/upload" \
	  --silent \
          -H "Authorization: Bearer ${token}" \
          --output "${DIR}/curl-out-${ID}.txt" \
          --max-time 6000 \
	  --form name="${NAME}" \
	  --form videofile=@"${DIR}/dl-${ID}/dl.mp4" \
          --form channelId=${NEW_CHANNEL} \
          --form category=${CATEGORY} \
          --form licence=${LICENCE} \
          --form description="TODO VIDEO DESCRIPTION" \
          --form language=${LANGUAGE} \
          --form privacy=${PRIVACY} \
          --form tags="${TAG1}" \
          --form tags="${TAG2}" \
          --form tags="${TAG3}"
    done

Have Google made a $billion skateboard mistake?

Allan Kelly from Allan Kelly Associates

How do you design a car? The diagram answering that question, drawn by Henrik Kniberg, is one of the most famous in the agile world.

I’m guessing many readers know this already: one approach (top of the diagram) is to iteratively design and build all the pieces, put them together and you have a car. This is one interpretation of iterative but until you put the pieces together there is no feedback and no value.

After all, who wants half a car? We know what we want from a car, right? Who needs feedback? Who wants a car with three wheels? – Why waste time experimenting?

The alternative, advocated by minimally viable product people everywhere is to redefine the problem; we don’t want a car so much as transportation, we could start with a very simple – and quick to deliver – transportation system (the skateboard). Because it is delivered sooner we get feedback sooner. We see how people use it, and evolve it over a series of iterations into a motorised car.

Since I know this, you know this and it is on the back of every agile cereal packet one can rest assured that Larry Page, Tim Cook and Jeff Bezos know this, right?

Well no, not according to the Financial Times recently. The FT carried a piece about the development of autonomous cars – “Robotaxis: have Google and Amazon backed the wrong technology?” – paywalled. Since we already have the sort of taxi someone drives, the development effort goes into advancing the technology.

For the last few years Google/Waymo, Apple and others have sunk billions of dollars – yes, billions – into developing self-driving cars. And of course, we all know what we want from a car, even a self-driving car, so this is an engineering problem.

These cars now work, but there are a number of challenges to overcome before they achieve world dominance. Most of the challenges now are less to do with the technology and more to do with the market: customer acceptance, insurance, pricing, etc. Still, billions more are needed before any return can be achieved. In other words: Google etc. took the first approach, building all the pieces.

What is less well known, and what the FT writes about, is that another group of companies has taken the other approach. Component suppliers like Bosch and Magna, and tech companies like Mobileye (Intel), have been developing discrete technologies that can be incorporated into existing cars, which evolve towards the self-driving dream. Not only is this cheaper, but it is easier to market and to clear regulatory hurdles because humans are assisted in driving, not replaced. (Tesla is also in this camp, as they add more and more capabilities to their auto-pilot features and have been getting feedback from real customers for years.)

Now it seems the evolutionary approach may win out against the big clean sheet of paper. The race is not over yet, but the evolutionary suppliers are making money while the new designers are still burning cash. The evolutionary suppliers are integrating their tech into cars and getting regulatory approval, while the new designers have to navigate many regulators.

Engineers often object to the evolutionary model because: “we need to see the big picture”, “you need an architecture”, “you can’t evolve a skateboard into a car”. And indeed, one of the Google engineers is quoted in the FT as saying:

“Conventional wisdom would say that we’ll just take driver assistance systems and we’ll kind of push them and … over time, they’ll turn into self-driving cars … well that’s like saying that if I work really hard at jumping, one day I’ll be able to fly.”

Chris Urmson, 2015

When you consider the problem purely in engineering terms this rings true, but while one needs to respect engineering, it is not the only frame of reference. One needs to consider the commercial and marketing aspects, as well as customer acceptance and other factors. To give any single line of thought a privileged position is to expose yourself to risks from the others.

At the end of the day, as I have argued repeatedly: engineering is about creating solutions within a context, within constraints. To any given problem there are many potential solutions – many ways to slice the onion. The “best” solution is the solution which best fits those constraints.

The evolutionary approach allows you to get feedback sooner, which allows you to uncover those constraints sooner; that allows for course corrections, and it also means less time and money has been spent on solutions which don’t meet the constraints.


Subscribe to my blog newsletter and download Continuous Digital for free – normal price $9.99/£9.95/€9.95


What can be learned from studying long gone development practices?

Derek Jones from The Shape of Code

Current ideas about the best way of building a software system are heavily influenced by the ideas that captured the attention of previous generations of developers. Can anything of practical use be learned from studying long gone techniques for building software systems?

During the writing of my software engineering book, I was spending a lot of time researching the development techniques used during the twentieth century, and one day I suddenly realised that this was a waste of time. While early software developers tend to be eulogized today, the reality is that they were mostly people who had little idea what they were doing, and who, through personal competence or being in the right place at the right time, managed to produce something good enough. On the whole, twentieth century software development techniques are only of historical interest. Yes, some timeless development principles were discovered, and these can be integrated into today’s techniques (which may also turn out to be of their time).

My experience of software development in the late 1970s and 1980s is that there was rarely any connection between what management told the world about the development process, and how those reporting to the manager actually did the development.

If you are a manager in a world where software development is still very new, and you are given the job of managing the development of a software system, how do you go about it? A common approach is to apply the techniques that are already being used to run the manager’s organization. On a regular basis, managers came up with the idea of applying techniques from the science of industrial production (which is still happening today).

In the 1970s and 1980s there were usually very visible job hierarchies, and sharply defined roles. Organizations tended to use their existing job hierarchies and roles to create the structure for their software development employees. For years after I started work as a graduate, managers and secretaries were surprised to see me typing; secretaries typed, men did not type, and women developers fumed when they were treated like secretaries (because they had been seen typing).

The manual workers performed data entry and operated the computer (e.g., mounted tapes and looked after the printer). The junior staff often started with the job title programmer, or perhaps junior programmer, and there might be senior programmers; on paper these people wrote the code to implement the functionality specified by a systems analyst (or just analyst, or business analyst, perhaps with junior or senior added). Analysts did not write code, and programmers only coded the specification they were given, at least according to management.

Pay level was set by the position in the job hierarchy, with those higher up earning more than those below them, and job titles/roles were also mapped to positions in the hierarchy. This created, in theory, a direct correspondence between pay and job title/role. In practice, organizations wanted to keep their productive employees, and so were flexible about the correspondence between pay and title, e.g., during their annual review some people were more interested in the status provided by a job title, while others wanted more money and did not care about job titles. Add into this mix the fact that pay/title levels rarely matched up between organizations, and it soon became obvious to all that software job titles were a charade.

How should the people at the sharp end go about building a software system?

Structured programming was the widely cited technique in the 1970s. Consultants promoted their own variants, with Jackson structured programming being widely known in the UK, with regular courses and consultants offering to train staff. Today, structured programming appears remarkably simplistic, great for writing tiny programs (it has an academic pedigree), but not for anything larger than a thousand lines. Part of its appeal may have been this simplicity: many programs were small (because computer memory was measured in kilobytes) and management often thought that problems were simple (a recurring problem). There were a few adaptations that tried to address larger scale issues, e.g., Warnier/Orr structured programming.

The military were major employers of software developers in the 1960s and 1970s. In the US Work Breakdown Structure was mandated by the DOD for project development (for all projects, not just software), and in the UK we had MASCOT. These mandated development methodologies were created by committees, and have not been experimentally tested to be better/worse than any other approach.

I think the best management technique for successfully developing a software system in the 1970s and 1980s (and perhaps in the following decades), is based on being lucky enough to have a few very capable people, and then providing them with what is needed to get the job done while maintaining the fiction to upper management that the agreed bureaucratic plan is being followed.

There is one technique for producing a software system that rarely gets mentioned: keep paying for development until something good enough is delivered. Given the life-or-death need an organization might have for some software systems, paying what it takes may well have been a prevalent methodology during the early days of major software development.

To answer the question posed at the start of this post: what might be learned from a study of early software development techniques is the need for management to have lots of luck and to be flexible; funding is easier to obtain when managing a life-or-death project.
