Historians of computing

Derek Jones from The Shape of Code

Who are the historians of computing? The criterion I used for deciding who qualifies (for this post) is that the person has written multiple papers on the subject over a period much longer than their PhD thesis (several people have written history-of-computing PhDs on some aspect of the field and then gone on to research other areas).

Maarten Bullynck. An academic who is a historian of mathematics and has become interested in software; use HAL to find his papers, e.g., What is an Operating System? A historical investigation (1954–1964).

Martin Campbell-Kelly. An academic who has spent his research career investigating computing history, primarily with a software orientation. Has written extensively on a wide variety of software topics. His book “From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry” is still on my pile of books waiting to be read (but other historians cite it extensively). His thesis, “Foundations of computer programming in Britain, 1945-55”, can be freely downloaded from the British Library; registration required.

James W. Cortada. Ex-IBM (1974-2012) and now working at the Charles Babbage Institute. Has written extensively on the history of computing, with more of a hardware than software orientation. He has written many detail-oriented books and must hold pole position for the most extensive collection of material to cite (his end notes are very extensive). His “The Digital Flood: The Diffusion of Information Technology Across the U.S., Europe, and Asia” is likely to be the definitive work on the subject for some time to come. For me this book is spoiled by the author toeing the company line in his analysis of the IBM antitrust trial; my analysis of the work Cortada cites reaches the opposite conclusion.

Nathan Ensmenger. An academic; more of a people person than hardware/software. His paper Letting the Computer Boys Take Over contains many interesting insights. His book “The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise” is a combination of topics that have been figured out and backed with references, and topics still being figured out (I wish he would not cite Datamation, a trade mag back in the day, so often).

Michael S. Mahoney. An academic who is sadly no longer with us. A historian of mathematics before becoming primarily involved with software.

Jeffrey R. Yost. An academic. I have only read his book “Making IT Work: A history of the computer services industry”, which was really a collection of vignettes about people, companies and events; it needs some analysis. Must try to track down some of his papers (which are, sadly, not available via his web page).

Who have I missed? This list is derived from papers/books I have encountered while working on a book, not an active search for historians. Suggestions welcome.

National Apprenticeship Week

Paul Grenyer from Paul Grenyer

Seeing as it’s been National Apprenticeship Week this week, we thought we would shine a light on our apprentices, past and present. Naked Element would be a duller place without them and the valuable work they do!

We’ve had three apprentices in total, Lewis, Rain and Jack, and they’ve all been invaluable to our business. Lewis spent his year-long software development apprenticeship with us, before staying on a while longer as a full-time employee. He headed User Story workshops, held meetings with clients and even managed to join in with some of the social side of Naked Element too! Lewis got a lot out of his time with us, saying "an apprenticeship is a great way to get your foot in the door of an industry, gain some excellent skills and first-hand experience in a job you may want to turn into a career". Lewis decided to be an apprentice because he felt that a more hands-on approach to learning would suit him better than studying full time. At the time he hoped he would be working in the US in the near future, but he has since decided to settle down at university and is due to begin a Computer Science degree at the UEA later this year to bolster his industry experience with a formal qualification.

Rain joined us as an administrative apprentice for just over a year, keeping us organised and the company running smoothly. A natural networker and often the first face to greet clients, Rain was an asset to Naked Element: she helped start the conversation about software and business. From the professional presentation in her initial interview to managing conferences, she impressed us all. She took her experience with Naked Element and became the executive PA to the CEO of Apple Helicopters!

Our current apprentice is Jack, who is part-way through his software apprenticeship. We’ve been so impressed with Jack that we’re hoping he will stay on after his course has finished to be a software developer full time! He’s a good problem solver, helping Naked Element deliver projects more cost-effectively, and he’s equally enthusiastic at tech events when he represents the company.

Our CEO Paul says "I believe that apprentices are an excellent way for the predominantly small tech companies in the TechEast region to grow and a way to help fill the skills gap we have here. They are also a great way to support young people in our region to get industry experience." Naked Element has found all three apprentices invaluable to supporting and growing our business and we’re very proud of how far they’ve come!

Visual Lint 6.5 has been released

Products, the Universe and Everything from Products, the Universe and Everything

The first public build of Visual Lint 6.5 has just been uploaded to our website.

Visual Lint 6.5 is the second Visual Lint 6.x release, superseding Visual Lint 6.0. As a minor update, it will also accept existing per-user Visual Lint 6.0 licences; Visual Lint 1.x, 2.x, 3.x, 4.x and 5.x per-user licences must however be upgraded to work with this version.

Full details of the changes in this version are as follows:

Host Environments:
  • Removed the (deprecated since Visual Lint 5.0) ability of the Visual Studio plug-in to load within Microsoft Visual Studio 6.0 and eMbedded Visual C++ 4.0. Projects for these environments can of course still be analysed in the standalone VisualLintGui and VisualLintConsole applications.
Analysis Tools:
  • Modifications to support PC-lint Plus PCH analysis, which creates object files (.lpph or .lpch) in the project working folder rather than (as was the case with PC-lint 9.0) in the folder containing the PCH header file. This should affect only projects where the PCH header file is contained in a different folder from the project file.
  • PC-lint project indirect (project.lnt) files are now automatically recreated if a different version of the analysis tool is in use.
Installation:
  • The installer now prompts for affected applications (Visual Studio, Atmel Studio, AVR Studio, Eclipse, VisualLintConsole and VisualLintGui) to be closed before installation can proceed.
  • The installer now installs VSIX extensions to Visual Studio 2017 and Atmel [AVR] Studio silently.
  • Revised the order of registration of the Visual Studio plug-in with each version of Visual Studio so that the newest versions are now registered first.
  • Uninstallation no longer incorrectly runs "Configuring Visual Studio..." steps if the VS plug-in is not selected for installation.
  • The "Installing Visual Lint" progress bar is now updated while Visual Studio, Atmel Studio and Eclipse installations are being registered.
  • Improved the logging of VSIX extension installation/uninstallation.
User Interface:
  • The Analysis Status View now supports text filters of the form "Project/File".
  • Added a new Window List Dialog to VisualLintGui to display details of the open MDI child windows, and allow selected windows to be activated, saved or closed as a group.
  • Widened the About Box slightly.
Reports:
  • Replaced the table sort code in generated HTML reports with a simpler, more robust implementation from https://www.kryogenix.org/code/browser/sorttable/.
  • Replaced the Teechart generated Issue Count by Category/ID charts in HTML reports with Javascript ones.
Bug Fixes:

Download Visual Lint 6.5.0.293


Building a regression model is easy and informative

Derek Jones from The Shape of Code

Running an experiment is very time-consuming. I am always surprised that people put so much effort into gathering the data and then spend so little effort analyzing it.

The Computer Language Benchmarks Game looks like a fun benchmark; it compares the performance of 27 languages using various toy benchmarks (they could not be said to be representative of real programs). And, yes, lots of boxplots and tables of numbers; great eye-candy, but what do they all mean?

The authors, like good experimentalists, make all their data available. So, what analysis should they have done?

A regression model is the obvious choice, and the following three lines of R (four lines if you count the blank line) build one, providing lots of interesting performance information:

cl=read.csv("Computer-Language_u64q.csv.bz2", as.is=TRUE)

cl_mod=glm(log(cpu.s.) ~ name+lang, data=cl)
summary(cl_mod)

The following is a cut down version of the output from the call to summary, which summarizes the model built by the call to glm.

                    Estimate Std. Error t value Pr(>|t|)    
(Intercept)         1.299246   0.176825   7.348 2.28e-13 ***
namechameneosredux  0.499162   0.149960   3.329 0.000878 ***
namefannkuchredux   1.407449   0.111391  12.635  < 2e-16 ***
namefasta           0.002456   0.106468   0.023 0.981595    
namemeteor         -2.083929   0.150525 -13.844  < 2e-16 ***

langclojure         1.209892   0.208456   5.804 6.79e-09 ***
langcsharpcore      0.524843   0.185627   2.827 0.004708 ** 
langdart            1.039288   0.248837   4.177 3.00e-05 ***
langgcc            -0.297268   0.187818  -1.583 0.113531 
langocaml          -0.892398   0.232203  -3.843 0.000123 *** 
  
    Null deviance: 29610  on 6283  degrees of freedom
Residual deviance: 22120  on 6238  degrees of freedom

What do all these numbers mean?

We start with glm's first argument, which is a specification of the regression model we are trying to fit: log(cpu.s.) ~ name+lang

cpu.s. is cpu time, name is the name of the program and lang is the language. I found these by looking at the column names in the data file. There are other columns in the data, but I am running in quick & simple mode. As a first stab, I thought cpu time would depend on the program and language. Why take the log of the cpu time? Well, the model fitted using cpu time was very poor; the values range over several orders of magnitude and logarithms are a way of compressing this range (and the fitted model was much better).

The model fitted is:

cpu.s. = e^{Intercept+name+lang}, or cpu.s. = e^{Intercept}*e^{name}*e^{lang}

Plugging in some numbers, to predict the cpu time used by say the program chameneosredux written in the language clojure, we get: cpu.s. = e^{1.3}*e^{0.5}*e^{1.2}=20.1 (values taken from the first column of numbers above).
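This back-of-the-envelope prediction can be checked with a couple of lines of code (a sketch in Python rather than R; the coefficient values are those read off the summary output above):

```python
import math

# Coefficients from the fitted model's summary output:
# intercept 1.3, namechameneosredux 0.5, langclojure 1.2.
# Predicted cpu time is e^(intercept + name + lang).
pred = math.exp(1.3 + 0.5 + 1.2)  # equivalently e^1.3 * e^0.5 * e^1.2

print(round(pred, 1))  # roughly 20.1 seconds
```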

This model assumes there is no interaction between program and language. In practice some languages might perform better/worse on some programs. Changing the first argument of glm to: log(cpu.s.) ~ name*lang, adds an interaction term, which does produce a better fitting model (but it's too complicated for a short blog post; another option is to build a mixed-model by using lmer from the lme4 package).

We can compare the relative cpu time used by different languages. The multiplication factor for clojure is e^{1.2}=3.3, while for ocaml it is e^{-0.9}=0.4. So clojure consumes 8.2 times as much cpu time as ocaml.

How accurate are these values, from the fitted regression model?

The second column of numbers in the summary output lists the estimated standard error of the values in the first column. So the clojure value is actually e^{1.2 ± (0.2*1.96)}, i.e., between 2.2 and 4.9 (the multiplication by 1.96 is used to give a 95% confidence interval); the ocaml values are e^{-0.9 ± (0.2*1.96)}, between 0.3 and 0.6.
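The interval arithmetic can be sketched as follows (again in Python; the estimate and standard error are the values quoted from the summary output, and the function name is mine, not part of any library):

```python
import math

def ci_factor(estimate, std_error, z=1.96):
    """Lower and upper multiplicative factors for a 95% confidence
    interval on a coefficient fitted on a log scale."""
    return (math.exp(estimate - z * std_error),
            math.exp(estimate + z * std_error))

lo, hi = ci_factor(1.2, 0.2)  # clojure: roughly 2.2 to 4.9
```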

The fourth column of numbers is the p-value for the fitted parameter. A value lower than 0.05 is a common criterion, so there are question marks over the fit for the program fasta and the language gcc. In fact many of the compiled languages have high p-values; perhaps they ran so fast that a large percentage of start-up/close-down time got included in their numbers. Something for the people running the benchmark to investigate.

Isn't it easy to get interesting numbers by building a regression model? It took me 10 minutes (OK, I spend a lot of time fitting models). After spending many hours/days gathering data, spending a little more time learning to build simple regression models is well worth the effort.

Product Owners need 4 things

Allan Kelly from Allan Kelly Associates


To be an effective Product Owner – and that includes product managers and business analysts who are nominating work for teams to do – you need at least four things. You may well need more than these four but these are common across all teams and domains.

  1. Skills and experience

There is more to being a Product Owner than simply writing user stories and prioritising a backlog. Yes, you need to know how to work with a development team and how to work in an Agile-style process. Yes, you need to be able to write user stories and acceptance criteria, perhaps BDD-style cucumbers too; yes, you need to be able to manage a backlog, prioritise it and take part in planning meetings.

But how do you know what should be a priority?
How do you know what will deliver value? And please customers? Satisfy stakeholders?

Importantly Product Owners need to be able to do the work behind the backlog.

Product Owners need to meet people, have the conversations, do the analysis and thinking behind those things. Any idiot can pick random items from a backlog but it takes skills and experience to maximise value.

Product Owners need to be able to identify users, segment customers, interview people, understand their needs and jobs to be done. They need to know when to run experiments and when to turn to research journals and market studies. And that might mean they need data analysis skills too.

If the product is going to sell as a commercial product you will need wider product management skills. While if your product is for internal use you need more business analysis skills. And product managers will benefit from knowing about business analysts and business analysts will benefit from knowing about product management.

You may also need specialist domain knowledge – you might need to be a subject matter expert in your own right, or you might become an SME in given time.

Some understanding of business strategy, finance, marketing, process analysis and design, user experience design and more.

Don’t underestimate the skills and experience you need to be an effective Product Owner.

  2. Authority

At the very least a Product Owner needs the authority to nominate the work the team are going to do for the next two weeks. They need the authority to choose items from a backlog and ask the team to do them. They need the authority not to have their decisions overridden on a regular basis. (OK, it happens occasionally.)

As a general rule the more authority the Product Owner has the more effective they are going to be in their role.

The organization may confer that authority but the team need to recognise and accept it too.

I’ve seen many Product Owners who, while they have the authority to nominate work for a team, don’t have the authority to throw things out of the backlog. When the only way for a story to leave the backlog is for it to be developed, leaving is very expensive. This leads to constipated backlogs that are stuffed full of worthless rubbish and where one can’t see the wood for the trees.

If the Product Owner doesn’t have sufficient authority then either they need to borrow some or there is going to be trouble.

  3. Legitimacy

Legitimacy is different from authority. Legitimacy is about being seen as the right person, the bonafide person to exercise authority and do the background work to find out what they need to find out in order to make those decisions.

Legitimacy means the Product Owner can go and meet customers if they want. And it means that they will get their expenses paid.

Legitimacy means that nobody else is trying to fill the Product Owner role or undermine them. In particular it means the team respect the Product Owner and trust them to make the right calls. Most of all they accept that once in a while – hopefully not too often – the Product Owner will have to say “I accept technologically X is the right thing but commercially it must be Y; full ahead and damn the torpedoes.”

It can be hard for a Product Owner to fill their role if the team believe a senior developer – or anyone else – should be managing the backlog and prioritising work to do.

  4. Time

Finally, and probably the most difficult… Product Owners need time to do their work.

They need time to meet customers and reflect on those encounters.

They need time to work-the-backlog, value stories, weed out expired or valueless stories, think about the product vision, talk to stakeholders and more senior people, and then ponder what happens next.

Time to evaluate what has been delivered and see if it is delivering the expected value. Time to understand whether that which has been delivered is generating more or less value than expected. Time to feedback those findings into future work: to recalibrate expected values and priorities, generate more work or invalidate other work.

Product Owners need time to look at competitor products and consider alternatives – if only to steal ideas!

They need time to work with the technical team: have conversations about stories, expand on acceptance criteria, review work in progress, perhaps test completed features and socialise with the team.

They also need time to enhance their own skills and learn more about the domain.

And if they don’t have the time to do this?

Without time they will rush into planning meetings and say “I’ve been so busy, I haven’t looked at the backlog this week, just bear with me while I choose some stories…”

More often than not they will wing it, substituting opinion and guesswork for solid analysis, facts and data. They overlook competition and fail to listen to the team and other managers.

And oh yes, they need time for their own lives and family.

I sometimes think that only superhumans need apply for a Product Owner role, or perhaps many Product Owners are set up to fail from day one. Yet the role is so important.

I plan to explore this topic some more in the next few posts.

The post Product Owners need 4 things appeared first on Allan Kelly Associates.

A Decent Borel Code – a.k.

a.k. from thus spake a.k.

A few posts ago we took a look at how we might implement various operations on sets represented as sorted arrays, such as the union, being the set of every element that is in either of two sets, and the intersection, being the set of every element that is in both of them, which we implemented with ak.setUnion and ak.setIntersection respectively.
Such arrays are necessarily both finite and discrete and so cannot represent continuous subsets of the real numbers such as intervals, which contain every real number within a given range. Of particular interest are unions of countable sets of intervals Ii, known as Borel sets, and so it's worth adding a type to the ak library to represent them.
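The merge-style operations on sorted arrays can be sketched as follows (a minimal illustration in Python; the ak library itself is JavaScript, and these function names are mine rather than the ak API):

```python
def set_union(a, b):
    """Union of two sets represented as sorted arrays, in the style
    of ak.setUnion: walk both arrays in step, emitting each element once."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            out.append(a[i]); i += 1
        elif b[j] < a[i]:
            out.append(b[j]); j += 1
        else:                       # element present in both sets
            out.append(a[i]); i += 1; j += 1
    out.extend(a[i:])               # append whichever tail remains
    out.extend(b[j:])
    return out

def set_intersection(a, b):
    """Intersection in the style of ak.setIntersection: advance past
    unmatched elements, keeping only those present in both arrays."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif b[j] < a[i]:
            j += 1
        else:
            out.append(a[i]); i += 1; j += 1
    return out
```

For example, `set_union([1, 3, 5], [2, 3, 6])` yields `[1, 2, 3, 5, 6]` and `set_intersection([1, 3, 5], [2, 3, 6])` yields `[3]`. No such merge can enumerate an interval of reals, which is why a separate representation for Borel sets is needed.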