Reading ‘Let over Lambda’ – thoughts on fundamentals and why I keep reading classic CS texts

Timo Geusch from The Lone C++ Coder's Blog

Another one for my computer science reading list for this year. I do try to work my way through at least one classic computer science book annually and picked up Let Over Lambda a few weeks ago. Colour one of the cats not impressed, but then again she’s got more free time than I do and has probably already read it.

Not massively impressed by yet another old book.

The first computer I owned

Derek Jones from The Shape of Code

The first computer I owned was a North Star Horizon. I bought it in kit form, which meant bags of capacitors, resistors, transistors, chips, printed circuit boards, etc, along with the circuit diagrams for each board. These all had to be soldered into the right holes, the chips socketed (no surface-mount soldering for such a low-volume system), and wires connected. I was amazed when the system booted the first time I powered it up; debugging with the very basic equipment I had would have been a nightmare. The only missing component was the power supply transformer, and a trip to the London-based supplier sorted that out. I saved a month’s salary by building the kit (which cost me 4 months’ salary, and I was one of the highest paid people in my circle).

North Star Horizon with cover removed.

The few individuals who bought a computer in the late 1970s bought either a Horizon or a Commodore Pet (which was more expensive, but came with an integrated monitor and keyboard). Computer ownership really started to take off when the BBC micro came along at the end of 1981, and could be bought for less than a month’s salary (at least for a white-collar worker).

My Horizon contained a Z80A clocked at 4 MHz, 32K of RAM, and two 5 1/4-inch floppy drives (each holding 360K; the Wikipedia article says the drives held 90K, but mine {according to the labels on the floppies, MD 525-10} are 40-track, 10-sector, double density). I later bought another 32K of memory; the system ROM sat at 56K and contained 4K of code, and various tricks allowed the 4K above 60K to be used (the consistent quality of the soldering on one of the boards below identifies the non-hand-built board).

North Star Horizon underside of boards.

The OS that came with the system was CP/M (renamed CP/M-80 when the Intel 8086 came along), which will be familiar to anybody used to working with early versions of MS-DOS.

As a fan of Pascal, my development environment of choice was UCSD Pascal. The C compiler of choice was BDS C.

Horizon owners are total computer people :-) An emulator, running under Linux and capable of running Horizon disk images, is available for those wanting a taste of being a Horizon owner. I didn’t see any mention of audio emulation in the documentation; clicks and whirrs from the floppy drive were a good way of monitoring compile progress without needing to look at the screen (not content with using our Horizons at home, another Horizon owner and I implemented a Horizon emulator in Fortran, running on the University’s Prime computers). I wonder how many Nobel-prize winners did their calculations on a Horizon?

The Horizon spec needs to be appreciated in the context of its time. When I worked in application support at the University of Surrey, users had a default file allocation of around 100K (memory is foggy). So being able to store stuff on a 360K floppy, which could be purchased in boxes of 10, was a big deal. The mainframes/minicomputers of the day were available with single-digit megabytes, but many previous-generation systems had under 100K of RAM, and there were lots of programs out there still running in 64K. In terms of cpu power, nearly all existing systems were multi-user, and a less powerful single-user cpu beats sharing a more powerful cpu with 10-100 people.

In terms of sheer weight, visual appearance and electrical clout, the Horizon power supply far exceeds those seen in today’s computers, which look tame by comparison (two of those capacitors are 4 inches tall):

North Star Horizon power supply.

My Horizon has been sitting in the garage for 32 years, and was tucked away in unused rooms for years before that. The main problem with finding out whether it still works is finding a device to connect to the 25-pin serial port. I have an old PC with a 9-pin serial port, but I have spent enough of my life fiddling around with serial-port cables and Kermit to be content trying a simpler approach. I connected the power supply and switched it on. There was a loud crack and a flash on the disk-controller board; probably a tantalum capacitor giving up the ghost (easy enough to replace). The primary floppy drive did spin up and shut down after a few seconds (as expected), but the internal floppy engagement arm (probably not its real name) does not swing free when I open the bay door (so I cannot insert a floppy).

I am hoping to find a home for it in a computer museum, and have emailed the two closest museums. If these museums are not interested, the first person to knock on my door can take it away, along with manuals and floppies.

Big Friendly GiantS – a.k.

a.k. from thus spake a.k.

In the previous post we saw how we could perform a univariate line search for a point that satisfies the Wolfe conditions, meaning that it is reasonably close to a minimum and takes a lot less work to find than the minimum itself. Line searches are used in a class of multivariate minimisation algorithms which iteratively choose directions in which to proceed; in particular, those that use approximations of the Hessian matrix of second partial derivatives of a function to do so, in much the same way that the Levenberg-Marquardt multivariate inversion algorithm uses a diagonal matrix in place of the sum of the products of its Hessian matrices for each element and the error in that element's current value. In this post we shall take a look at one of them.
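To make the Wolfe conditions a little more concrete, here is a minimal Python sketch. It is not code from the post; the function name, the constants c1 and c2 (conventional illustrative values), and the toy quadratic are all assumptions made purely for demonstration. It simply checks whether a candidate step length along a descent direction satisfies the sufficient-decrease and curvature conditions that a line search of this kind looks for.

```python
# Generic illustration of the two Wolfe conditions used by line searches in
# quasi-Newton style minimisers. A sketch only, not the post's implementation:
# c1 and c2 are conventional example values, and f/grad are a toy quadratic.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def step(x, p, alpha):
    return [xi + alpha * pi for xi, pi in zip(x, p)]

def satisfies_wolfe(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Check the sufficient-decrease (Armijo) and curvature conditions
    for a step of length alpha from x along the descent direction p."""
    fx = f(x)
    slope = dot(grad(x), p)          # directional derivative at x along p
    x_new = step(x, p, alpha)
    sufficient_decrease = f(x_new) <= fx + c1 * alpha * slope
    curvature = dot(grad(x_new), p) >= c2 * slope
    return sufficient_decrease and curvature

# Toy example: f(x, y) = x^2 + 3y^2, starting at (4, 1), stepping along the
# steepest-descent direction and trying a few candidate step lengths.
f = lambda v: v[0] ** 2 + 3 * v[1] ** 2
grad = lambda v: [2 * v[0], 6 * v[1]]
x0 = [4.0, 1.0]
p = [-g for g in grad(x0)]
for alpha in (1.0, 0.5, 0.25, 0.125, 0.0625):
    print(alpha, satisfies_wolfe(f, grad, x0, p, alpha))
```

Running it prints which of the trial step lengths would be acceptable to such a line search for the toy quadratic; the overly long first step fails the sufficient-decrease test, while the shorter ones pass both conditions.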

What is it with Business Agility?

Allan Kelly from Allan Kelly Associates

Top of my Slack channels is the Business Agility Institute, just below that is the old #NoProjects slack that sometimes comes to life. Recently someone on #NoProjects asked:

Q: What do you guys think about Business Agility?

My reply: Business Agility, bit like apple pie, how can one not be in favour?

Of course, what flavour of business agility is another question. Lots of people seem to use the words “business agility”, but I’m not sure there is a consensus on exactly what it is. I am a member, and supporter, of the Business Agility Institute, which was founded by Evan Leybourn, who also published a NoProjects book.

Evan and I were in regular communication while we were writing our books. We both saw the flaws in the project model and both arrived at the conclusion that, as the business world digitalises, business is never done and therefore technology is never done. In essence that is the genesis of Continuous Digital. While I wrote a book on the subject, Evan founded the Business Agility Institute.

Q: So whats your take or how you think business agility is different from no-projects? is people just rebranding stuff to BA now?

My reply: Business Agility is good; it makes sense to go “up” from software to the business. Now look at the things you might want from Business Agility:

▪ Quick to market

▪ Fast to deliver

▪ Responsive to customers

▪ Reactive to trends and changes

▪ Efficient/effective

▪ … add your own here…

Isn’t that what any business wants, whether you call it Business Agility or not? These are apple-pie things, hard to argue against, and if you read (almost) any management textbook from the last 30 years it says the same things.

These aren’t #NoProjects; that is a very specific critique of the project model. Some people may have believed that projects facilitated those things; however, what #NoProjects says is that the model is flawed, and if you want those things you need to find another way. For me that other way is Continuous Digital, which is why my presentations talk of #NoProjects evolution: it is not enough to say “projects don’t work”, one needs to suggest an alternative.

So how is Business Agility different?

First off: even if the things Business Agility offers aren’t new, the rise of Business Agility is a new opportunity to push an agenda, which is good; sometimes things need to be “rebranded” as new to get attention. It shouldn’t be that way, but there you are.

Second, the methods have changed: there are two forces at work here, Digital and “Millennials”.

Digital tools – driven by Moore’s Law and the falling price of CPU power – have changed the way business works. This means that the thing executives often want to avoid, software development, is now the powerhouse of your business.

Hence, “the business is the technology and the technology is the business.” Think Uber: how do you separate Uber’s technology from Uber the taxi company?

This is why I have taken to saying “IT is dead, IT’s Digital”. Information technology in business is no longer a cost centre and no longer “just” an enabler of business services; Digital means it is the business, it is where innovation happens, and it is a driver of revenue and profitability.

That also means “Agile Methods” (a la software engineering) come into focus, because a) you need to create software and b) as digital tools permeate every aspect of business life, agile becomes more applicable.

Agile methods are the processes that maximise the benefits of digital tools. Agile started with software engineers (and friends) because they had early access to digital tools (email, IM, VOIP, web, wikis, etc.) and were able to create the “missing” tools themselves.

Millennials: those born around 2000 are said to want more meaning, purpose and autonomy in their work. Personally I’ve always wanted these things, and I think everyone does. Whether I am right or not, this is a trend which has been running for a while; millennials exhibit it most clearly. (Plus the pandemic adds to this.)

This too fits with agile, because agile methods recognise the people aspect: people in agile are not plug-compatible (although we do encourage a more team-based approach). Agile considers motivation and recognises those doing the work as experts in their own right, so it is better placed to address that need.

Hence, and this is a point I’m making in my “Reawakening Agile with OKRs” presentation which I’m delivering this year, we need to think more about purpose-driven development – PDD. Our software needs purpose so that our people have purpose.

Ultimately, while business agility might not be anything new, there is a greater need for it.


Subscribe to my blog newsletter and download Continuous Digital for free – normal price $9.99/£9.95/€9.95


Announcing I-DUNNO 1.0 and web-i-dunno

Andy Balaam from Andy Balaam's Blog

It’s hard to believe it’s already a year since the release of RFC 8771 (The Internationalized Deliberately Unreadable Network NOtation), which, for me at least, made me think about IP addresses in a whole new way.

So, it seems fitting for the anniversary to be able to release proper support for this standard in the Rust universe, with Rust I-DUNNO version 1.0 released. You can find it on Rust’s crates.io at crates.io/crates/i-dunno and the API documentation is at docs.rs/i-dunno.

Also, because for a standard like this to receive the wide adoption it deserves, it’s important that young people have a chance to interact with it, playing with encodings to get a real feel for what it’s like to use in practice, I’m proud to announce the I-DUNNO Creator. On that page you can enter an IP address (IPv4 or IPv6) and see it transformed immediately into a candidate I-DUNNO, with live information about the Confusion Level of the I-DUNNO, as specified in the standard. You can find the source code for the I-DUNNO Creator in the web-i-dunno repo.

The I-DUNNO Creator is built on the Rust package, making use of Rust’s highly-developed WASM support to compile the code into a form that works naturally in a web browser.

I hope that by offering both systems programmers and the young people of today and their new-fangled web sites the opportunity to create I-DUNNOs, I can contribute a little to spreading the word about deliberately unreadable notations to new audiences.

Note: the current implementation is limited to generating only I-DUNNOs with no padding bits. As specified in the standard, I-DUNNOs may end with arbitrary padding, and adding this functionality to rust-i-dunno is left as an exercise for the reader: merge requests welcome!

Linux has a sleeper agent working as a core developer

Derek Jones from The Shape of Code

The latest news from Wikileaks, that GCHQ, the UK’s signals intelligence agency, has a sleeper agent working as a trusted member of the Linux kernel core development team, should not come as a surprise to anybody.

The Linux kernel is embedded as a core component inside many critical systems; the kind of systems that intelligence agencies and other organizations would like full access to.

The open nature of Linux kernel development makes it very difficult to surreptitiously introduce a hidden vulnerability. A friendly gatekeeper on the core developer team is needed.

In the Open source world, trust is built up through years of dedicated work. Funding the right developer to spend many years doing solid work on the Linux kernel is a worthwhile investment. Such a person eventually reaches a position where the updates they claim to have scrutinized are accepted into the codebase without a second look.

The need for the agent to maintain plausible deniability requires an arm’s length approach, and the GCHQ team made a wise choice in targeting device drivers as cost-effective propagators of hidden weaknesses.

Writing a device driver requires the kind of specific know-how that is not widely available. A device driver written by somebody new to the kernel world is not suspicious. The sleeper agent has deniability in that they did not write the code; they simply ‘failed’ to spot a well-hidden vulnerability.

Lack of know-how means that the software for a new device is often created by cutting-and-pasting code from an existing driver for a similar chip set, i.e., once a vulnerability has been inserted it is likely to propagate.

Perhaps it’s my lack of knowledge of clandestine control of third-party computers, but the leak reveals the GCHQ team as having an obsession with state machines controlled by pseudo-random inputs.

With their background in code breaking, I appreciate that GCHQ have lots of expertise to throw at doing clever things with pseudo-random numbers (other than introducing subtle flaws in public key encryption).

What about the possibility of introducing non-random patterns in randomised storage layout algorithms (he says waving his clueless arms around)?

Which of the core developers is most likely to be the sleeper agent? His codename, Basil Brush, suggests somebody from the boomer generation, or perhaps reflects some personal characteristic; it might also be intended to distract.

What steps need to be taken to prevent more sleeper agents joining the Linux kernel development team?

Requiring developers to provide a record of their financial history (say, 10 years’ worth) before being accepted as a core developer will rule out many capable people. Also, this approach does not filter out ideologically motivated developers.

The world may have to accept that intelligence agencies are the future of major funding for widely used Open source projects.

Automatically filling in the UK COVID test results page with Selenium IDE

Andy Balaam from Andy Balaam's Blog

Lots of people are filling in the extremely detailed UK government COVID test result page twice every week.

It asks you to fill in a very large list of details, most of which are the same every time, but it doesn’t remember what you typed last time.

I didn’t want to write a Python script or similar to enter my results, because I wanted to check I’d done it right, and because there is a captcha at the end that is clearly intended to prevent automation like that.

However, with a Selenium IDE script, I can drive my browser, watching what it does and checking the input, and manually filling in the captcha and final double-check page.

In case it’s helpful, here is the script I created: report-covid-test.side.
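If you haven’t looked inside a .side file before, it is just JSON describing a list of browser commands. The fragment below is only a rough sketch of the shape of one form-filling step: the command/target/value structure is Selenium IDE’s, but the element ids, values and test name here are made-up placeholders, not the real form’s fields or the contents of my script.

```json
{
  "tests": [
    {
      "name": "report-covid-test",
      "commands": [
        { "command": "open",  "target": "/", "value": "",
          "comment": "placeholder URL path, not the real page" },
        { "command": "type",  "target": "id=nhs-number", "value": "ENTER NHS NUMBER HERE",
          "comment": "placeholder element id and value" },
        { "command": "click", "target": "id=continue", "value": "",
          "comment": "placeholder element id" }
      ]
    }
  ]
}
```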

You can create one for each child if you have several, filling in school name, NHS number, names, date of birth etc. in the script and re-using it, modifying it each time to enter the bar code number for the test itself.

To use it you’ll need the Selenium IDE plugin for Firefox, or the Selenium IDE plugin for another browser.

I’d recommend loading this script into the Selenium IDE plugin in Firefox, looking through it and editing the values that say “ENTER…HERE”, then clicking Run Script and watching it fill in values.

It doesn’t actually submit the result, so you can always check and modify it manually if something doesn’t work out, before clicking the last couple of buttons to submit.

The aura of software quality

Derek Jones from The Shape of Code

“Bad money drives out good money” is a financial adage. The corresponding research adage might be “research hyperbole incentivizes more hyperbole”.

Software quality appears to be the most commonly studied problem in software engineering. The reason for this is that use of the term software quality imbues what is said with an aura of relevance; all that is needed is a willingness to assert that some measured attribute is a metric for software quality.

Using the term “software quality” to appear relevant is not limited to researchers; consultants, tool vendors and marketers are equally willing to attach “software quality” to whatever they are selling.

When reading a research paper, I usually hit the delete button as soon as the authors start talking about software quality. I get very irritated when what looks like an interesting paper starts spewing “software quality” nonsense.

The paper: A Family of Experiments on Test-Driven Development commits the ‘crime’ of framing what looks like an interesting experiment in terms of software quality. Because it looked interesting, and the data was available, I endured 12 pages of software quality marketing nonsense to find out how the authors had defined this term (the percentage of tests passed), and get to the point where I could start learning about the experiments.

While the experiments were interesting, a multi-site effort and just the kind of thing others should be doing, the results were hardly earth-shattering (the experimental setup was dictated by the practicalities of obtaining the data). I understand why the authors felt the need for some hyperbole (but 12 pages?). I hope they continue with this work (with less hyperbole).

Anybody skimming the software engineering research literature will be dazed by the number and range of factors appearing to play a major role in software quality. Once they realize that “software quality” is actually a meaningless marketing term, they are back to knowing nothing. Every paper has to be read to figure out what definition is being used for “software quality”; reading a paper’s abstract does not provide the needed information. This is a nightmare for anybody seeking some understanding of what is known about software engineering.

When writing my evidence-based software engineering book I was very careful to stay away from the term “software quality” (one paper on perceptions of software product quality is discussed, and there are around 35 occurrences of the word “quality”).

People in industry are very interested in software quality, and sometimes they have the confusing experience of talking to me about it. My first response, on being asked about software quality, is to ask what the questioner means by software quality. After letting them fumble around for 10 seconds or so, trying to articulate an answer, I offer several possibilities (which they are often not happy with). Then I explain how “software quality” is a meaningless marketing term. This leaves them confused and unhappy. People have a yearning for software quality which makes them easy prey for the snake-oil salesmen.