My story, my why

AllanAdmin from Allan Kelly Associates

I thought I’d open 2021 with a personal story of how I got to where I am today (no, I’m not in San Francisco, although that is the Golden Gate in the picture).

Allan Kelly on the north side of the Golden Gate bridge, 2001.

I started programming when I was 12 (a ZX81, then a BBC Micro) and had a very successful career into my 30s – including a spell in California. Increasingly I found the code was not the challenge: I could make the code do what I wanted. The problem was the way we were managed, or mismanaged, the things we were asked to do and the way we were organised.

So began my journey into “management”. Determined to be a better manager than those I had worked for, I took myself back to school. During a year in business school I learned a lot of good stuff: I discovered “organisational learning” and I reconnected with my dyslexia.

“Agile” was just breaking at the time and in agile I saw the same ethos of learning I was getting so excited about. The reports of agile teams I read described the best aspects of the developments I had worked on. For me, managing software delivery and enhancing agile are the same thing.

My mission became to help my younger self: help technologists deliver successful products and enjoy satisfying work. Most of what I do falls under the “agile” banner, but really it is about creating the processes and environments where people can learn, thrive and excel.

When people are getting satisfaction from their work delivering great products, businesses succeed and grow. And as software has come to underpin every digital initiative, my work has expanded.


For my latest blog posts, giveaways and special offers on books and training, subscribe to my newsletter – and as a thank you, download my Project Myopia eBook for free.

The post My story, my why appeared first on Allan Kelly Associates.

Streaming video with Owncast on a free Oracle Cloud computer

Andy Balaam from Andy Balaam's Blog

I just streamed about 40 minutes of me playing Trials Fusion using Owncast. Owncast is a self-hosted alternative to streaming services like Twitch and YouTube live.

Normally, you would need to pay for a computer to self-host it on. Owncast suggests this will cost about $5/month.

But Oracle Cloud has an “Always Free” tier that includes a “Compute Instance” (a virtual machine running Linux) that is capable of running Owncast.

Here’s how I did it:

Register for Oracle Cloud

This was probably the worst bit.

I went to oraclecloud.com and clicked “Sign up for free cloud tier”. It didn’t work in Firefox(!) so I had to use Chromium.

I had to enter my name, address, email address, phone number and credit card details. The email was verified, the phone number was verified (with a text message), and the credit card was verified (with a real transaction), so there was no getting around any of it.

They promise that they won’t charge my card. I’ll let you know if I discover differently.

Create a Compute Instance

Once I was logged in to the Oracle “console” (web site), I clicked the burger menu in the top left, chose “Compute” and then “Instances” to create a new instance. I followed all the default settings (including using the default “image”, which meant my instance was running Oracle Linux, which I think is similar to Red Hat), and when I got to the SSH keys part, I supplied the public key of my existing SSH key pair. Read the docs there if you don’t have one of these.

Once that was done and I had waited for the instance to be created and started, I was able to SSH in to my instance using the username opc and the Public IP Address listed:

ssh opc@PUBLIC_IP

(Note: here and below, if I say “PUBLIC_IP”, I mean the IP address listed in the information about your compute instance. It should be a list of four numbers separated by dots.)

Allow connecting to the instance on different ports

Owncast listens for HTTP connections on port 8080, and RTMP streams on 1935, so I needed to do two things to make that work.

Modify the Security List to add Ingress Rules

  • On the information about my instance, I clicked on the name of the Subnet (under Primary VNIC).
  • In the subnet, I clicked the name of the Security List (“Default Security List for …”) in the Security Lists list.
  • In the Security List I clicked Add Ingress Rules and entered:
    Stateless: unchecked
    Source Type: CIDR
    Source CIDR: 0.0.0.0/0
    IP Protocol: TCP
    Source Port Range: (blank)
    Destination Port Range: 8080
    Description: (blank)

    and then clicked Add Ingress Rules to create the rule.

  • I then added another Ingress Rule that was identical, except Destination Port Range was 1935.

Allow ports 8080 and 1935 on the instance’s own firewall

It took me a long time to figure out, but it turns out that the Oracle Linux running on the Compute Instance has its own firewall. Eventually, thanks to a blog post by meinside (When Oracle Cloud’s Ubuntu instance doesn’t accept connections to ports other than 22) and some Oracle docs on ways to secure resources, I found that I needed to SSH in to the machine (as shown above) and run these commands:

sudo firewall-cmd --zone=public --permanent --add-port=8080/tcp   # Owncast's web interface
sudo firewall-cmd --zone=public --permanent --add-port=1935/tcp   # RTMP stream ingest
sudo firewall-cmd --reload                                        # apply the new rules now
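
To double-check that the rules took effect, firewalld can list the open ports (a quick sanity check, not a required step):

sudo firewall-cmd --zone=public --list-ports

which should print 8080/tcp 1935/tcp.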

Now I was able to connect to the services I ran on the machine on those ports.

Install Owncast

The Owncast install was incredibly easy. I just followed the instructions at Owncast Quickstart. I SSH’d in to the instance as before, and ran:

curl -s https://owncast.online/install.sh | bash

and then edited the file owncast/config.yaml to have a custom stream key in it. You can do that by typing:

nano owncast/config.yaml

There is information about this file at: owncast.online/docs/configuration.
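
For reference, the relevant part of my config.yaml looked roughly like this – a sketch only, since the exact key names can differ between Owncast versions (check the configuration docs above for yours):

videoSettings:
  streamingKey: MY_SECRET_KEY   # replace with your own secret value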

Run Owncast

I ran the service like this:

cd owncast
./owncast

In future, if I want to leave it running, I may run it inside screen, or even use systemd or similar.
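
For example, screen can keep Owncast running after I log out (a sketch; the session name owncast is arbitrary):

cd owncast
screen -dmS owncast ./owncast

and screen -r owncast reattaches to it later.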

Open the web site

I could now see the web site by typing this into my browser’s address bar:

http://PUBLIC_IP:8080

(Where PUBLIC_IP is the Public IP copied from the Instance info as before.)

Stream some video

Finally, in OBS’s Settings I chose the Stream section and entered:

Service: Custom...
Server: rtmp://PUBLIC_IP/live
Stream key: STREAM_KEY

Where “STREAM_KEY” means the stream key I added to config.yaml earlier.

Now, when I clicked “Start Streaming” in OBS, my stream appeared on the web site!

Costs and limits

Oracle stated during sign-up that I would not be charged unless I explicitly chose to use a different tier.

The Compute Instance is part of the “Always Free” tier, so in theory it should stay up and working.

However, if you use lots of resources (which streaming for a long time probably does), I would expect services to be throttled and/or stopped completely. I have no idea whether they will allow enough resources for regular streaming, or whether this is all a waste of time. We shall see.

Pinephone update

Andy Balaam from Andy Balaam's Blog

I got a Pinephone for Christmas!

Here is a quick summary of my experience with it. (Originally published on Mastodon.)


Update on the pinephone as promised.

I love it, but I would definitely not recommend expecting to use it as your actual phone.

I have the Manjaro Phosh edition. Phosh is GNOME customised for mobile.

It turns on, you can unlock it, and you get a launcher. It has apps, and some of them work.

Firefox works really well. I can use it for Youtube and loads of other sites. I installed uBlock Origin, and it works.

Adding my Nextcloud config to Phosh seamlessly gave me Calendar, Contact and TODO list apps working, with my data in them.

The Maps app found me easily via GPS. I could bring up directions by entering a from and to, but it didn't seem to want to guide me via GPS.

Several apps don't fit properly on screen, and there doesn't seem to be a way to scroll or move the windows.

The camera technically works but the picture looks terrible (squashed, wibbly and blue-coloured).

Scrolling around on the launcher updates at about 5-10 fps, which is fine but would put many people off.

Many of the apps available to install in the Software app don't really work. I assume the list of apps is the standard for GNOME or Manjaro, so many are not adapted for phones.

I _love_ the fact that all the work that has been put into desktop Linux can be re-used on phones. Why wasn't it always this way?

It's great to be able to buy hardware that is specifically designed to run properly free software.

The Terminal app works nicely and presents a keyboard with extra keys that you need in a terminal.

The settings app works nicely.

My biggest frustration was not being able to find software in the Software app that worked nicely.

I was looking for a Youtube app that protected my privacy. On Android I use NewPipe Legacy. On desktop I use Freetube. I couldn't find Freetube in Software. I tried Minitube but it was unusable (window didn't fit).

I haven't tried installing software from the command line. Maybe I can find (or build) Freetube via a Manjaro repo?

Or maybe I should investigate NewPipe Legacy via anbox, although that seems to miss the point a little :-)

Is your program a function or a service?

Andy Balaam from Andy Balaam's Blog

Maybe everyone knows this already, but for my own clarity, I think there are really two types of computer program:

  • A function: something that you run, and get back a result. Example: a command-line tool like ls
  • A service: something that sits around waiting for things to happen, and responds to them. Example: a web server

How functions work

Programs that are essentially functions should:

  • Validate their input and stop if it is wrong
  • Stop when they have finished their job*
  • Let you know whether they succeeded or failed

*The Halting Problem shows that there is no general way to prove they stop, so I won’t ask you to do that.

Writing functions is relatively easy.
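
As an illustration, here is a tiny “function” program as a bash script (count_lines is made up for the example):

#!/usr/bin/env bash
# count_lines: print the number of lines in a file
if [ "$#" -ne 1 ] || [ ! -r "$1" ]; then
    echo "usage: count_lines FILE" >&2
    exit 1        # validate input and stop if it is wrong
fi
wc -l < "$1"      # do the job, then stop; the exit status reports success or failure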

How services work

Programs that are services should:

  • Start when you tell them to start, even when things are not right
  • Keep running until you tell them to stop, even when bad things happen
  • Tell the user about problems via some communication mechanism

Writing services seems a little harder than writing functions.
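
And a matching “service” sketch, again in bash (do_work is a made-up placeholder): it starts whether or not things are right, keeps running when do_work fails, and reports problems rather than dying:

#!/usr/bin/env bash
do_work() {
    # placeholder for the real job; replace with something useful
    curl -fsS http://localhost:8080/ > /dev/null
}

running=true
trap 'running=false' TERM INT    # keep running until told to stop

while $running; do
    if ! do_work; then
        echo "$(date) problem doing work, will retry" >&2    # tell the user about problems
    fi
    sleep 5
done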

What about UIs?

I suggest that programs with UIs are just a special case of services. Do you agree?

What about let-it-crash?

I think that let-it-crash is a good way to build services, but when you build a service that way, I consider the whole system to be the real service: this means the code we are writing, plus the runtime. In this case, the runtime is responsible for keeping the service running (by restarting it), and telling the user about problems.

In effect, let-it-crash allows us to write programs that look like functions (which I claim is easier), and still have them behave like services, because the runtime does the extra work for us. Erlang seems like a good example of this.

What are the implications?

If you are writing a service, your program should start when asked, and keep running until it is asked to stop, even if things are bad.

For example:

  • a service that relies on a data source should keep running when that data source is unavailable, and emit errors saying that it is unable to work. It should start working when the data source becomes available – see the sketch after this list. (Again, if you implement this behaviour by using a runtime that allows you to write in a let-it-crash style, good for you.)
  • a service that relies on the existence of a directory should probably create that directory if it doesn’t exist.
  • a service that needs config might want to start up with sane defaults if the config is not supplied. Or maybe it should complain loudly and poll for the file to be created?
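
As a concrete sketch of the first two bullets (the paths here are made up):

#!/usr/bin/env bash
data_dir=/var/lib/myservice            # assumed location, for illustration
mkdir -p "$data_dir"                   # create the directory if it doesn't exist

while true; do
    if [ -r "$data_dir/input.txt" ]; then
        wc -l "$data_dir/input.txt"    # stand-in for the real work
    else
        echo "data source unavailable, unable to work" >&2
    fi
    sleep 5
done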

Why not stop when things are wrong?

  • Using this approach, the order in which services start doesn’t matter. The more services we have, the more painful it is to have to follow a fixed startup order.
  • It’s nice when things are predictable. We expect services to keep running under normal circumstances. Using this approach, our expectations are not wrong when things go wrong.

What are the down sides?

  • You must pay attention to the error reporting coming from running services – they may not be working.
  • Services will still stop, due to bugs, or at least due to hardware failures, so you still have to pay attention to whether services are running.

More: 12 Fractured Apps

Visual Lint 7.0.10.329 has been released

Products, the Universe and Everything from Products, the Universe and Everything

This is a recommended maintenance update for Visual Lint 7.0. The following changes are included:

  • The ${eclipse_home} and ${software_location} project variables are now defined when parsing S32 Design Studio for ARM projects.

  • Any project variables of the form ${VARNAME} which Visual Lint cannot expand will now be converted to the standard OS format (i.e. %VARNAME%) before any properties referencing them are written to a PC-lint or PC-lint Plus project indirect (project.lnt) file. This allows the values of Eclipse project variables to be defined as system environment variables and (for example) injected using the PC-lint/PC-lint Plus -setenv() directive if necessary.

  • Fixed a bug in the generation of PC-lint/PC-lint Plus command lines for projects containing per-file preprocessor definitions.

  • Fixed a bug in the generation of analysis command lines containing preprocessor definitions whose values contain quotes. The bug affected analysis command lines for PC-lint, PC-lint Plus and CppCheck.

  • Fixed a bug which could prevent VisualLintGui code editor views from reflecting changes to files which have been externally modified.

Download Visual Lint 7.0.10.329

Configuring MongoDB Java driver logging from Clojure using timbre

Timo Geusch from The Lone C++ Coder's Blog

I’ve mentioned in the past how you can configure the MongoDB Java driver output from Java. Most Clojure applications that use MongoDB use a database driver that wraps the official MongoDB Java driver. I personally use monger for a lot of my projects, but have also occasionally created my own wrapper. The methods described in this […]

The post Configuring MongoDB Java driver logging from Clojure using timbre appeared first on The Lone C++ Coder's Blog.

What impact might my evidence-based book have in 2021?

Derek Jones from The Shape of Code

What impact might the release of my evidence-based software engineering book have on software engineering in 2021?

Lots of people have seen the book. The release triggered a quarter of a million downloads, or rather, it getting linked to on Twitter and Hacker News resulted in this quantity of downloads. Looking at some of the comments on Hacker News, I suspect that many ‘readers’ did not progress much further than looking at the cover. Some have scanned through it expecting to find answers to a question that interests them, but all they found was disconnected results from a scattering of studies, i.e., the current state of the field.

The evidence that source code has a short and lonely existence is a gift to those seeking to save time/money by employing a quick and dirty approach to software development. Yes, there are some applications where a quick and dirty iterative approach is not a good idea (iterative as in, if we make enough money there will be a version 2), the software controlling aircraft landing wheels being an obvious example (if the wheels don’t deploy, telling the pilot to fly to another airport to see if they work there is not really an option).

There will be a few researchers who pick up an idea from something in the book, and run with it; I have had a couple of emails along this line, mostly from PhD students just starting out. It would be naive to think that lots of researchers will make any significant changes to their existing views on software engineering. Planck was correct to say that science advances one funeral at a time.

I’m hoping that the book will produce a significant improvement in the primitive statistical techniques currently used by many software researchers. At the moment some form of Wilcoxon test, invented in 1945, is the level of statistical sophistication wielded in most software engineering papers (that do any data analysis).

Software engineering research has the feeling of being a disjoint collection of results, and I’m hoping that a few people will be interested in starting to join the dots, i.e., making connections between findings from different studies. There are likely to be a limited number of major dot joinings, and so only a few dedicated people are needed to make it happen. Why hasn’t this happened yet? I think that many academics in computing departments are lifestyle researchers, moving from one project to the next, enjoying the lifestyle, with little interest in any research results once the grant money runs out (apart from trying to get others to cite them). Why do I think this? I have emailed many researchers information about the patterns I have found in the data they sent me, and a common response is almost complete disinterest (though some were interested) in any connections to other work.

What impact do you think ‘all’ the evidence presented will have?

I bought the first computer I ever wrote a program on

Timo Geusch from The Lone C++ Coder's Blog

I don’t usually do Happy New Year posts, but given how “well” 2020 went I thought it was appropriate to start 2021 with a whimsical post. This post is probably going to date me since it’s been a few years – OK, decades – since these were current. Well, it’s not the actual computer, but […]

The post I bought the first computer I ever wrote a program on appeared first on The Lone C++ Coder's Blog.

Smooth Operator – a.k.

a.k. from thus spake a.k.

Last time we took a look at linear regression which finds the linear function that minimises the differences between its results and values at a set of points that are presumed, possibly after applying some specified transformation, to be random deviations from a straight line or, in multiple dimensions, a flat plane. The purpose was to reveal the underlying relationship between the independent variable represented by the points and the dependent variable represented by the values at them.
This time we shall see how we can approximate the function that defines the relationship between them without actually revealing what it is.

Source code discovery, skipping over the legal complications

Derek Jones from The Shape of Code

The 2020 US elections introduced the issue of source code discovery, in legal cases, to a wider audience. People wanted to (and still do) check that the software used to register and count votes works as intended, but the companies who wrote the software wouldn’t make it available and the courts did not compel them to do so.

I was surprised to see that there is even a section on “Transfer of or access to source code” in the EU-UK trade and cooperation agreement, agreed on Christmas Eve.

I have many years of experience in discovering problems in the source code of programs I did not write. This experience derives from my time as a compiler implementer (e.g., a big customer is being held up by a serious issue in their application, and the compiler is being blamed), and as a static analysis tool vendor (e.g., managers want to know about what serious mistakes may exist in the code of their products). In all cases those involved wanted me there, I could talk to some of those involved in developing the code, and there were known problems with the code. In court cases, the defence does not want the prosecution looking at the code, and I assume that all conversations with the people who wrote the code go via the lawyers. I have intentionally stayed away from this kind of work, so my practical experience of working on legal discovery is zero.

The most common reason companies give for not wanting to make their source code available is that it contains trade-secrets (they can hardly say that it’s because they don’t want any mistakes in the code to be discovered).

What kind of trade-secrets might source code contain? Most code is very dull, and for some programs the only trade-secret is that if you put in the implementation effort, the obvious way of doing things works, i.e., the secret sauce promoted by the marketing department is all smoke and mirrors (I have had senior management, who have probably never seen the code, tell me about the wondrous properties of their code, which I had seen and knew that nothing special was present).

Comments may detail embarrassing facts, aka trade-secrets. Sometimes the code interfaces to a proprietary interface format that the company wants to keep secret, or uses some formula that required a lot of R&D (management gets very upset when told that ‘secret’ formula can be reverse engineered from the executable code).

Why does a legal team want access to source code?

If the purpose is to check specific functionality, then reading the source code is probably the fastest technique. For instance, checking whether a particular set of input values can cause a specific behavior to occur, or tracing through the logic to understand the circumstances under which a particular behavior occurs, or in software patent litigation checking what algorithms or formula are being used (this is where trade-secret claims appear to be valid).

If the purpose is a fishing expedition looking for possible incorrect behaviors, having the source code is probably not that useful. The quantity of source contained in modern applications can be huge, e.g., tens to hundreds of thousands of lines.

In ancient times (i.e., the 1970s and 1980s) programs were short (because most computers had tiny amounts of memory, compared to post-2000), and it was practical to read the source to understand a program. Customer demand for more features, and the fact that greater storage capacity removed the need to spend time reducing code size, meant that source code ballooned. The following plot shows the lines of code contained in the collected algorithms of the Transactions on Mathematical Software; the red line is a fitted regression model of the form LOC ≈ e^{0.0003 × days} (code+data):

Lines of code contained in the collected algorithms of the Transactions on Mathematical Software, over time.

How, by reading the source code, does anybody find mistakes in a 10+ thousand line program? If the program only occasionally misbehaves, finding a coding mistake by reading the source is likely to be very, very time-consuming, i.e., months. Work it out yourself: 10K lines of code is around 200 pages. How long would it take you to remember all the details and their interdependencies of a detailed 200-page technical discussion well enough to spot an inconsistency likely to cause a fault experience? And, yes, the source may very well be provided as a printout, or as a pdf on a protected memory stick.

From my limited reading of accounts of software discovery, the time available to study the code may be just days or maybe a week or two.

Reading large quantities of code to discover possible coding mistakes is an inefficient use of human time. Some form of analysis tool might help. Static analysis tools are one option; these cost money and might not be available for the language or dialect in which the source is written (there are some good tools for C because it has been around so long and is widely used).

Character assassination, or guilt by innuendo, is another approach; the code just cannot be trusted to behave in a reasonable manner (this approach is regularly used in the software business). Software metrics are deployed to give the impression that mistakes are likely to exist, without identifying any specific mistakes in the code, e.g., this metric is much higher than is considered reasonable. Where did these reasonable values come from? Someone, somewhere said something, the Moon aligned with Mars, and these values became accepted ‘wisdom’ (no, reality is not allowed to intrude; the case is made by arguing from authority). McCabe’s complexity metric is a favorite, and I have written about how use of this metric is essentially accounting fraud (I have had emails from several people who are very unhappy about me saying this). Halstead’s metrics are another favorite, and at least Halstead and others at the time did some empirical analysis (the results showed how ineffective the metrics were; the metrics don’t calculate the quantities claimed).

The software development process used to create software is another popular means of character assassination. People seem to take comfort in the idea that software was created using a defined process, and use of ad-hoc methods provides an easy target for ridicule. Some processes work because they include lots of testing, and doing lots of testing will of course improve reliability. I have seen development groups use a process and fail to produce reliable software, and I have seen ad-hoc methods produce reliable software.

From what I can tell, some expert witnesses are chosen for their ability to project an air of authority and having impressive sounding credentials, not for their hands-on ability to dissect code. In other words, just the kind of person needed for a legal strategy based on character assassination, or guilt by innuendo.

What is the most cost-effective way of finding reliability problems in software built from 10k+ lines of code? My money is on fuzz testing, a term that should send shivers down the spine of a defense team. Source code is not required, and the output is a list of real fault experiences. There are a few catches: 1) the software probably has to be run in the cloud (perhaps the only cost/time-effective way of running the many thousands of tests), and the defense is going to object over licensing issues (they don’t want the code fuzzed), 2) having lots of test harnesses interacting with a central database is likely to be problematic, 3) support for emulating embedded cpus, even commonly used ones like the Z80, is currently poor (this is a rapidly evolving area, so check current status).
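
For a flavour of what is involved, here is a minimal fuzzing sketch using AFL (it assumes the program has been built with AFL’s instrumenting compiler, reads its input from a file, and that seeds/ holds a few sample inputs – all assumptions for illustration):

afl-fuzz -i seeds/ -o findings/ -- ./some_program @@

Every crash or hang found is saved under findings/ as a concrete input that triggered a fault experience.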

Fuzzing can also be used to estimate the numbers of so-far undetected coding mistakes.