short – command line tool to truncate lines to fit in the terminal

Andy Balaam from Andy Balaam's Blog

Sometimes I run grep commands that search files with hugely long lines. If those lines match, they are printed out and spam my terminal with huge amounts of information that I probably don’t need.

I couldn’t find a tool that limits the line-length of its output, so I wrote a tiny one.

It’s called short.

You use it like this (my typical usage):

grep foo myfile.txt | short

Or specify the column width like this:

short -w 5 myfile.txt

It’s written in Rust. Feel free to add features, fix bugs and package it for your operating system/distribution!

But our users don’t want to change

AllanAdmin from Allan Kelly Associates

A good question came into my mailbox:

“Much of the writing I’ve seen assumes that software can be shipped directly into the hands of customers to create value (hence the “smaller packages, more often” approach). My experience has been that especially with new launches or major releases, there needs to be a threshold of minimum functionality that needs to be in place.”

Check your phone. Is it set to auto-update apps? Is your desktop OS set to auto-update? Or do you manually choose when to update?

Look at the update notes on phone apps from the likes of Uber, Slack, SkyScanner, the BBC and others. They say little more than “we update our apps regularly.”

Today people are used to technology auto-changing on them. They may not like it, but would they like a big change any more?

My guess is that most people don’t even notice those updates. When you batch up software releases users see lots of changes at once; when you release them as a regular stream of small updates, most go unnoticed.

Still, users will see some updates change things, and they will not like some of these. But how long do you want to hide these updates from your users?

The question that needs asking is: what is the cost of an update? The vast majority of updates are quick, easy, cheap and painless.

Of course people don’t like updates which introduce a new UI, a new payment model or which demand you uninstall an earlier app, but when updates are easy and bring benefits – even benefits you don’t see – they happily accept them.

And remember, the alternative to 100 small updates is one big update where people are more likely to see changes.

If your updates are generally good why hold them back? And if your updates are going in the wrong direction shouldn’t you change something? If you run scared of giving your users changes then something is wrong.

Nor is it just apps. Most people (in Europe at least) use telco-supplied handsets, and when the telco calls up and says “Would you like a new handset at no additional cost?” people usually say yes. That is how telcos keep their customers.

The question continues,

“there needs to be coordination across the company (e.g. training people from marketing, sales, channel partners, customer/ internal support, and so on). There is also the human element – the capacity to absorb these changes. As a user of tech, I’m not sure I could work (well) with a product where features were changing, new ones being added frequently (weekly or even monthly), etc.”

If every software update introduced a big change then these would be problems. But most updates don’t: most introduce teeny-tiny changes.

Of course sometimes things need to change. The companies which do this best invest time and energy in making these changes painless. For example, Google often offers a “try our new beta version” option for months before an update, and for months afterwards provides a “use the old interface” option.

The best companies invest in user experience design too. This can go a long way to removing the need for training.

Just because a new feature is released doesn’t mean people have to use it. For starters new changes can be released but disabled. Feature toggles are not only a way of managing source code branches but they also allow new features to be released silently and switched on when everyone is ready. This allows for releases to be de-risked without the customer seeing.

And when they are switched on they can be switched on for a few users at a time. Feedback can be gathered and improvements made before the next release.

That can be co-ordinated with training: make the feature toggle user-switchable, so everyone gets the new software and, as they complete the training, they can choose to switch it on.
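
As a rough sketch of how such a user-switchable toggle can work (a minimal example with hypothetical names, not any particular toggle library):

# Minimal feature-toggle sketch (hypothetical names, not a real library).
# New code ships "dark" and is switched on per user when they are ready.

class FeatureToggles:
    def __init__(self):
        self.defaults = {"new_checkout": False}  # released, but off by default
        self.per_user = {}                       # user_id -> {toggle: bool}

    def enable_for_user(self, user_id, name):
        self.per_user.setdefault(user_id, {})[name] = True

    def is_enabled(self, name, user_id=None):
        return self.per_user.get(user_id, {}).get(name, self.defaults.get(name, False))

toggles = FeatureToggles()

def checkout(user_id):
    if toggles.is_enabled("new_checkout", user_id):
        return "new checkout flow"
    return "old checkout flow"

print(checkout("alice"))                          # old flow: toggle still dark
toggles.enable_for_user("alice", "new_checkout")  # e.g. after completing training
print(checkout("alice"))                          # new flow

The same release goes to everyone; training and the switch-on date are decoupled from deployment.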

Now marketing… yes, marketeers do like the big bang release – “look at us, we have something shiny and new!”

You could leave lots of features switched off until your marketeers are ready to make a big bang. That also reduces the problem of marketers needing to know what will be ready when, so they know when to launch a campaign.

Or you could release updates without any fuss and market when you have the critical mass.

Or you could change your marketing approach: market a stream of constant improvements rather than occasional big changes.

Best of all, market the capabilities of your thing without mentioning features: market what the app or system allows you to do.

For years I’ve been hearing “business people” bemoan developers who “talk technical” but I see exactly the same thing with marketeers. Look at Sony televisions: what is the “picture processor X1”? And why should I care? I can’t remember when I last changed the contrast on my television, so the “Backlight master drive” (whatever that is) means nothing to me.

Or, look at Samsung mobile phones, 5G, 5G, 5G – what do I care about 5G? What does 5G allow me to do that I can’t with my current phone?

Drill down, look at the Samsung Galaxy lineup: CPU speed, CPU type, screen size, resolution, RAM, ROM – what do I care? How does any of that help me? – Stop throwing technical details at me!

Don’t market features, market solutions. Tell me what “job to be done” the product addresses, tell me how my life will be improved. Marketing a solution rather than features decouples marketing from the upgrade cycle.

So sure, people don’t like technology change – I’ll tell you a story in my next blog. But when technology change brings benefits, are they still resistant?

Now, with modern technology, with agile and continuous delivery, technology can change faster than business functions like training and marketing. We can either choose to slow technology down or we can change those functions to work differently – not necessarily faster but differently in a way that is compatible with agile technology change.

These kinds of tensions are common in businesses which move across to agile-style working. A lot of companies think agile applies to the “software engine room” and the rest of the business can carry on as before. Unfortunately they have released the Agile Virus – agile working has introduced a set of tensions into the organization which must either be embraced or killed.

Once again technology is disruptive.

Perhaps, if the marketing or training department are insisting on big-bang releases, maybe it is they who should be changing. Maybe, just maybe, they need to rethink their approach; maybe they could learn a thing or two about agile and work differently with technology teams.



Regular releases reduce risk, increase value

AllanAdmin from Allan Kelly Associates

“If you’re not embarrassed by the product when you launch, you’ve launched too late.” Reid Hoffman, founder of LinkedIn

Years ago I worked for a software company supplying Vodafone, Verizon, Nokia, etc. The last thing those companies wanted was to update the software on their engineers’ PCs every month, let alone every week!

I was remembering this episode when I was drafting what will be my next post (“But our users don’t want to change”) and thought it was worth saying something about how regular releases change the risk-reward equation.

When you only release occasionally there is a big incentive to “get it right” – to do everything that might be needed and to remove every defect whether you think those changes are needed or not. When you release occasionally second chances don’t happen for weeks or months. So you err on the side of caution and that caution costs.

Regular releases change that equation. Now second chances come around often, and additions and fixes are easy. You can err on the side of less, and that saves time and money.

The ability to deliver regularly – every two weeks as a baseline, every day for high performing teams – is more important than the actual deliveries. Releasable is more important than released. The actual question of whether to release or not is ultimately a question for business representatives to decide.

But being releasable on a very regular basis is an indicator of the team’s technical ability and the innate quality of the thing being built. Teams which are always asking for “more time” may well have a low-quality product (lots of bugs to fix) or something to hide.

The fact that a team can, and hopefully does, release (to live) massively reduces the risk involved. When software is only released at the end – and usually only tested before that end – risk is tail-loaded. Having releasable – and especially released – software reduces risk. The risk is spread across the work.

Actually releasing early further reduces risk because every step in the process is exercised. There are no hidden deployment problems.

Releasing early also offsets sunk costs and combats commitment escalation. Because the business stakeholders can at any time say “game over” and walk away with a working product, they are no longer held captive by the fear of sunk costs, suppliers and career-threatening failures.

It is also a nice side effect that releasing new functionality early – or just fixing bugs – increases the return on investment because benefits are delivered earlier and therefore start earning a return sooner.

Just because new functionality is completed, and even released, early does not mean users need to see it. Feature toggles allow features and changes to be hidden from users – or only enabled for specified users. Releasing changed software with no apparent change may look pointless, but it actually reduces risk because the changes are out there.

That also means testing is simplified. Rather than running tests against software with many changes, tests are run against software with few changes, which makes testing more efficient even if users don’t see it. And it removes the “we can’t roll back one fix” problem when one of 10 changes doesn’t pass.

Back to the Vodafone engineers who didn’t want their laptops updated: that was then, in the days of CD installs. Today the cloud changes that: there is only one install to do, so it isn’t such an inconvenience. They could have the updates but with disruptive changes hidden, while still receiving non-disruptive changes, e.g. bug fixes.

In a few cases regular deliveries may not be the right answer. The key thing though is to change the default answer from “we only deliver occasionally (or at the end)” to “we deliver regularly (unless otherwise requested).”




Impact of function size on number of reported faults

Derek Jones from The Shape of Code

Are longer functions more likely to contain more coding mistakes than shorter functions?

Well, yes. Longer functions contain more code, and the more code developers write the more mistakes they are likely to make.

But wait, the evidence shows that most reported faults occur in short functions.

This is true, at least in Java. It is also true that most of a Java program’s code appears in short methods (in C 50% of the code is contained in functions containing 114 or fewer lines, while in Java 50% of code is contained in methods containing 4 or fewer lines). It is to be expected that most reported faults appear in short functions. The plot below shows, left: the percentage of code contained in functions/methods containing a given number of lines, and right: the cumulative percentage of lines contained in functions/methods containing less than a given number of lines (code+data):

[Plot: left, the percentage of code contained in functions/methods of a given length; right, the cumulative percentage of lines contained in functions/methods below a given length.]

Does percentage of program source really explain all those reported faults in short methods/functions? Or are shorter functions more likely to contain more coding mistakes per line of code, than longer functions?

Reported faults per line of code is often referred to as defect density.

If defect density was independent of function length, the plot of reported faults against function length (in lines of code) would be horizontal; red line below. If every function contained the same number of reported faults, the plotted line would have the form of the blue line below.

[Plot: number of reported faults in C++ classes (not methods) containing a given number of lines, with the red and blue reference lines described above.]

Two things need to occur for a fault to be experienced. A mistake has to appear in the code, and the code has to be executed with the ‘right’ input values.

Code that is never executed will never result in any fault reports.

In a function containing 100 lines of executable source code where, say, 30 lines are rarely executed, those 30 lines will not contribute as much to the final total of reported faults as the other 70 lines.

How does the average percentage of executed LOC, in a function, vary with its length? I have been rummaging around looking for data to help answer this question, but so far without any luck (the llvm code coverage report is over all tests, rather than per test case). Pointers to such data very welcome.

Statement execution is controlled by if-statements, and around 17% of C source statements are if-statements. For functions containing between 1 and 10 executable statements, the percentage that don’t contain an if-statement is expected to be, respectively: 83, 69, 57, 47, 39, 33, 27, 23, 19, 16. Statements contained in shorter functions are more likely to be executed, providing more opportunities for any mistakes they contain to be triggered, generating a fault experience.
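
Those percentages are simply 0.83 raised to the power of the statement count (on the stated assumption that 17% of statements are if-statements, treated as independent); a few lines of Python reproduce them:

# Expected percentage of n-statement functions containing no if-statement,
# assuming each statement is independently an if-statement with p = 0.17.
for n in range(1, 11):
    print(n, round(0.83 ** n * 100))
# -> 83, 69, 57, 47, 39, 33, 27, 23, 19, 16, matching the figures above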

Longer functions contain more dependencies between the statements within their body than shorter functions (I don’t have any data showing how much more). Dependencies create opportunities for making mistakes (there is data showing that dependencies between files and classes are a source of mistakes).

The previous analysis makes a large assumption, that the mistake generating a fault experience is contained in one function. This is true for 70% of reported faults (in AspectJ).

What is the distribution of reported faults against function/method size? I don’t have this data (pointers to such data very welcome).

The plot below shows number of reported faults in C++ classes (not methods) containing a given number of lines (from a paper by Koru, Eman and Mathew; code+data):

[Plot: number of reported faults in C++ classes (not methods) containing a given number of lines.]

It’s tempting to think that those three curved lines are each classes containing the same number of methods.

What is the conclusion? There is one good reason why shorter functions should have more reported faults, and another good’ish reason why longer functions should have more reported faults. Perhaps length is not important. We need more data before an answer is possible.

If At First You Don’t Succeed – a.k.

a.k. from thus spake a.k.

Last time we took a first look at Bernoulli processes which are formed from a sequence of independent experiments, known as Bernoulli trials, each of which is governed by the Bernoulli distribution with a probability p of success. Since the outcome of one trial has no effect upon the next, such processes are memoryless meaning that the number of trials that we need to perform before getting a success is independent of how many we have already performed whilst waiting for one.
We have already seen that if waiting times for memoryless events with fixed average arrival rates are continuous then they must be exponentially distributed and in this post we shall be looking at the discrete analogue.
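
For reference, that discrete analogue is the geometric distribution. A standard statement of the result (added here, not a quote from the post): if each trial succeeds with probability p, the number of trials N up to and including the first success satisfies

\Pr(N = n) = (1 - p)^{n-1}\,p, \qquad n = 1, 2, 3, \dots

so that \Pr(N > n) = (1 - p)^n and \mathbb{E}[N] = 1/p, and memorylessness follows directly:

\Pr(N > m + n \mid N > m) = \frac{(1-p)^{m+n}}{(1-p)^{m}} = (1-p)^{n} = \Pr(N > n).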

Mutant algorithms

Fran from BuontempoConsulting

The word "algorithm" has caused a storm in recent news in the UK. Due to COVID-19, school children were not able to sit their exams. This left 16- and 18-year-olds waiting to see how they would be assessed, and had obvious implications for their academic or career futures. As you may know, the grades were awarded based on an "algorithm", which our Prime Minister later described as mutant. According to BBC News, he said "'Mutant algorithm' caused exam chaos." This begs the question: what does our PM think a mutant algorithm is?

The news in the UK has talked generally about the algorithm's inputs being coursework and teachers' estimated grades. These are "mutated" (or adjusted) by the algorithm to take into account a school's performance over the last three years. This means schools whose pupils sometimes struggle are more likely to be down-graded. The precise details are buried in a 319-page report. Feel free to read it all and report back. TL;DR: private and public schools tend to get higher grades than government-run schools, so poorer pupils tended to be down-graded and richer pupils did not. Some form of mutation, or even perversion, perhaps of justice, but not in the algorithm.

Now, some algorithms do use mutation. In fact genetic algorithms rely on mutation to seek out new solutions to problems. This is guided by a fitness function, to check the "mutant algorithm" is doing what's required. You can test such algorithms to see what they do, and keep an eye on them as they run to check they are heading the right way. You frequently spend a long time tuning parameters to get better results. This, on the face of it, has nothing whatsoever to do with the "mutant algorithm" our PM was talking about. 
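
To make that concrete, here is a toy example of mutation guided by a fitness function – a minimal sketch of the technique, nothing to do with the exam-grading algorithm:

import random

random.seed(1)  # fixed seed so the run is repeatable

TARGET = 42

def fitness(x):
    # Higher is better: how close the candidate is to the target.
    return -abs(x - TARGET)

def mutate(x):
    # Mutation: a small random tweak to an existing candidate solution.
    return x + random.choice([-3, -2, -1, 1, 2, 3])

candidate = random.randint(0, 100)
for _ in range(1000):
    mutant = mutate(candidate)
    if fitness(mutant) > fitness(candidate):  # keep only improvements
        candidate = mutant

print(candidate)  # the fitness function steers the mutations towards 42

The mutations are random, but the fitness function keeps the search pointed at the requirement – which is exactly what makes such algorithms testable.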

There has also been a hint of a slur on the programmers who wrote the algorithm, suggesting the idea was good and proper but the naughty programmers took it upon themselves to do something completely different that got out of hand, like a Marvel movie. Think Magneto (naughty programmers) versus Charles Francis Xavier (sensible people like our PM? Go figure). I am sick of programmer bashing and the general misunderstanding of algorithms.

Where a genetic algorithm uses mutation, or a Monte Carlo simulation uses random numbers as input, it is still possible to test that the algorithm is doing what you require. Programmers should never abdicate responsibility for what they have built. However, it is highly irresponsible of the news to allow propaganda and misrepresentations to flourish like this.

A while ago, the Imperial College model for COVID-19 was open-sourced. At the time many people raised bug reports against it. One rumour suggested that running it twice with the same seed for the pseudo-random numbers would produce different results. Now, that might be described as a "mutant algorithm", but we'd usually describe this as buggy code. I don't believe our PM has the technical know-how to spot buggy code, but I'm willing to help him out if he wants. I'm also willing to be interviewed by the BBC to explain some of these technical issues in more detail, if they are interested. Or I could find other technical people who could equally well help out.
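
The point about seeds is easy to demonstrate: run twice with the same seed, a correctly written simulation must produce identical results (illustrated below with Python's random module; the Imperial model is C++, but the principle is the same):

import random

def simulate(seed, trials=5):
    rng = random.Random(seed)  # independent generator with a fixed seed
    return [rng.random() for _ in range(trials)]

# Same seed, same "random" numbers, every time:
assert simulate(12345) == simulate(12345)
# If two runs with the same seed differ, that points to a bug (e.g.
# unseeded state or a race condition), not a "mutant algorithm".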

DM me.

https://twitter.com/fbuontempo

The aims of software engineering research

Derek Jones from The Shape of Code

Physics researchers aim to explain the workings of the universe (technically they build models whose behavior mimics that of the universe we can measure), biologists the workings of biological systems, and psychologists the working of the human mind.

What are researchers in software engineering aiming to do?

Talking to academics, the answer is that they aim to do research that can be published in a high impact journal.

What do those involved in commercial software development think software engineering researchers should be aiming to achieve?

Most of the commercial developers I have asked have never thought about the subject; hardly surprising, as they have plenty of other issues to think about.

Those who pay for software, rather than create it, want it to be cheaper and delivered faster.

Vendors are under some pressure to reduce costs and deliver sooner. But since its inception, software has been a sellers’ market, which means customer pressure does not have the impact it has in other industries.

The very large organizations who pay lots of money for software for their own use (e.g., the U.S. Department of Defense) recognise that research into software production may well save them lots of money, and at one time interesting things were being discovered, but then funding got rerouted to people with an aversion to actual software engineering, i.e., academics.

Cheaper and faster will always be of interest, and will start to become a hot topic in software engineering research once software starts becoming a buyers’ market.

Maintaining existing systems continues to grow towards dominating what nearly every software developer does. Dependencies on the rest of the software world (e.g., libraries and compilers) are starting to consume a large percentage of maintenance costs. Managers want to know which packages are likely to have a long and stable lifetime, and which are likely to be short-lived. An understanding of the evolution of software ecosystems is a pressing need. This is really cheaper and faster over the long term.

Cheaper and faster (short term for development, long term for maintenance) covers everything.

It’s tempting to list personnel selection, i.e., who is likely to make the best software developer. But why should the process of selecting software developers be any different from the processes used to select people to become doctors, lawyers and other professionals? I’m sure that those involved in the various professions would like a magic wand that points to the appropriate people (for some definition of appropriate); this magic wand is no more likely to exist for software developers than for any other profession.

What do you think the aims of software engineering research should be?

Digital Ocean’s PaaS Goes BETA

Paul Grenyer from Paul Grenyer

Make no mistake, I LOVE DigitalOcean! It’s simple to use and reasonably priced, especially compared to some of its better known competitors. They even respond quickly to queries on Twitter!

A couple of days ago I received an email from DigitalOcean inviting me to try out their new Beta 2 for App Platform (DigitalOcean’s PaaS product) which they described as follows:

“It handles common infrastructure-related tasks like provisioning and managing servers, databases, operating systems, application runtimes, and other dependencies. This means you can go from code to production in just minutes. We support Node.js, Python, Ruby, Go, PHP, and static sites right out of the box. If you have apps in other languages, simply create a Dockerfile and App Platform will do the rest. You can deploy apps directly from your Github repos and scale them (vertically and horizontally) if needed…”

I’m also a fan of Heroku for its ease of application deployment and, with the exception of a few AWS services, Heroku is the only platform other than DigitalOcean which I regularly use for deploying my projects. I use Heroku because I don’t have to provision a Droplet (a Linux server on DigitalOcean) to run a simple web application. Now that DigitalOcean has a similar service there’s a good chance I won’t need Heroku.

The DigitalOcean App Platform (which I’ll refer to as ‘Apps’ from here on) doesn’t yet have as many features as Heroku, but the corresponding features which Apps does support are much simpler to work with. There are basically three types of applications you can run: a static website (static), a web application (service) or a worker. A worker is basically a service without any routing and can be used for background tasks. As with Heroku, you can add databases as well.

Apps is currently in Beta, which means it’s an initial release of a potential future product. Customers who participate in DigitalOcean’s beta programs have the opportunity to test, validate, and provide feedback on future functionality, which helps DigitalOcean focus their efforts on what provides the most value to their customers.

  • Customer availability: Participation in beta releases is by invitation, and customers may choose not to participate. Beta invitations may be public or private. (How exciting, they picked me!).
  • Support: Beta releases are unsupported.
  • Production readiness: Beta releases may not be appropriate for production-level workloads.
  • Regions: Beta offerings may only be available in select regions.
  • Charges: Beta offerings may be charged or free. However, free use of beta offerings may be discontinued at any point in time.
  • Retirement: At the end of a beta release, DigitalOcean will determine whether to continue an offering through its lifecycle. We reserve the right to change the scope of or discontinue a Beta product or feature at any point in time without notice, as outlined in our terms of service.

I was (am!) very excited and decided to try a few things out on Apps. Below you’ll find what I tried, how I did it and some of what I learnt.

Static Asset App

The simplest type of ‘app’ which you can deploy to Apps is a static website, and it really is straightforward. Remember the days when you would develop a website by creating files in a directory and opening them locally in a browser? Well, once you’ve done that you can just push them to GitHub and they’re on the web!

1. Create a new GitHub repository - it can be private or public.

2. Add a single index.html file, e.g:

<!doctype html>
<html lang="en">
  <head>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous">
    <title>Hello, App Platform!</title>
  </head>
  <body>
    <h1>Hello, App Platform!</h1>
    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js" integrity="sha384-9/reFTGAW83EW2RDu2S0VKaIzap3H66lZH81PoYlFhbGU+6BZp6G7niu735Sk7lN" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" crossorigin="anonymous"></script>
  </body>
</html>

I’ve used the Bootstrap Hello, World! example as it brings in CSS and JavaScript, but any HTML example will do.

3. Log into DigitalOcean and select Apps from the left-hand menu.

4. If it’s your first App click ‘Launch App’. Otherwise click ‘Create App’.

5. Then click ‘GitHub’. If this is your first App, select ‘Configure your GitHub permissions’ and follow the instructions to link your GitHub account.

6. Back in Apps, select your new repository from the dropdown list and click Next.

On the next page you’ll be asked to choose a name for the app, select the branch to use from your repository and configure ‘Autodeploy on Push’.

7. Update the name of the app if you want to, leave the rest of the defaults as they are and click Next.

On the next page you have the option to add build and run commands. You don’t need any for a simple HTML app.

8. On the ‘Choose Build Configuration’ page click ‘Launch App’ to deploy the app and wait while Apps configures the app.

9. After receiving the ‘Deployed successfully!’ message, click the ‘Live App’ link to launch the app in a new tab.

That’s it! Your HTML page is now live on DigitalOcean’s App Platform. You can treat your repository just like the root directory of a website and add pages, images and JavaScript as you need. Just add them to the repository, commit, push and wait for them to be deployed.

Apps will generate a URL with a subdomain which is a combination of the name of your app and a unique sequence of characters, on the domain .ondigitalocean.app. You can configure a custom domain from the app’s settings tab and Apps provides a CNAME for redirection.


Node App

The next step up from a static asset app is a simple node webapp. Apps will install Node.js and your app’s dependencies for you and then fire up the app.

I was hoping to be able to deploy a very simple node webapp such as:

var http = require('http');

http.createServer(function (req, res) {
  res.write('Hello, App Platform!');
  res.end();
}).listen(process.env.PORT || '3000');

But this seemed to confuse Apps. It requires a package-lock.json file, which is generated by running npm install, to be checked into the repository, and the app didn’t deploy successfully until I added the express package.

1. Create a new directory for a simple node project and move into it.

2. Run npm init at the command line. Enter a name for the app and accept the other defaults.

3. Add a .gitignore file containing:

node_modules

so that dependencies are not checked into the repository.

4. Add the Express package:

npm install express --save

This will also generate the package-lock.json file which Apps needs; it must be checked into the repository with the other files.

5. Create an index.js file at the root of the project:

const express = require('express')
const app = express()
const port = process.env.PORT || '3000';

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})

Apps injects the port the webapp should run on as an environment variable called PORT. This is easily read by Node.js, as shown.

6. Add a start command to the scripts section in package.json:

"scripts": {

    "start": "node index.js",

    "test": "echo \"Error: no test specified\" && exit 1"

  },

7. Create a new GitHub repository, it can be private or public, and check your node project into it.

Then follow from step 3 of the Static Asset app above. Note at step 8 that Apps has automatically configured npm start as the run command, having detected a Node application, and that you can select the pricing plan on the same screen.

WARNING: Node applications are NOT free on DigitalOcean App Platform. Make sure you delete unwanted applications from the Settings tab.


Docker App

As well as Node.js, Apps appears to support Ruby, Go and Python natively, among others. What about .Net or other languages and platforms? For those, Apps supports Docker. Let’s see if we can get a simple dotnet core application running in Apps.

1. Create a new directory for a dotnet core project (e.g. dotnetcore) and move into it.

2. Create a dotnet core web application:

dotnet new webapp

3. Add a Dockerfile to the project:

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app

# Copy everything and build
COPY . ./
RUN dotnet publish dockercore.csproj -c Release -o out

# Build runtime image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE $PORT
ENTRYPOINT [ "dotnet", "dockercore.dll" ]

Apps injects the port the webapp should run on as an environment variable called PORT. Make sure the Docker image exposes it, as shown.

4. To make sure the application runs on the injected port, add the following UseUrls method call in program.cs:

public static IHostBuilder CreateHostBuilder(string[] args)
{
    var port = Environment.GetEnvironmentVariable("PORT");
    return Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>().UseUrls($"http://+:{port}");
        });
}

5. To prevent the application trying to redirect to a non-existent SSL port, remove or comment out the following line from startup.cs:

// app.UseHttpsRedirection();

6. Building a dotnet core application generates a lot of intermediate files that you don’t want to check in, so add an appropriate .gitignore file to the root of the project.

7. Create a new GitHub repository, it can be private or public, and check your dotnet core project into it.

Then follow from step 3 of the Static Asset app above. Note at step 8 that Apps has detected the Dockerfile and does not offer the option of build commands. You don’t need to specify any run commands, and you can select the pricing plan on the same screen.

WARNING: Docker-based applications are NOT free on DigitalOcean App Platform. Make sure you delete unwanted applications from the Settings tab.


Finally

There was one big disadvantage for me, and that’s the lack of a free tier for anything more advanced than a static web application. The cost isn’t extortionate (https://www.digitalocean.com/docs/app-platform/#plans-and-pricing), but it’s quite a bit for hobby programmers. If you want a database on top there’s a further cost, whereas on Heroku a database is free to begin with.

Apps currently only supports GitHub. You can use private repositories, which is great, but I’d like to see Bitbucket support as well. Heroku has its own git repositories as well as supporting external ones.

I’d also like there to be Terraform support for Apps, as there is for the rest of the DigitalOcean services. However, given that Apps is in Beta, I can see why it isn’t supported yet.

Overall, Apps was very easy to use, with a much shallower learning curve than Heroku. DigitalOcean, do you think we could have AWS-style Lambdas next, please?


User Story or Epic?

Allan Kelly from Allan Kelly Associates


I have two golden rules for user stories:

  1. The story should deliver business value: it should be meaningful to some customer, user, stakeholder. In some way the story should make their lives better.
  2. The story should be small enough to be delivered soon: some people say “within 2 days” but I’m generous – after all, I used to be a C++ programmer – so I’m happy as long as the story can be delivered within 2 weeks, i.e. the standard length of a sprint.

Now these two rules are in conflict: the need for value – and preferably more value! – pushes stories to be bigger, while the second rule demands they are small. That is just the way things are; there is no magic solution; that is the tension we must manage.

Those two rules also help us differentiate between stories and epics – and tasks if you are using them:

  • Epics honour rule #1: they are very valuable, but they are not small; by definition they are large, thus epics are unlikely to be delivered soon
  • Tasks honour rule #2: they are small, very small, say a day of work, but they do not deliver value to stakeholders – or if they do, it is not a big deal

[Diagram: epics, stories and tasks.]

Tasks are the things you do to build stories. And stories are the things you do to deliver epics. If you find you can complete a story without doing one of the planned tasks then great, and similarly not all stories need to be completed for an epic to be considered done.

In an ideal world you would not need tasks, because every story would be small enough to stand alone. Nor would you need epics, because stories would justify themselves. We can work towards that world but until then most teams in my experience use two of these three levels – stories and tasks, or epics and stories. A few even use all three levels.

Using more than three is an administration problem. There is always a fourth level above these, the project or product that is the reason they exist in the first place. But really, three levels is more than enough to model just about anything: really small, small, and damn big.

And every story is a potential epic until proven guilty.

More about epics, stories and tasks in Little Book of Requirements and User Stories and in my User Stories Masterclass next month (use Blog15 for 15% discount).


September micro-workshops – spaces limited

User Stories Masterclass, Agile Estimation & Forecasting, Maximising value delivered

Early bird discounts & free tickets for unemployed/furloughed

Book with code Blog15 for 15% discount or get more details

