BA role in agile discovery

Allan Kelly from Allan Kelly

Adrian Reed of BlackMetric ran a webinar panel discussion last night with myself, Angela Wick, Angie Doyle and Howard Podeswa about the Business Analyst role in Agile discovery. The discussion was great fun and Adrian has now made the recording available on YouTube.

This is not the first time I’ve appeared in one of Adrian’s webinars; at a minimum, I recommend keeping an eye on his upcoming subjects, as he regularly has great guests.


Subscribe to my blog newsletter and download Continuous Digital for free

OKRs and Agile

Allan Kelly from Allan Kelly

Book cover: Succeeding with OKRs in Agile

“How to combine OKRs and Agile” is a short piece by me, published on the GTM Hub blog. GTM Hub is a provider of OKR software.

As it happens, another OKR software provider, Just 3 Things, has been running a series on OKRs to which I and over 20 others contributed. The series takes a question-and-answer form. The latest instalment is Questions you should ask before starting your OKR journey; previous posts include:

OKR predictions for the next 5 years

Common OKR mistakes

Advice for OKR champions

Benefits of OKRs for companies and employees

Cultural and structural similarities at companies that create great OKRs

And of course, if you like these subjects you will enjoy “Succeeding with OKRs in Agile”.

Including data in Python packages

Austin Bingham from Good With Computers

Every time I need to include data in a Python package, I find myself going in circles checking existing projects, blog posts, and every other resource I can find to figure out the right way to do it. For something so seemingly straightforward, including data in a package always turns into a bit of a mess for me.

I had to make a package today that contained data, so - since it involved the standard running in circles for an hour - I thought I'd take the time to write down how I finally got it to work.

What is "package data"?

Broadly, package data is any file that you want to include with your Python package that isn't a Python source file. An example is a default TOML configuration file that you want to be able to produce for users. It's not Python source code, so it wouldn't normally be included in a Python package. But with just a small amount of work, you can include it in a package and make it available programmatically to users of your package (or to your package itself).

The short version

  1. Set include_package_data to True in your setup.py.
  2. Set package_dir in your setup.py.
  3. Include a MANIFEST.in that references your data files.

If that doesn't mean anything to you, read on.

The longer version

Suppose you have a project structure like this:

setup.py
source/
    project/
        __init__.py
        data/
            default_config.toml

It's a fairly standard structure, with the source directory containing the actual package files. The name of the package in this case is project.

What stands out is the data/default_config.toml file under project. This is our package data. That is, it's a non-Python file that we want to include in our package. Normally setuptools won't include it in the distributions you build (e.g., wheels), so we need to tell setuptools about it.

Create a MANIFEST.in

The first step is to create a new file, MANIFEST.in, as a sibling to setup.py. This file lets us specify the files that should be included in our distributions (beyond the files that are included by default). You can read more about it in the Python Packaging User Guide.

At its simplest (which works for me most of the time), it just needs to specify that your package should include anything and everything under some directory. In our case, we can include everything under source/project/data like this:

recursive-include source/project/data *

That's it. You can, of course, have much more complex include/exclude specs in MANIFEST.in, but this will get you started.
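
If you do need more control, a slightly fuller MANIFEST.in might look like the sketch below (hypothetical; only the recursive-include line is needed for the layout above):

include README.md
recursive-include source/project/data *
recursive-exclude source/project/data *.tmp
global-exclude *.py[cod]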

Update setup.py

You also need to modify setup.py to make sure it will let you include package data. Fortunately, in the normal case, this is very simple:

setup(
    ...
    include_package_data=True,
    package_dir={"": "source"},
    ...
)

Now when you install your package from source or generate wheels for distribution, everything in the data directory will be included in your package.
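
For reference, a minimal but complete setup.py along these lines might look like the following sketch (the name and version are placeholders):

from setuptools import find_packages, setup

setup(
    name="project",
    version="0.1.0",
    packages=find_packages(where="source"),
    package_dir={"": "source"},
    include_package_data=True,
)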

Accessing the package data

Including the package data is only half of the battle, though. You still need some way to access the files from your program. This is where pkg_resources comes in. pkg_resources lets you (among other things) get paths to the directories and files in your package data. I won't go into great detail here, but here's how you could get the path to the data directory at runtime:

pkg_resources.resource_filename("project", "data")

Or you could get a readable stream to the default_config.toml file:

stream = pkg_resources.resource_stream("project", "data/default_config.toml")
stream.read()
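
Putting those together, here's a small self-contained sketch (assuming the layout above) that reads the default config as text:

import pkg_resources

# Filesystem path to the packaged data directory
data_dir = pkg_resources.resource_filename("project", "data")

# resource_stream returns a binary stream, so decode after reading
with pkg_resources.resource_stream("project", "data/default_config.toml") as stream:
    default_config = stream.read().decode("utf-8")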

The pkg_resources docs are excellent, so I'll leave it at that.

What did I get wrong or leave out?

There are much more sophisticated ways to use pkg_resources and package data, but I find that what I've described above seems to work well for most of what I need. If I got things wrong or left out important details, let me know!

The software heritage of K&R C

Derek Jones from The Shape of Code

The mission statement of the Software Heritage is “… to collect, preserve, and share all software that is publicly available in source code form.”

What are the uses of the preserved source code that is collected? Lots of people visit preserved buildings, but very few people are interested in looking at source code.

One use-case is tracking the evolution of developer usage of various programming language constructs. It is possible to use GitHub to track the adoption of language features introduced after 2008, when the company was founded, e.g., new language constructs in Java. Over longer time-scales, the Software Heritage, which has source code going back to the 1960s, is the only option.

One question that keeps cropping up when discussing the C Standard, is whether K&R C continues to be used. Technically, K&R C is the language defined by the book that introduced C to the world. Over time, differences between K&R C and the C Standard have fallen away, as compilers cease supporting particular K&R ways of doing things (as an option or otherwise).

These days, saying that code uses K&R C is taken to mean that it contains functions defined using the K&R style (see sentence 1818), e.g., writing:

int f(a, b)
int a;
float b;
{
/* declarations and statements */
}

rather than:

int f(int a, float b)
{
/* declarations and statements */
}

As well as the syntactic differences, there are semantic differences between the two styles of function definition, but these are not relevant here.

How much longer should the C Standard continue to support the K&R style of function definition?

The WG14 committee prides itself on not breaking existing code, or at least not lots of it. How much code is out there, being actively maintained, and containing K&R function definitions?

Members of the committee agree that they rarely encounter this K&R usage, and it would be useful to have some idea of the decline in use over time (with the intent of removing support in some future revision of the standard).

One way to estimate the evolution in the use/non-use of K&R style function definitions is to analyse the C source created in each year since the late 1970s.
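
As a rough illustration (a crude heuristic sketch of my own, not anybody's actual methodology), a K&R-style definition can be spotted by the parameter declarations sitting between the closing parenthesis of the parameter list and the opening brace:

import re

# Crude heuristic: in a K&R definition, parameter declarations appear
# between the ')' ending the parameter list and the '{' opening the body.
KR_DEFINITION = re.compile(
    r"\)\s*\n"                                   # end of parameter list
    r"(?:[ \t]*(?:int|char|float|double|long|"
    r"short|unsigned|struct[ \t]+\w+)"
    r"[^;{}]*;[ \t]*\n)+"                        # parameter declarations
    r"[ \t]*\{"                                  # start of function body
)

def count_kr_definitions(c_source: str) -> int:
    return len(KR_DEFINITION.findall(c_source))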

The question is then: How representative is the Software Heritage C source, compared to all the C source currently being actively maintained?

The Software Heritage preserves publicly available source; this public source, plus the non-public, proprietary source, forms the totality of the C currently being maintained. Do the public and non-public C source have similar characteristics, or are there application domains which are poorly represented in the publicly available source?

Embedded systems is a very large and broad application domain that is poorly represented in the publicly available C source. Embedded source tends to be heavily tied to the hardware on which it runs, and vendors tend to be paranoid about releasing internal details about their products.

The various embedded systems domains (e.g., 8, 16, 32, 64-bit processor) tend to be a world unto themselves, and I would not be surprised to find out that there are enclaves of K&R usage (perhaps because there is no pressure to change, or because the available tools are ancient).

At the moment, the Software Heritage don’t offer code search functionality. But then, the next opportunity for major changes to the C Standard is probably 5 years away (the deadline for new proposals on the current revision has passed); plenty of time to get to a position where usage data can be obtained 🙂

Unborking the ISSO comments system and making it more resilient

Timo Geusch from The Lone C++ Coder's Blog

First, I apologise for not noticing that the comments had been broken for a while. This was entirely my fault and not the fault of ISSO, which I’m still super happy with as a self-hosted comments system. So in this post I’m going to describe what went wrong, and also how I made the system a little more resilient at the same time. First, what did go wrong? My web server uses FreeBSD as its OS, with a bunch of software installed via FreeBSD’s ports system.

Streaming to Twitch and PeerTube simultaneously using nginx on Oracle cloud

Andy Balaam from Andy Balaam's Blog

Simulcasting RTMP using NGINX

I want people to be able to watch my Matrix and Rust live coding streams using free software, so I’d like to simulcast to PeerTube as well as Twitch.

This is possible using NGINX and its RTMP module. It does involve building NGINX from source, but I actually found that reasonably easy to do.

Why Oracle cloud?

I would never recommend using Oracle for anything, but they do provide up to two virtual machines in their cloud for free, and the one I am using has been consistently available with very good connectivity, in a London data centre since I set it up several months ago.

So, we are making our lives more difficult by trying to do this on Oracle Linux, which is a derivative of RHEL.

Building NGINX and its RTMP module on Oracle Linux

I ran these commands on my Oracle cloud instance (running Oracle Linux):

sudo yum install git pcre-devel openssl-devel
mkdir nginx
cd nginx
wget http://nginx.org/download/nginx-1.21.4.tar.gz
git clone https://github.com/arut/nginx-rtmp-module.git
cd nginx-1.21.4
./configure --add-module=../nginx-rtmp-module/
make
sudo make install

After all this, NGINX was installed to /usr/local/nginx/.

Creating the NGINX config file for RTMP simulcasting

Next I edited the NGINX config file by typing:

sudo nano /usr/local/nginx/conf/nginx.conf

And pasted in this config at the bottom of the file:

rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        application live {
            live on;
            record off;
            push rtmp://live.twitch.tv/app/live_INSERT_TWITCH_STREAM_KEY;
            push rtmp://diode.zone:1935/live/INSERT_PEERTUBE_STREAM_KEY;
        }
    }
}

Notice that you will need to get your Twitch stream key from Twitch -> Creator Dashboard -> Settings -> Stream, then click Copy next to the Primary Stream Key.

To get a PeerTube stream key, you will need to go to your PeerTube page and click Publish, then Go Live, choose your channel and choose Go Live. Note that if you want the streams to record and be available later, you have to create a new stream key each time you start a stream, and change it in nginx.conf.

If you use a different PeerTube server (I use diode.zone) then you’ll need to change the server name in the config file above too.

Make sure your config file is saved with the right URLs in it.

Opening ports

To send RTMP traffic to my server, I needed to open the right port to the Oracle cloud instance. That involved creating an ingress rule, and adding a firewall rule.

Creating an ingress rule

In the web interface, I went to the menu in the top left, clicked Compute, then Instances.

I clicked on my instance’s name, then I clicked on the name of the subnet in the details (on the right).

I clicked on Default security list for…, then Add Ingress Rules.

I made an ingress rule with Source Type=CIDR, Source CIDR=0.0.0.0/0, IP Protocol=TCP, Source Port Range=(blank, meaning all), Destination Port Range=1935

Adding a firewall rule

Then I ssh’d into the machine and ran these commands to create a firewall rule allowing the traffic:

sudo firewall-cmd --zone=public --permanent --add-port=1935/tcp
sudo firewall-cmd --reload
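
Once the rules are in place, one quick sanity check is to attempt a plain TCP connection to the port from another machine. Here's a hypothetical Python helper of my own (not part of the original setup) that does just that:

import socket

def rtmp_port_open(host: str, port: int = 1935, timeout: float = 5.0) -> bool:
    # Attempt a plain TCP connection to the RTMP port
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace 1.1.1.1 with your instance's public IP address
print(rtmp_port_open("1.1.1.1"))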

Stop and Start NGINX

After creating the config file and opening the right port, I needed to start NGINX.

Every time I change the config file, I need to restart it.

If it’s already running, I stop it with:

sudo /usr/local/nginx/sbin/nginx -s stop

and then I start it up again with

sudo /usr/local/nginx/sbin/nginx

I can check whether it’s happy by looking at the log files, for example to see any errors:

less /usr/local/nginx/logs/error.log

Starting the stream

Now I go into OBS, open File -> Settings -> Stream, choose Custom as the type, and set the Server to rtmp://1.1.1.1/live. (But instead of 1.1.1.1 I put the public IP address of my instance, which I found by clicking the name of the instance in the Oracle cloud management console.)

Open source: the goody bag for software infrastructure

Derek Jones from The Shape of Code

For 70 years there has been a continuing discovery of larger new ecosystems for new software to grow into, as well as many small ones. Before Open source became widely available, the software infrastructure (e.g., compilers, editors and libraries of algorithms) for these ecosystems had to be written by the pioneer developers who happened to find themselves in an unoccupied land.

Ecosystems may be hardware platforms (e.g., mainframes, minicomputers, microcomputers and mobile phones), software platforms (e.g., Microsoft Windows and Android), or application domains (e.g., accounting and astronomy).

There are always a few developers building some infrastructure project out of interest, e.g., writing a compiler for their own or another language, or implementing an editor that suits them. When these projects are released, they have to compete against the established inhabitants of an ecosystem, along with other newly released software clamouring for attention.

New ecosystems have limited established software infrastructure, and may not yet have attracted many developers to work within them. In such ‘virgin’ ecosystems, something new and different faces less competition, giving it a higher probability of thriving and becoming established.

Building from scratch is time-consuming and expensive. Adapting existing software systems speeds things up and reduces costs; adaptation also has the benefit of significantly reducing the startup costs when recruiting developers, i.e., making it possible for experienced people to use the skills acquired while working in other ecosystems. By its general availability, Open source creates competition capable of reducing the likelihood that some newly created infrastructure software will become established in a ‘virgin’ ecosystem.

Open source not only reduces startup costs for those needing infrastructure for a new ecosystem, it also reduces ongoing maintenance costs (by spreading them over multiple ecosystems), and developer costs (by reducing the need to learn something different that was created by developers who built from scratch).

Some people will complain that Open source is reducing diversity (where diversity is viewed as unconditionally providing benefits). I would claim that reducing diversity in this case is a benefit. Inventing new ways of doing things based on the whims of those doing the invention is a vanity project. I have nothing against people investing their own resources on their own vanity projects, but let’s not pretend that the diversity generated by such projects is likely to provide benefits to others.

By providing the components needed to plug together a functioning infrastructure, Open source reduces the cost of ecosystem ‘invasion’ by software. The resources which might have been invested building infrastructure components can be directed to building higher level functionality.

A Day At The Races – baron m.

baron m. from thus spake a.k.

Halloo Sir R-----! Pray come join me and partake of a glass of this rather excellent potation!

Might I again tempt you with a wager?

Splendid!

I have in mind a game that always reminds me of my victory upon the turf at Newmarket. Ordinarily I would not participate in a public sporting event such as this since I am at heart a modest man and derive no pleasure from demonstrating my substantial superiority over my fellows.

New game: Tron – frantic multiplayer retro action

Andy Balaam from Andy Balaam's Blog

My newest game is out now on Smolpxl Games – Tron:

Pixellated lines fight each other to stay alive

Play at smolpxl.gitlab.io/tron.

It’s a frantic multiplayer retro pixellated thingy playable in your browser. Try to stay alive longer than everyone else!

This version allows many players (up to 16 if you can manage it), and is quite pure in its implementation.

There are bots to play against, and you can gather your friends around a keyboard to play together.

Part of the motivation for writing this game was to test my new smolpxl-remote remote-play system, but this is not enabled yet, so watch this space…

I love playing games with other people – preferably at least 3 other people. In theory you could have 8 players around a keyboard playing this – send me a picture if you try!

One feature I worked on in the Smolpxl library for this game: saving configuration to local storage (and asking permission to do so). I ended up with a very ugly hack to do this, so a bit more work is needed before I merge it into the library.