Running a virtualenv with a custom-built Python

Andy Balaam from Andy Balaam's Blog

For my attempt to improve the asyncio.as_completed Python standard library function I needed to build a local copy of cpython (the Python interpreter).
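
For reference, building cpython locally goes roughly like this (a sketch only; configure options vary by platform, and I am using the same checkout path as the recipe below):

$ git clone https://github.com/python/cpython.git ~/code/public/cpython
$ cd ~/code/public/cpython
$ ./configure
$ make -j4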

To test it, I needed the aiohttp module, which is not part of the standard library, so the easiest way to get it was using virtualenv.

Here is the recipe I used to get a virtualenv and install packages using pip with a custom-built Python:

$ ~/code/public/cpython/python -m venv env
$ . env/bin/activate
(env) $ pip install aiohttp
(env) $ python mycode.py
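
To double-check that the virtualenv is wrapping the custom build rather than the system Python, a quick check like this should print the version string of the locally built interpreter:

(env) $ python -c "import sys; print(sys.version)"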

Adding a concurrency limit to Python’s asyncio.as_completed

Andy Balaam from Andy Balaam's Blog

Series: asyncio basics, large numbers in parallel, parallel HTTP requests, adding to stdlib

In the previous post I demonstrated how the limited_as_completed method allows us to run a very large number of tasks using concurrency, but limiting the number of concurrent tasks to a sensible limit to ensure we don’t exhaust resources like memory or operating system file handles.

I think this could be a useful addition to the Python standard library, so I have been working on a modification to the current asyncio.as_completed method. My work so far is here: limited-as_completed.

I ran similar tests to the ones I ran for the last blog post with this code to validate that the modified standard library version achieves the same goals as before.

I used an identical copy of timed from the previous post, and updated versions of the other files because I was using a much newer version of aiohttp along with the custom-built Python.

server looked like:

#!/usr/bin/env python3

from aiohttp import web
import asyncio
import random

async def handle(request):
    await asyncio.sleep(random.randint(0, 3))
    return web.Response(text="Hello, World!")

app = web.Application()
app.router.add_get('/{name}', handle)

web.run_app(app)

For client-async-sem I needed to add a custom TCPConnector to work around a new limit on the number of concurrent connections that was added to aiohttp in version 2.0. I also needed to move the ClientSession usage inside a coroutine to avoid a warning:

#!/usr/bin/env python3

from aiohttp import ClientSession, TCPConnector
import asyncio
import sys

limit = 1000

async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

async def bound_fetch(sem, url, session):
    # Getter function with semaphore.
    async with sem:
        await fetch(url, session)

async def run(r):
    with ClientSession(connector=TCPConnector(limit=limit)) as session:
        url = "http://localhost:8080/{}"
        tasks = []
        # create instance of Semaphore
        sem = asyncio.Semaphore(limit)
        for i in range(r):
            # pass Semaphore and session to every GET request
            task = asyncio.ensure_future(
                bound_fetch(sem, url.format(i), session))
            tasks.append(task)
        responses = asyncio.gather(*tasks)
        await responses

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.ensure_future(run(int(sys.argv[1]))))

My new code that uses my proposed extension to as_completed looked like:

#!/usr/bin/env python3

from aiohttp import ClientSession, TCPConnector
import asyncio
import sys

async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

limit = 1000

async def print_when_done():
    with ClientSession(connector=TCPConnector(limit=limit)) as session:
        tasks = (fetch(url.format(i), session) for i in range(r))
        for res in asyncio.as_completed(tasks, limit=limit):
            await res

r = int(sys.argv[1])
url = "http://localhost:8080/{}"
loop = asyncio.get_event_loop()
loop.run_until_complete(print_when_done())
loop.close()

and with these, we get similar behaviour to the previous post:

$ ./timed ./client-async-sem 10000
Memory usage: 73640KB	Time: 19.18 seconds
$ ./timed ./client-async-stdlib 10000
Memory usage: 49332KB	Time: 18.97 seconds

So the implementation I plan to submit to the Python standard library appears to work well. In fact, I think it is better than the one I presented in the previous post, because it uses done callbacks (add_done_callback) to notice when futures have completed, which reduces the busy-looping we were doing to check for and yield finished tasks.
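
As a rough illustration (a sketch of the idea only, not the actual patch; the helper names callback_limited_as_completed, schedule_next and wait_one are invented here), a done-callback version of the previous post's function might look like this:

import asyncio

def callback_limited_as_completed(coros, limit):
    # Sketch: each completed future schedules the next coroutine
    # via add_done_callback instead of a polling loop.
    coros = iter(coros)
    done = asyncio.Queue()   # completed futures, in completion order
    outstanding = 0          # scheduled but not yet yielded

    def schedule_next():
        nonlocal outstanding
        try:
            fut = asyncio.ensure_future(next(coros))
        except StopIteration:
            return
        outstanding += 1
        fut.add_done_callback(on_done)

    def on_done(fut):
        schedule_next()       # top up the pool first
        done.put_nowait(fut)

    async def wait_one():
        nonlocal outstanding
        fut = await done.get()
        outstanding -= 1
        return fut.result()

    for _ in range(limit):
        schedule_next()
    while outstanding > 0:
        yield wait_one()

It is consumed the same way as before: for res in callback_limited_as_completed(coros, limit): await res.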

The Python issue is bpo-30782 and the pull request is #2424.

Note: at first glance, it looks like the aiohttp.ClientSession’s limit on the number of connections (introduced in version 1.0 and then updated in version 2.0) gives us what we want without any of this extra code, but in fact it only limits the number of connections, not the number of futures we are creating, so it has the same problem of unbounded memory use as the semaphore-based implementation.

Making 100 million requests with Python aiohttp

Andy Balaam from Andy Balaam's Blog

Series: asyncio basics, large numbers in parallel, parallel HTTP requests, adding to stdlib

I’ve been working on how to make a very large number of HTTP requests using Python’s asyncio and aiohttp.

Paweł Miech’s post Making 1 million requests with python-aiohttp taught me how to think about this, and got us a long way, with 1 million requests running in a reasonable time, but I need to go further.

Paweł’s approach limits the number of requests that are in progress, but it uses an unbounded amount of memory to hold the futures that it wants to execute.

We can avoid using unbounded memory by using the limited_as_completed function I outlined in my previous post.

Setup

Server

We have a server program “server”:

(Note it differs from Paweł’s version because I am using an older version of aiohttp which has fewer convenient features.)

#!/usr/bin/env python3.5

from aiohttp import web
import asyncio
import random

async def handle(request):
    await asyncio.sleep(random.randint(0, 3))
    return web.Response(text="Hello, World!")

async def init():
    app = web.Application()
    app.router.add_route('GET', '/{name}', handle)
    return await loop.create_server(
        app.make_handler(), '127.0.0.1', 8080)

loop = asyncio.get_event_loop()
loop.run_until_complete(init())
loop.run_forever()

This just responds “Hello, World!” to every request it receives, but after an artificial delay of 0-3 seconds.

Synchronous client

As a baseline, we have a synchronous client “client-sync”:

#!/usr/bin/env python3.5

import requests
import sys

url = "http://localhost:8080/{}"
for i in range(int(sys.argv[1])):
    requests.get(url.format(i)).text

This waits for each request to complete before making the next one. Like the other clients below, it takes the number of requests to make as a command-line argument.

Async client using semaphores

Copied mostly verbatim from Making 1 million requests with python-aiohttp we have an async client “client-async-sem” that uses a semaphore to restrict the number of requests that are in progress at any time to 1000:

#!/usr/bin/env python3.5

from aiohttp import ClientSession
import asyncio
import sys

limit = 1000

async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

async def bound_fetch(sem, url, session):
    # Getter function with semaphore.
    async with sem:
        await fetch(url, session)

async def run(session, r):
    url = "http://localhost:8080/{}"
    tasks = []
    # create instance of Semaphore
    sem = asyncio.Semaphore(limit)
    for i in range(r):
        # pass Semaphore and session to every GET request
        task = asyncio.ensure_future(bound_fetch(sem, url.format(i), session))
        tasks.append(task)
    responses = asyncio.gather(*tasks)
    await responses

loop = asyncio.get_event_loop()
with ClientSession() as session:
    loop.run_until_complete(asyncio.ensure_future(run(session, int(sys.argv[1]))))

Async client using limited_as_completed

The new client I am presenting here uses limited_as_completed from the previous post. This means it can take a generator that provides the coroutines to wait for, creating futures for them only as they are needed instead of making them all at the beginning.

It is called “client-async-as-completed”:

#!/usr/bin/env python3.5

from aiohttp import ClientSession
import asyncio
from itertools import islice
import sys

def limited_as_completed(coros, limit):
    futures = [
        asyncio.ensure_future(c)
        for c in islice(coros, 0, limit)
    ]
    async def first_to_finish():
        while True:
            await asyncio.sleep(0)
            for f in futures:
                if f.done():
                    futures.remove(f)
                    try:
                        newf = next(coros)
                        futures.append(
                            asyncio.ensure_future(newf))
                    except StopIteration:
                        pass
                    return f.result()
    while len(futures) > 0:
        yield first_to_finish()

async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

limit = 1000

async def print_when_done(tasks):
    for res in limited_as_completed(tasks, limit):
        await res

r = int(sys.argv[1])
url = "http://localhost:8080/{}"
loop = asyncio.get_event_loop()
with ClientSession() as session:
    coros = (fetch(url.format(i), session) for i in range(r))
    loop.run_until_complete(print_when_done(coros))
loop.close()

Again, this limits the number of requests to 1000.

Test setup

Finally, we have a test runner script called “timed”:

#!/usr/bin/env bash

./server &
sleep 1 # Wait for server to start

/usr/bin/time --format "Memory usage: %MKB\tTime: %e seconds" "$@"

# %e Elapsed real (wall clock) time used by the process, in seconds.
# %M Maximum resident set size of the process in Kilobytes.

kill %1

This runs each process, ensuring the server is restarted each time it runs, and prints out how long it took to run, and how much memory it used.

Results

When making only 10 requests, the async clients worked faster because they launched all the requests simultaneously and only had to wait for the longest one (3 seconds). The memory usage of all three clients was fine:

$ ./timed ./client-sync 10
Memory usage: 20548KB	Time: 15.16 seconds
$ ./timed ./client-async-sem 10
Memory usage: 24996KB	Time: 3.13 seconds
$ ./timed ./client-async-as-completed 10
Memory usage: 23176KB	Time: 3.13 seconds

When making 100 requests, the synchronous client was very slow, but all three clients worked eventually:

$ ./timed ./client-sync 100
Memory usage: 20528KB	Time: 156.63 seconds
$ ./timed ./client-async-sem 100
Memory usage: 24980KB	Time: 3.21 seconds
$ ./timed ./client-async-as-completed 100
Memory usage: 24904KB	Time: 3.21 seconds

At this point let’s agree that life is too short to wait for the synchronous client.

When making 10000 requests, both async clients worked quite quickly, and both had increased memory usage, but the semaphore-based one used almost twice as much memory as the limited_as_completed version:

$ ./timed ./client-async-sem 10000
Memory usage: 77912KB	Time: 18.10 seconds
$ ./timed ./client-async-as-completed 10000
Memory usage: 46780KB	Time: 17.86 seconds

For 1 million requests, the semaphore-based client took 25 minutes on my (32GB RAM) machine. It only used about 10% of my CPU, and it used a lot of memory (over 3GB):

$ ./timed ./client-async-sem 1000000
Memory usage: 3815076KB	Time: 1544.04 seconds

Note: Paweł’s version only took 9 minutes on his laptop and used all his CPU, so I wonder whether I have made a mistake somewhere, or whether my version of Python (3.5.2) is not as good as a later one.

The limited_as_completed version ran in a similar amount of time but used 100% of my CPU, and used a much smaller amount of memory (162MB):

$ ./timed ./client-async-as-completed 1000000
Memory usage: 162168KB	Time: 1505.75 seconds

Now let’s try 100 million requests. The semaphore-based version lasted 10 hours before it was killed by Linux’s OOM Killer, but it didn’t manage to make any requests in this time, because it creates all its futures before it starts making requests:

$ ./timed ./client-async-sem 100000000
Command terminated by signal 9

I left the limited_as_completed version over the weekend and it managed to succeed eventually:

$ ./timed ./client-async-as-completed 100000000
Memory usage: 294304KB	Time: 150213.15 seconds

So its memory usage was still very bounded, and it managed to do about 665 requests/second (100,000,000 requests in roughly 150,000 seconds) over an extended period, which is almost identical to the throughput of the previous cases.

Conclusion

Making a million requests is usually enough, but when we really need to do a lot of work while keeping our memory usage bounded, it looks like an approach like limited_as_completed is a good way to go. I also think it’s slightly easier to understand.

In the next post I describe my attempt to get something like this added to the Python standard library.

Python – printing UTC dates in ISO8601 format with time zone

Andy Balaam from Andy Balaam's Blog

By default, when you make a UTC date from a Unix timestamp in Python and print it in ISO format, it has no time zone:

$ python3
>>> from datetime import datetime
>>> datetime.utcfromtimestamp(1496998804).isoformat()
'2017-06-09T09:00:04'

Whenever you talk about a datetime, I think you should always include a time zone, so I find this problematic.

The solution is to mention the timezone explicitly when you create the datetime:

$ python3
>>> from datetime import datetime, timezone
>>> datetime.fromtimestamp(1496998804, tz=timezone.utc).isoformat()
'2017-06-09T09:00:04+00:00'

Note that including the timezone explicitly works the same way when creating a datetime in other ways:

$ python3
>>> from datetime import datetime, timezone
>>> datetime(2017, 6, 9).isoformat()
'2017-06-09T00:00:00'
>>> datetime(2017, 6, 9, tzinfo=timezone.utc).isoformat()
'2017-06-09T00:00:00+00:00'
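
Similarly, if you already have a naive datetime that you know represents UTC, one way to attach the zone after the fact (a small sketch of the same idea) is replace():

$ python3
>>> from datetime import datetime, timezone
>>> datetime.utcfromtimestamp(1496998804).replace(tzinfo=timezone.utc).isoformat()
'2017-06-09T09:00:04+00:00'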

Python 3 – large numbers of tasks with limited concurrency

Andy Balaam from Andy Balaam's Blog

Series: asyncio basics, large numbers in parallel, parallel HTTP requests, adding to stdlib

I am interested in running large numbers of tasks in parallel, so I need something like asyncio.as_completed, but taking an iterable instead of a list, and with a limited number of tasks running concurrently. First, let’s try to build something pretty much equivalent to asyncio.as_completed. Here is my attempt, but I’d welcome feedback from readers who know better:

# Note this is not a coroutine - it returns
# an iterator - but it crucially depends on
# work being done inside the coroutines it
# yields - those coroutines empty out the
# list of futures it holds, and it will not
# end until that list is empty.
def my_as_completed(coros):

    # Start all the tasks
    futures = [asyncio.ensure_future(c) for c in coros]

    # A coroutine that waits for one of the
    # futures to finish and then returns
    # its result.
    async def first_to_finish():

        # Wait forever - we could add a
        # timeout here instead.
        while True:

            # Give up control to the scheduler
            # - otherwise we will spin here
            # forever!
            await asyncio.sleep(0)

            # Return anything that has finished
            for f in futures:
                if f.done():
                    futures.remove(f)
                    return f.result()

    # Keep yielding a waiting coroutine
    # until all the futures have finished.
    while len(futures) > 0:
        yield first_to_finish()

The above can be substituted for asyncio.as_completed in the code that uses it in the first article, and it seems to work. It also makes a reasonable amount of sense to me, so it may be correct, but I’d welcome comments and corrections.

my_as_completed above accepts an iterable and returns a generator producing results, but inside it starts all tasks concurrently, and stores all the futures in a list. To handle bigger lists we will need to do better, by limiting the number of running tasks to a sensible number.

Let’s start with a test program:

import asyncio
async def mycoro(number):
    print("Starting %d" % number)
    await asyncio.sleep(1.0 / number)
    print("Finishing %d" % number)
    return str(number)

async def print_when_done(tasks):
    for res in asyncio.as_completed(tasks):
        print("Result %s" % await res)

coros = [mycoro(i) for i in range(1, 101)]

loop = asyncio.get_event_loop()
loop.run_until_complete(print_when_done(coros))
loop.close()

This uses asyncio.as_completed to run 100 tasks and, because I adjusted the asyncio.sleep command to wait longer for earlier tasks, it prints something like this:

$ time python3 python-async.py
Starting 47
Starting 93
Starting 48
...
Finishing 93
Finishing 94
Finishing 95
...
Result 93
Result 94
Result 95
...
Finishing 46
Finishing 45
Finishing 42
...
Finishing 2
Result 2
Finishing 1
Result 1

real    0m1.590s
user    0m0.600s
sys 0m0.072s

So all 100 tasks were completed in 1.5 seconds, indicating that they really were run in parallel, but all 100 were allowed to run at the same time, with no limit.

We can adjust the test program to run using our customised my_as_completed function, and pass in an iterable of coroutines instead of a list by changing the last part of the program to look like this:

async def print_when_done(tasks):
    for res in my_as_completed(tasks):
        print("Result %s" % await res)
coros = (mycoro(i) for i in range(1, 101))
loop = asyncio.get_event_loop()
loop.run_until_complete(print_when_done(coros))
loop.close()

But we get similar output to last time, with all tasks running concurrently.

To limit the number of concurrent tasks, we limit the size of the futures list, and add more as needed:

from itertools import islice
def limited_as_completed(coros, limit):
    # Start only the first `limit` tasks
    futures = [
        asyncio.ensure_future(c)
        for c in islice(coros, 0, limit)
    ]
    async def first_to_finish():
        while True:
            # Give up control to the scheduler
            await asyncio.sleep(0)
            for f in futures:
                if f.done():
                    futures.remove(f)
                    # Top up the pool with the next
                    # coroutine, if there is one
                    try:
                        newf = next(coros)
                        futures.append(
                            asyncio.ensure_future(newf))
                    except StopIteration:
                        pass
                    return f.result()
    # Keep yielding until the pool of
    # futures has emptied out
    while len(futures) > 0:
        yield first_to_finish()

We start limit tasks at first, and whenever one ends, we ask for the next coroutine in coros and set it running. This keeps the number of running tasks at or below limit until we start running out of input coroutines (when next raises StopIteration and we don’t add anything to futures), then futures starts emptying until we eventually stop yielding coroutine objects.

I thought this function might be useful to others, so I started a little repo over here and added it: asyncioplus/limited_as_completed.py. Please provide merge requests and log issues to improve it – maybe it should be part of standard Python?

When we run the same example program, but call limited_as_completed instead of the other versions:

async def print_when_done(tasks):
    for res in limited_as_completed(tasks, 10):
        print("Result %s" % await res)
coros = (mycoro(i) for i in range(1, 101))
loop = asyncio.get_event_loop()
loop.run_until_complete(print_when_done(coros))
loop.close()

We see output like this:

$ time python3 python-async.py
Starting 1
Starting 2
...
Starting 9
Starting 10
Finishing 10
Result 10
Starting 11
...
Finishing 100
Result 100
Finishing 1
Result 1

real	0m1.535s
user	0m1.436s
sys	0m0.084s

So we can see that the tasks are still running concurrently, but this time the number of concurrent tasks is limited to 10.

See also

To achieve a similar result using semaphores, see Python asyncio.semaphore in async-await function and Making 1 million requests with python-aiohttp.

It feels like limited_as_completed is more re-usable as an approach but I’d love to hear others’ thoughts on this. E.g. could/should I use a semaphore to implement limited_as_completed instead of manually holding a queue?
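
To make that comparison concrete, here is a minimal sketch of the semaphore approach (my naming; the linked posts have fuller versions). Each coroutine waits for a semaphore slot before doing its work, so at most limit of them run at once:

import asyncio

async def run_limited_with_semaphore(coros, limit):
    # At most `limit` coroutines hold the semaphore at a time,
    # but note that gather() creates every task object up front.
    sem = asyncio.Semaphore(limit)

    async def bounded(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(bounded(c) for c in coros))

So a semaphore bounds the number of running tasks easily, but unlike limited_as_completed it does not bound the number of task objects created up front, and it only hands back results at the end rather than as they complete.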

A Magical new World — Thoughts of a first time ACCU attendee.

Samathy from Stories by Samathy on Medium

I went to my first ACCU Conference last week. It was great.

ACCU Conference

I’d heard about ACCU from Russel Winder several months ago. He recommended I check out the conference (for which he’s on the programme board) since I’m a fan and user of the C and C++ languages.

I arrived in Bristol on Tuesday excited for what the week held.

This post contains a section about the talks and a section about my experience at the bottom.

Be aware that some of the photos might not look as good on here as they should; I think Medium has compressed them a bit. All my photos should be online shortly.

The Talks

Russ Miles opens ACCU 2017

We started the conference proper with a fantastically explosive keynote delivered by Russ Miles who jumped on stage to deliver a programming parody of Highway to Hell accompanied by his own guitar playing.
His keynote was all about modern development and how most of a programmer’s tools currently just shout information at the programmer, rather than actually helping.

Later on the Wednesday I headed into a talk from Kevlin Henney that totally re-jigged how I think about concurrency. Thinking outside the Synchronisation Quadrant was wonderfully entertaining, with Kevlin excitedly bouncing across the floor.
A fantastically engaging speaker.

Lightning talks on Wednesday

Wednesday’s talks continued with several other good talks and a number of great lightning talks too.
The day finished with the welcome reception, where delegates gathered in the hotel for drinks, food and conversation.

It was here that I really got the chance to socialise with a good few people, including Anna-Jayne and Beth, who I’d been excited about meeting since I found out they were going to be there!

Thursday began with an interesting keynote about the Chapel parallel programming language. The talk has encouraged me to try the language out and I’ll certainly be having a good play with that soon.

Peter Hilton’s Documentation for Developers workshop

Thursday’s stand out talks included the Documentation for Developers workshop by Peter Hilton. I really enjoyed the workshoppy style that Peter used to deliver the talk. He got the audience working in groups, talking to each other and essentially complaining about documentation. He finished by suggesting a method of writing docs called Readme Driven Development, among other suggestions.

The other talk on Thursday which I really loved was “The C++ Type System is your Friend”. Hubert Matthews was a great speaker with clear experience in explaining a complex topic in an easier to understand fashion.
I can’t say I understood everything, but I certainly liked listening to Hubert speak.

Thursday evening I headed out for dinner with Anna-Jayne and Beth before heading back to my accommodation to write up a last minute talk for Friday.

My talk covered Intel Software Guard Extensions — Russel announced that there was an open slot on Friday for a 15 minute topic and I took the chance to speak then.

Friday began with a curious but thought provoking talk from Fran Buontempo called AI: Actual Intelligence.
I’m not entirely sure what the take away from the talk was intended to be, but nonetheless it was interesting!

Friday morning was full of 15 minute talks. A format I think is wonderful.
I really loved that amongst the 90 min talks throughout the rest of the week, there was time for these quick fire shorter talks too that were still serious technical talks (unlike the 5min lightning talks).

The talks I went to see were:

Odin Holmes talks about Named Parameters

At Friday lunch time I took part in a bit of an unplanned workshop on sketch noting with Michel Grootjans. It was essentially an hour of trying to make our notes prettier!
It was a lot of fun.

Sketch Noting with Michel Grootjans

Friday was the conference dinner — a rock themed night of fun and frivolities.

This was by far the high point of the conference for me.
It offered a great evening of meeting people and having a lot of fun.
I loved how everyone loosened up and spoke to anyone else there.

I met a whole bunch of people, and got on super well with a few people who I would like to consider friends now.

ACCU made it easy to get to know people too by forcing everyone who isn't a speaker to move tables between each meal course. It's a great idea!

Odin enjoys inflatable instruments

Saturday’s talks started with a really fun talk from Arjan van Leeuwen about string handling in C++1x, covering the differences between char arrays and std::strings and how best to use them, as well as tantalising us with a C++17 feature called std::string_view (an immutable view of a string).

Later I watched a talk from Anthony Williams and another from Odin, both of which went wildly over my head, but all the same I gained a few things from both of them.

Finishing off the conference was a brilliant keynote from renowned speaker and member of the ISO C++ standards committee Herb Sutter.

Herb Sutter on Metaclasses at ACCU 2017

Herb introduced a new feature of C++ that he may be proposing to the standards committee.
He described a feature allowing one to create meta-classes.

Essentially, one could describe a template of a class with certain interfaces, data and operators; one could then implement an instance of that class, defining all of its functionality.
It's a way to more cleanly describe something akin to inheritance with virtual functions.

I highly suggest you try to catch the talk, since it was so interesting that even an hour or so after the talk there was still quite a crowd of people gathered around Herb asking him questions.

Herb is surrounded by curious programmers

The Conference Environment

As a first time ACCU attendee, I couldn't write a useful blog post without a few words about the environment at the conference.

As most of my readers know, I’m a young trans woman, so a safe and welcoming environment is something that I very much appreciate and that makes a huge difference to my experience of an event.

It's something that's super hard to achieve in a world like software development, where the workforce is predominantly male.

I’m glad to say that ACCU did a great job of creating a safe and welcoming space. Though the attendees were predominantly male, as expected, everyone I encountered was not only friendly and helpful, but ever so willing to go out of their way to make me feel welcome and comfortable.
Everyone I met simply accepted me for me and didn't treat me any way other than friendly.

I would suggest that ACCU offering diversity tickets would help me feel even better there, since I'd be more comfortable with a more diverse set of delegates.

I was especially comforted by Russel mentioning the code of conduct, without fail, every day of the conference, and by one of the lightning talks, delivered by a man, which took the form of a spoken-word-ish piece praising the welcoming nature of ACCU and calling for that welcome to be maintained for all people in the community, not just people like himself.

I’d like to especially mention Julie and the Archer-Yates team for checking up on my happiness throughout the conference; they really helped me feel safe there.

I think there could still be work to do on making the conference a good place for younger adults — I was rather overwhelmed by the fact that everyone seemed older than me and clearly had a better idea of how to conduct themselves in the conference setting.
However, I think the only real way of solving this problem would be to make the conference easier for younger people to access (i.e. cheaper tickets for students; it's still super expensive), which wouldn't always be possible. Additionally, the inclusion of some simpler, easier to understand talks would have been great. Lots of the talks were very complicated and easily got to a level that was way over my head.

Thanks to everyone who helped me feel welcome at ACCU — including but not limited to Richard, Antonello, Anna-Jayne, Beth, Jackie, Fran, Russel and Odin.

In conclusion

ACCU was a fantastic experience for me. I would highly recommend it to anyone interested in improving their C and C++ programming skills as well as general programming skills.
I’ll certainly be heading back next year if I can, and am happily a registered ACCU member now!

Dumbing Down Code

Chris Oldwood from The OldWood Thing

A conversation with a colleague, which was originally sparked off by “C# BAD PRACTICES: Learn how to make a good code by bad example” (an article about writing clear code), reminded me of a change I had made recently. When I reflected back I began to consider whether I had just “dumbed down” the code instead of keeping my original, possibly simpler, version. I also couldn’t decide whether the reason I had changed it was because I was using an old idiom which might now be unfamiliar or because I just didn’t think it was worth putting the burden on the reader.

Going Loopy

The change I made was to add a simple retry loop to some integration test clean-up code that would occasionally fail on a laggy VM. The loop just needed to retry a small piece of code a few times with a brief pause in between each iteration.

My gut instinct was to write the loop like this:

int attempt = 5;

while (attempt-- > 0)
{
  . . .
}

Usually when I need to write a “for” loop these days, I don’t. By far the most common use case is to iterate over a sequence and so I’d use LINQ in C# or, say, a pipeline (or foreach) in PowerShell. If I did need to manually index an array (which is rare) I’d probably go straight for a classic for loop.

After I wrote and tested the code I took a step back just to review what I’d written and realised I felt uncomfortable with the while loop. Yes, this was only test code, but I don’t treat it as a second class citizen; it still gets the same TLC as production code. So what did I not like?

  • While loops are much rarer than for loops these days.
  • The use of the pre and post-decrement operators in a conditional expression is also uncommon.
  • The loop counts down instead of up.

Of these the middle one – the use of the post-decrement operator in a conditional expression – was the most concerning. Used by themselves (in pre or post form) just to mutate a variable, such as in the final clause of a for loop [1], these operators seem trivial to comprehend, but once you include one in a conditional statement the complexity goes up.

There is no difference between --attempt and attempt-- when used as a simple arithmetic adjustment, but when used in a conditional expression it matters whether the value is decremented before or after the comparison is made. If you get it wrong the loop executes one time fewer than you expect (or vice-versa) [2].

Which leads me to my real concern – how easy is it to reason about how many times the loop executes? Depending on the placement of the decrement operator it might be one less than the number “attempt” is initialised with at the beginning.

Of course you can also place the “while” comparison at the end of the block which means it would execute at least once, so subconsciously this might also cause you to question the number of iterations, at least momentarily.

Ultimately I know I’ve written the loop so that the magic number “5” which is used to initialise the attempt counter represents the maximum number of iterations, but will a reader trust me to have done the same? I think they’ll err on the side of caution and work it out for themselves.

Alternatives

The solution I went with in the end was this:

const int maxAttempts = 5;

for (int attempt = 0; attempt != maxAttempts; ++attempt)
{
  . . .
}

Now clearly this is more verbose, but not massively more verbose than my original “while” loop. However the question is whether (to quote Sir Tony Hoare [3]) it obviously contains fewer bugs than the original. Being au fait with both forms it’s hard for me to decide, so I’m trying to guess that the reader would prefer the latter.

Given that we don’t care what the absolute value of “attempt” is, only that we execute the loop N times, I did consider some other approaches, at least momentarily. Both of these examples use methods from the Enumerable class.

The first generates a sequence of 5 numbers starting from the “arbitrary” number 1:

const int maxAttempts = 5;

foreach (var _ in Enumerable.Range(1, maxAttempts))
{
  . . .
}

The second also generates a sequence of 5 numbers but this time by repeating the “arbitrary” number 0:

const int maxAttempts = 5;

foreach (var _ in Enumerable.Repeat(0, maxAttempts))
{
  . . .
}

In both cases I have dealt with the superfluous naming of the loop variable by simply calling it “_”.

I discounted both these ideas purely on the grounds that they’re overkill. It just feels wrong to be bringing so much machinery into play just to execute a loop a fixed number of times. Maybe my brain is addled from too much assembly language programming in my early years but seemingly unnecessary waste is still a hard habit to shake.

As an aside there are plenty of C# extension methods out there which people have written to try and reduce this further so you only need write, say, “5.Times()” or “0.To(5)” but they still feel like syntactic sugar just for the sake of it.

Past Attempts

This is not the first time I’ve questioned whether it’s possible to write code that’s perhaps considered too clever. Way back in 2012 I wrote “Can Code Be Too Simple?” which looked at some C++ code I had encountered 10 years ago and which first got me thinking seriously about the subject.

What separates the audience back then and the one now is the experience level of the programmers who will likely tackle this codebase. A couple of years later in “Will Your Successor Be a Superstar Programmer?” I questioned whether you write code for people of your own calibre or the (inevitable) application support team who have the unenviable task of having to be experts at two disciplines – support and software development. As organisations move towards development teams owning their services this issue is diminishing.

My previous musings were also driven by my perception of the code other people wrote, whereas this time I’m reflecting solely on my own actions. In particular I’m now beginning to wonder if my approach is in fact patronising rather than aiding? Have I gone too far this time and should I give my successors far more credit? Or doesn’t it matter as long as we just don’t write “really weird” code?

 

[1] In C++ iterators can be implemented as simple pointers or complex objects (e.g. STL container iterators in debug builds) and so you tend to be aware of the difference because of the performance impacts it can have.

[2] I was originally going to write while (attempt-- != 0), again because in C++ you normally iterate from “begin” to “!= end”, but many devs seem to be overly defensive and favour using the > and < comparison operators instead.

[3] “There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.” – Sir Tony Hoare.

Naming is hard – or is it?

Phil Nash from level of indirection

Following Peter Hilton's excellent ACCU talk, at last week's conference in Bristol, "How to name things - the hardest problem in programming", a few of us were discussing some of the points raised - and some not raised.

He had discussed identifier length without any mention of Uncle Bob's guideline, whereby the length of a variable name should be proportional to its scope (i.e. large or global scopes need longer, descriptive, names whereas in smaller, local, scopes shorter, more concise - even single letter - names are appropriate). This seemed all the more of an omission given that he later referenced the book, Clean Code.

It wasn't that Peter disagreed with Uncle Bob (who doesn't, half the time?) that surprised me but that he didn't even mention it in passing. I thought it was fairly well known. Actually I double checked and it is not discussed fully in the book, which only says, "The length of a name should correspond to the size of its scope". This is expanded considerably in Clean Coders (video) episode 2. Also, of course, this is not really "Uncle Bob's rule". Kevlin Henney recalls that he first heard of it in the 90s and it may well have been kicking around before that. Bob calls it "The Scope Rule".

Kevlin was one of those discussing this afterwards. After initially toying with The Scope Rule in the 90s he came to consider it not particularly useful. This, too, surprised me as I had found it worked quite well for me. Or so I thought. Further discussion with Kevlin led to the conclusion that I had read more of my own interpretation into The Scope Rule than I had realised! So I started musing over exactly what my interpretation was.

A transparent reference

As it happened, a concept key to clarifying matters came from another great talk at the same conference just the day before: Didier Verna's "Referential transparency is overrated". In this talk Didier discussed various ways that useful idioms in Lisp required violating referential transparency. At one point he explained how "hidden" variables may be introduced by one macro that were then used by another. This worked because the thing being referred to was named very generically - so both macros agreed on the name. He drew on the term "Anaphora" from linguistics, which is where one part of an expression - usually a pronoun - stands in for a more specialised part - such as a person's name - introduced earlier in the context. For example, just now I used the word, "he", to refer to Didier Verna. It was clear who I was talking about because his was the most recently specified name in the current context. In fact I used this anaphoric term a couple of times - and many, many times in this article. If I had had to fully qualify "Didier Verna" every time, writing would quickly become very cumbersome. Anaphora is used very frequently in natural language - usually to good effect.

Scope Creep

I believe this is key to understanding why and when shorter identifiers can be used too. When I had been talking with Kevlin it had become apparent that we had different interpretations of the word, "scope". I realised that I had subconsciously expanded the specific technical meaning to include a more general idea of "context" - including the anaphoric context.

To make this clear I might write some (C++) code like this:

	std::string s = getNextString();
	if( !s.empty() )
		std::cout << "received string: " << s << std::endl;

Many corporate, or personal, coding standards would balk at such practice! Single character identifiers? Way to obfuscate the code!

But how has it obfuscated anything? Look at it as an anaphoric entity. In this case the variable name 's' is anaphoric. We know it is "the next string" because we saw it being introduced by the function call, "getNextString()". We then use it twice on the next couple of lines. There are no other strings being introduced in the same vicinity to confuse it with, and the context in which it is used is kept small. There is no ambiguity and the full identity is revealed in the immediate vicinity.

Sustainability

But what if we add more code, or move parts of this elsewhere? Certainly code evolves over time in ways that can make things less clear if we don't change them. That's true regardless. Naming of the entities at play should always form part of your consideration when refactoring or otherwise modifying existing code. Does it make this code less "sustainable" (to reference another property that Kevlin likes to talk about)? I don't think so. In the worst case, if you don't immediately notice that a short name has become unclear because its usage has drifted out of anaphoric range, you'll notice the next time you look at it and, momentarily, think "why is there a free variable called 's' here? What on earth is that?" You'll take a moment to find its original declaration, work out what it is, then decide to encode that in the name by renaming the variable at that point. Variable renaming is one of the safest and most ubiquitous refactorings around so I have no qualms about deferring such identity expansion to such a time as it is needed.

Why?

But what about the other side of the argument? Is there any advantage to using a short, even single character, variable name in the first place?

This is often cast as a matter of optimising for typing speed - in a world where we typically read these names many, many times more than we write them.

While introducing even small speed bumps to writing code might discourage spending more time than necessary writing code (which in turn may discourage certain refactorings), it's not really about typing performance at all - it's about readability! Consider again the linguistic definition of anaphora: substituting an unambiguous, subsequent reference to an entity with a shorter form (e.g. a pronoun) that means the same thing. We do this all the time in natural speech and the written word. Why? Because it would sound unnatural and cumbersome to fully qualify every entity we talk about all the time!

The same applies in programming. Where it is perfectly clear from the immediate context what an identifier refers to, using greater verbosity actually increases the cognitive friction! The more unnecessary and redundant noise and ceremony we can strip away from our code, the easier it will be to read, in a shorter period of time. The fact that anaphora is so common in natural language should give us a clue as to our ability to code with its use in a natural and efficient way.

I've only mentally organised my thoughts around this as a result of ruminating on those two talks - and some of the offshoot discussions - but I realise this is essentially how I had interpreted The Scope Rule. Now that I've worked it through, when I go back and compare it with what Mr Martin actually said, his version sounds like a poor proxy for the anaphoric interpretation.

So naming - good naming - is still hard. We've only just discussed one narrow aspect here. But perhaps this has made some of it that little bit easier.