Example of a systemd service file

Andy Balaam from Andy Balaam's Blog

Here is an almost-minimal example of a systemd service file that I use to run the Mastodon bot of my generative art playground, Graft.

I made a dedicated user just to run this service, and installed Graft into /home/graft/apps/graft under that username. Now, as root, I edited a file called /etc/systemd/system/graft.service and made it look like this:

[Service]
ExecStart=/home/graft/apps/graft/bot-mastodon
User=graft
Group=graft
[Install]
WantedBy=multi-user.target
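
One thing to remember: after creating or editing a unit file like this, systemd needs to be told to re-read its configuration:

sudo systemctl daemon-reload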

Now I can start the graft service just like any other service:

sudo systemctl start graft

and find out its status with:

sudo systemctl status graft

If I want it to run on startup I can do:

sudo systemctl enable graft

and it will. Easy!

If I want to look at its output, it’s:

sudo journalctl -u graft
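
or, to follow the output live while the bot is running:

sudo journalctl -u graft -f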

As a reward for reading this far, here’s a little animation you can make with Graft:

Installing Flarum on Ubuntu 18.04

Andy Balaam from Andy Balaam's Blog

I am setting up a forum for sharing levels for my game Rabbit Escape, and I have decided to try and use Flarum, because it looks really usable and responsive, has features we need like liking posts and following authors, and I think it will be reasonably OK to write the custom features we want.

So, I want a dev environment on my local Ubuntu 18.04 machine, and the first step to that is a standard install.

Warning: at the time of writing the Flarum docs say it does not work with PHP 7.2, which is what is included with Ubuntu 18.04, so this may not work. (So far it looks OK for me.)

Here’s how I got it working (as far as the web installer stage, anyway):

sudo apt install \
    apache2 \
    libapache2-mod-php \
    mariadb-server \
    php-mysql \
    php-json \
    php-gd \
    php-tokenizer \
    php-mbstring \
    php-curl

php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"

# Get the neat line from https://getcomposer.org/download/
# Don't copy it exactly!
php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"

mkdir -p ~/bin
# Note: the shell won't expand "~" inside --install-dir=~/bin/, so use $HOME
php composer-setup.php --install-dir="$HOME/bin" --filename=composer
rm composer-setup.php

cd /var/www/html
sudo mkdir flarum
sudo chown $(whoami) flarum

# Log out and in again here to get composer to be in your PATH
cd flarum
composer create-project flarum/flarum . --stability=beta

sudo chgrp -R www-data .
sudo chmod -R 775 .

sudo systemctl restart apache2

Go to http://localhost/flarum in your browser, and follow the instructions there to get set up.

If I get further, I will update this post, including on how to set up the MySQL database.
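
For what it's worth, I expect the database side to look roughly like this (just a sketch: the database name, user and password here are placeholders I made up, not anything Flarum requires):

sudo mysql

-- at the MariaDB prompt:
CREATE DATABASE flarum;
CREATE USER 'flarum'@'localhost' IDENTIFIED BY 'choose-a-password';
GRANT ALL PRIVILEGES ON flarum.* TO 'flarum'@'localhost';
FLUSH PRIVILEGES;

and then point the web installer at that database.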

If you want to find and share levels for Rabbit Escape, check up on our progress setting up the forum at https://artificialworlds.net/rabbit-escape/levels.

How to write a programming language articles

Andy Balaam from Andy Balaam's Blog

Recent Overload journal issues contain my new articles on How to Write a Programming Language.

Part 1: How to Write a Programming Language: Part 1, The Lexer

Part 2: How to Write a Programming Language: Part 2, The Parser

PDF of the latest issue: Overload 146 containing part 2.

This is all creative-commons licensed and developed in public at github.com/andybalaam/articles-how-to-write-a-programming-language

It Compiles, Ship It!

Chris Oldwood from The OldWood Thing

The method was pretty simple and a fairly bog-standard affair: it just attempted to look something up in a map and return the associated result, e.g.

public string LookupName(string key)
{
  string name;

  if (!customers.TryGetValue(key, out name))
    throw new Exception("Customer not found");

  return name;
}

The use of an exception here to signal failure implied to me that this really shouldn’t happen in practice unless the data structure is screwed up or some input validation was missed further upstream. Either way you know (from looking at the implementation) that the outcome of calling the method is either the value you’re after or an exception will be thrown.

So I was more than a little surprised when I saw the implementation of the method suddenly change to this:

public string LookupName(string key)
{
  string name;

  if (!customers.TryGetValue(key, out name))
    return null;

  return name;
}

The method no longer threw an exception on failure; it now returned a null string reference.

This wouldn’t be quite so surprising if all the call sites that used this method had also been fixed-up to account for this change in behaviour. In fact what initially piqued my interest wasn’t that this method had changed (although we’ll see in a moment that it could have been expressed better) but how the calling logic would have changed.

Wishful Thinking

I always approach a change from a position of uncertainty. I’m invariably wrong or have something to learn, either from a patterns perspective or a business logic one. Hence my initial assumption was that I now needed to think differently about what happens when I need to “lookup a name” and that lookup fails. Where before it was truly exceptional and should never occur in practice (perhaps indicating a bug somewhere else) it’s now more likely and something to be formally considered, and resolving the failure needs to be handled on a case-by-case basis.

Of course that wasn’t the case at all. The method had been changed to return a null reference because it was now an implementation detail of another new method which didn’t want to use catching an exception for flow control. Instead they now simply check for null and act accordingly.

As none of the original call sites had been changed to handle the new semantics, a rich exception thrown early had now been traded for (at best) a NullReferenceException later or (worst case) no error at all and an incorrect result calculated based on bad input data [1].

The TryXxx Pattern

Coming back to reality it’s easy to see that what the author really wanted here was another method that allowed them to attempt a lookup on a name, knowing that in their scenario it could possibly fail but that’s okay because they have a back-up plan. In C# this is a very common pattern that looks like this:

public bool TryLookupName(string key, out string name)

Success or failure is indicated by the return value and the result of the lookup returned via the final argument. (Personally I’ve tended to favour using ref over out for the return value [2].)
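
Applied to the lookup above, a minimal sketch of that shape (reusing the customers map from the earlier snippet) would be:

public bool TryLookupName(string key, out string name)
{
  return customers.TryGetValue(key, out name);
}

The failure case is then explicit at every call site, rather than being smuggled through a null reference.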

The Optional Approach

While statically typed languages are great at catching all sorts of type-related errors at compile time, they cannot catch the problem of smuggling an optional value through a reference type in languages like C# and Java by using a null reference. Any reference-type value in C# can inherently be null, and therefore the compiler is at a loss to help you.

JetBrains’ ReSharper has some useful annotations which you can use to help their static analyser point out mistakes or elide unnecessary checks, but you have to add noisy attributes everywhere. However, expressing your intent in code is the goal, and it’s one valid and very useful approach.

Winding the clock into the future we have the new “optional reference” feature to look forward to in C# (currently in preview). Rather than bury their heads in the sand the C# designers have worked hard to try and right an old wrong and reduce the impact of Sir Tony Hoare’s billion dollar mistake by making null references type unsafe.

In the meantime, and for those of us working with older C# compilers, we still have the ability to invent our own generic Optional<> type that we can use instead. This is something I’ve been dragging into C# codebases for many years (whilst standing on my soapbox [3]) in an effort to tame at least one aspect of complexity. Using one of these would have changed the signature of the method in question to:

public Optional<string> LookupName(string key)

Now all the call sites would have failed to compile and the author would have been forced to address the effects of their change. (If there had been any tests you would have hoped they would have triggered the alarm too.)
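
For illustration, a bare-bones version of such a type (a sketch, not the exact one I use) might look like this:

using System;

// Either holds a value or it doesn't.
public struct Optional<T>
{
  private readonly T _value;
  private readonly bool _hasValue;

  private Optional(T value) { _value = value; _hasValue = true; }

  public static Optional<T> Some(T value) { return new Optional<T>(value); }
  public static Optional<T> None() { return default(Optional<T>); }

  public bool HasValue { get { return _hasValue; } }

  // Asking for a value that isn't there is a programming error.
  public T Value
  {
    get
    {
      if (!_hasValue)
        throw new InvalidOperationException("Optional has no value");
      return _value;
    }
  }
}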

Fix the Design, Not the Compiler

Either of these two approaches allows you to “lean on the compiler” and leverage the power of a statically typed language. This is a useful feature to have but only if it’s put to good use and you know where the limitations are in the language.

While I would like to think that people listen to the compiler I often don’t think they hear it [4]. Too often the compiler is treated as something to be placated, or negotiated with. For example if the Optional<string> approach had been taken the call sites would all have failed to compile. However this calling code:

var name = LookupName(key);

...could easily be “fixed” by simply doing this to silence the compiler:

var name = LookupName(key).Value;

For my own Optional<> type we’d just have switched from a possible NullReferenceException on lookup failure to an InvalidOperationException. Granted this is better as we have at least avoided the chance of the null reference silently making its way further down the path but it doesn’t feel like we’ve addressed the underlying problem (if indeed there has even been a change in the way we should treat lookup failures).

Embracing Change

While the Optional<> approach is perhaps more composable, the TryXxx pattern is more invasive, and that probably has value in itself. Changing the signature and breaking compilation is supposed to put a speed bump in your way so that you consider the effects of your potential actions. In this sense the more invasive the workaround the more you are challenged to solve the underlying tension with the design.

At least that’s the way I like to think about it but I’m afraid I’m probably just being naïve. The reality, I suspect, is that anyone who could make such a change as switching an exception for a null reference is more concerned with getting their change completed rather than stopping to ponder the wider effects of what any compiler might be trying to tell them.

 

[1] See Postel’s Law and consider how well that worked out for HTML.

[2] See “Out vs Ref For TryXxx Style Methods”.

[3] C# already has a “Nullable” type for optional value types, so I find it odd that C# developers find the equivalent type for reference types so peculiar. Yes, it’s not integrated into the language, but I find it’s usually a disconnect at the conceptual level, not a syntactic one.

[4] A passing nod to the conversation between Woody Harrelson and Wesley Snipes discussing Jimi Hendrix in White Men Can’t Jump.

Ideas on how lexing will work in Pepper3

Andy Balaam from Andy Balaam's Blog

I am trying to practice documentation-driven development in Pepper3, so every time I start on an area, I will write documentation explaining how it works, and include examples that are automatically verified during the build.

I’ve started work on lexing, since you can’t do much before you do that, but in fact, of course, I need to have a command line interface before I can verify any of the examples, so I’m working on that too.

Lexing is the process that takes a stream of characters (e.g. from a file) and turns it into a stream of “tokens” that are chunks of code like a variable name, a number or a string. (There is more on lexing in my mini programming language, Cell.)
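
For example (purely as an illustration, not actual Pepper3 syntax or output), the characters:

foo = 12 + bar

might come out of the lexer as a stream of tokens something like:

symbol("foo"), operator("="), number("12"), operator("+"), symbol("bar")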

My thoughts so far about lexing are in lexing.md, and current ideas about command line interface are at command_line.md. All very much subject to change.

Headlines:

  • Ordinary programmers can write their own lexing rules.
  • Operators (functions like “+” that find their arguments on their left and right, instead of between brackets like normal functions) are defined at the lexing phase, so any symbol (e.g. “in”) can be an operator if you want.
  • Anything you might want to do with a pepper program, including running it, compiling it, packaging it for a distribution system, should be available as a sub-command of the main pepper3 command line.
  • The command is “pepper3”, never “pepper”. If a new, incompatible version comes out, it will be called “pepper4”, and they will be parallel-installable, with no confusion.

Questions and answers about Pepper3

Andy Balaam from Andy Balaam's Blog

Series: Examples, Questions

My last post Examples of Pepper3 code was a reply to my friend’s email asking what it was all about. They replied with some questions, and I thought the questions and answers might shed some more light:

Questions!

Brilliant ones, thanks.

In general though you’ve said a lot about what Pepper can do without giving design decisions.

Yep, total brain dump.

Remind me again who this language is for :)

It’s a multi-paradigm (generic, functional, OO) language aimed at application programmers who want:

  • “native” performance on their chosen platform (definitely including actual native machine code). This is inspired by C++.
  • easy deployment (preferably a single binary containing everything, with an option to link most dependencies statically), including packaging of installers for major OSes. This is inspired by C++, and the pain of C++.
  • perfect flexibility for creating types – “meta-programming” is just programming. Things you would have done using code generation (e.g. generating a class hierarchy from an XSD) are done by running arbitrary code at compile time. The powerful type system is inspired by Haskell and the book “Modern C++ Design”, and the meta-programming is inspired by Lisp.
  • Simple memory management without GC through ownership. This is inspired by modern C++, and then Rust came along and implemented it before I could, thus proving it works. However, I would remove a lot of the functionality in Rust (lifetimes) to make it much simpler.
  • Strong support for functional programming if you want it. This is inspired by Haskell.
  • The simplest possible core language, with application programmers able to expand it by giving them the same tools as the language designers – e.g. “for” is just a function, so you can make your own. I am hoping I can even make “class” a function. This is inspired by Lisp, and oppositely-inspired by Java.
  • Separation between the idea of Interfaces, which I think I will call “type specifiers” (and will allow arbitrary code execution to determine whether a type satisfies the requirements) and structs/classes, allowing us to make new Interfaces and have old code satisfy them, meaning we can do generic stuff with e.g. ints even if no-one declared that “class Int : public Quaternion” or whatever.
  • Lots of “nudges” towards things that are good: by default things will be functional and immutable – you will have to explicitly say if you want to use more dangerous constructs like side effects and mutable values.
  • No implicit conversions, or really anything happening without you saying so.

Can you assign floats to ints or vice versa?

Yes, but you shouldn’t.

If you’re setting types in code at the start of a file, is this only available in the main file? Are there multiple files per program? Can you have libraries? If so, do these decide the functionality of their types in the library or does this only happen in the main file?

I haven’t totally decided – either by being enforced, or as a matter of style, you will generally do this once at the beginning of the program (and choose on the compiler command line to do it e.g. the debug way or the release way) and it will affect all of your code.

Libraries will be packaged as Pepper3 source code, so choices you make of the type of Int etc. will be reflected through the whole dependency tree. Cool, huh?

This is inspired by Python.

Can you group variables together into structs or similar?

Yes – it will be especially easy to make “value types”, and lots of default methods will be provided, that you will be strongly encouraged to use – e.g. copy and move operations. This is inspired by Elm.

Why are variables immutable by default but mutable with a special syntax? It’s the opposite of C++ const, but why that way around?

This is one of the “nudges” – immutable stuff is much easier to think about, and makes parallel stuff easier, and allows optimisations and so on, so turning it on by default means you have to choose to take the bad path, and are inclined to take the virtuous one. This is inspired by Haskell and Rust.

Why only allow assignments, function calls and operators? I’m sure you have good reasons.

To be as simple as possible, so you only have those things to learn and the rest can be understood by just reading the code. This is inspired by Python.

I wrote more of my (earlier) thoughts in this 4-post series, which is better thought through: Goodness in Programming Languages

Constantly Confusing: C++ const and constexpr pointer behaviour

Samathy from Stories by Samathy on Medium

A quick explanation of how const and constexpr work on pointers in C++

So I was checking that my knowledge was correct when working on a Firefox bug.
I made a quick C++ file with all the examples I know of how to use const and constexpr on pointers.
As one can see, it’s pretty confusing!

Because there are several places in a statement where you can put ‘const’, it can be complicated to work out which part of your statement the ‘const’ is referring to.
Generally, it’s best to read from right to left to work it out, i.e.:

static const char * const hello;

Would read like:

hello (is a) const pointer (to) const char

But, that takes a bit of practice.

C++’s constexpr brings another new dimension to the problem too!
It behaves like const in the sense that it makes all pointers constant pointers.
But because it occurs at the start of your statement (rather than after the ‘*’) it’s not immediately obvious.

Here’s my list of all the ways you can use const and constexpr on pointers and how they behave.
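
In outline (a simplified sketch rather than the full file):

#include <cstdio>

static int x = 0;
static const int cx = 0;

int main()
{
    const int * ptr_to_const = &cx;              // pointee is const, pointer can be reassigned
    int * const const_ptr = &x;                  // pointer is const, pointee can be modified
    const int * const const_ptr_to_const = &cx;  // neither can change
    constexpr int * ce_ptr = &x;                 // constexpr makes the pointer itself constant (like int * const)
    constexpr const int * ce_ptr_to_const = &cx; // constant pointer to const int

    ptr_to_const = &cx;    // fine: the pointer itself isn't const
    *const_ptr = 1;        // fine: the pointed-to int isn't const
    // *ptr_to_const = 1;  // error: pointee is const
    // const_ptr = &x;     // error: pointer is const
    // ce_ptr = &x;        // error: constexpr pointer is implicitly const

    std::printf("%d %d %d %d %d\n",
                *ptr_to_const, *const_ptr, *const_ptr_to_const, *ce_ptr, *ce_ptr_to_const);
}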

Working with PDF Highlight Annotations Programmatically

Samathy from Stories by Samathy on Medium

PDFs are the format of choice in academia, but extracting the information they contain is annoyingly hard.

I’ve just started working on my degree’s final project. An academic project requires lots of research, which means reading lots of papers.
Papers are normally available in one form only, PDF.

While PDF is a format so ubiquitous nowadays that one can guarantee being able to display it as the writer(s) intended, it’s not a nice format, as I found out as soon as I needed to do something with it.

During the course of my research, I’ve been using PDF’s highlight annotations to highlight parts of a paper that’re particularly interesting.
I wanted to be able to retrieve the highlighted text at a later date so I didn’t have to open the paper again to find the parts I found interesting when I read it the first time.

You’d think that exporting annotations on text would be something that all PDF readers which support annotations (most of them do) would be capable of. I mean, surely it’s easy enough even if there aren’t that many reasons why you’d want to do it.

Alas, none that I found running on Linux had this feature, so I delved into trying to write something to do what I needed.

I based my project on a tool I found in a StackOverflow answer to a question similar to mine.
The Python code in the answer utilises poppler-qt4 to export annotated text from a PDF. Unfortunately, the code is Python2 and the python poppler-qt4 package wouldn't install properly on my system anyway, even after installing the poppler-qt4 package.
Neither did Python’s poppler-qt5 bindings.

Convinced I could do a better job than a Python 2 script which depended on a package last updated in 2015, I translated the answer into the equivalent in C++.

I started with trying to use poppler-cpp, the C++ bindings for poppler where one has objects and namespaces, and none of the guff associated with GUI frameworks that I wouldn't need here. However, to my dismay, poppler-cpp doesn't support annotations at all. For whatever reason, annotation support only works with the bindings to a GUI framework, like glib or QT.

So instead I used poppler-glib (i.e. GLib, from the GNOME project). Purely because I use GNOME, so I wouldn't have to install anything extra.

Now, the PDF format is really odd. Annotations seem to be an after-thought to the format tacked on later.
Specifically highlighting is weird, because a highlight annotation has no connection to the document’s text.
As such, poppler’s poppler_annot_get_contents(PopplerAnnot *) which should return the annotation’s contents, returns nothing.
Instead, to get the text associated with a highlight annotation, one has to get the coordinates of the highlight annotation (A PopplerRectangle) and then utilise the function poppler_page_get_text_for_area(PopplerPage*, PopplerRectangle*) which returns the text in a defined area.
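
A trimmed-down sketch of that dance with poppler-glib (not the actual code in the repository linked below, and the y-coordinate flip is just my understanding of how the rectangles line up) looks something like this:

// build: gcc extract.c $(pkg-config --cflags --libs poppler-glib)
#include <poppler.h>
#include <glib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file:///path/to/paper.pdf\n", argv[0]);
        return 1;
    }

    GError *error = NULL;
    // poppler wants a URI here, e.g. file:///home/me/paper.pdf
    PopplerDocument *doc = poppler_document_new_from_file(argv[1], NULL, &error);
    if (doc == NULL) {
        fprintf(stderr, "%s\n", error->message);
        return 1;
    }

    int n_pages = poppler_document_get_n_pages(doc);
    for (int i = 0; i < n_pages; ++i) {
        PopplerPage *page = poppler_document_get_page(doc, i);
        double height = 0;
        poppler_page_get_size(page, NULL, &height);

        GList *mappings = poppler_page_get_annot_mapping(page);
        for (GList *l = mappings; l != NULL; l = l->next) {
            PopplerAnnotMapping *m = (PopplerAnnotMapping *) l->data;
            if (poppler_annot_get_annot_type(m->annot) != POPPLER_ANNOT_HIGHLIGHT)
                continue;

            // The highlight's rectangle seems to use PDF coordinates (origin at
            // the bottom-left), while the text-extraction area is measured from
            // the top of the page, so flip the y values before asking for text.
            PopplerRectangle area = m->area;
            double y1 = area.y1, y2 = area.y2;
            area.y1 = height - y2;
            area.y2 = height - y1;

            gchar *text = poppler_page_get_text_for_area(page, &area);
            if (text != NULL) {
                printf("%s\n", text);
                g_free(text);
            }
        }
        poppler_page_free_annot_mapping(mappings);
        g_object_unref(page);
    }

    g_object_unref(doc);
    return 0;
}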

What an entirely baffling way to go about implementing highlighting. Attaching it as purely a visual element, rather than actually marking up the text.

Even more baffling is the fact that although my application works, it only mostly works.
Sometimes I get the full text highlighted, other times it chops off characters, and sometimes it adds things that’re nowhere near the highlighted text at all!
This is a problem I’m yet to solve, and I might never solve, because it’s ridiculous and the tool mostly does what I needed anyway.

In conclusion: the PDF format is weird, and I wrote a thing.
If you use it, let me know how it goes!

https://github.com/Samathy/pdfcommentextractor

HTML5 CSS Toolbar + zoomable workspace that is mobile-friendly and adaptive

Andy Balaam from Andy Balaam's Blog

I have been working on a prototype level editor for Rabbit Escape, and I’ve had trouble getting the layout I wanted: a toolbar at the side or top of the screen, and the rest a zoomable workspace.

Something like this is very common in many desktop applications, but not that easy to achieve in a web page, especially because we want to take care that it adapts to different screen sizes and orientations, and, for example, allows zooming the toolbar buttons in case we find ourselves on a device with different resolution from what we were expecting.

In the end I’ve gone with a grid-layout solution and accepted the fact that sometimes on mobile devices when I zoom in my toolbar will disappear off the top/side. When I scroll back to it, it stays around, so using this setup is quite natural. On the desktop, it works how you’d expect, with the toolbar staying on screen at all zoom levels.

Here’s how it looks on a landscape display:

and portrait:

Read the full source code.

As you can see from the code linked above, after much fiddling I managed to achieve this with a relatively small amount of CSS, and no JavaScript. I’m hoping it will behave well in unexpected scenarios, because the code expresses what I want fairly closely.

The important bits of the HTML are simple – a main div, a toolbar containing buttons, and a workspace containing some kind of work:

<div id="main">
    <div id="toolbar">
        <button></button><button></button><button></button><button></button><button></button><button></button><button></button><button></button>
    </div>
    <div id="workspace">
        <div id="work">
        </div>
    </div>
</div>

The key bits of the CSS are:

/* Ensure we take up the full height of the page. */
html, body, #main
{
    height: 100%;
}

@media all and (orientation:landscape)
{
    /* On a wide screen, it's a grid with 2 columns,
       and the toolbar can scroll downwards. */
    #main
    {
        display: grid;
        grid-template-columns: 5em 1fr;
    }
    #toolbar
    {
        overflow-x: hidden;
        overflow-y: auto;
    }
}

@media all and (orientation:portrait)
{
    /* On a tall screen, it's a grid with 2 rows,
       and the toolbar can scroll right. */
    #main
    {
        display: grid;
        grid-template-rows: 5em 1fr;
    }
    #toolbar
    {
        overflow-x: auto;
        overflow-y: hidden;
        white-space: nowrap;
    }
}

That replaces an awful lot of code in my first attempt, so I’m reasonably happy. If anyone has suggestions about how to make “100%” really mean 100% of the real device width and height, let me know. If I do some JavaScript I can make Mobile Firefox fit to the real screen size, but Mobile Chrome (and, I assume, Mobile Safari) lie to me about the screen size when zoomed in.
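
One possible avenue, assuming it’s not already in the page, is the standard viewport meta tag, which asks mobile browsers to report the real device width rather than a pretend desktop-sized one:

<meta name="viewport" content="width=device-width, initial-scale=1">

I can’t say for sure whether that helps with the zoomed-in behaviour described above, though.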