Setting up rdiff-backup on FreeBSD 12.1

Timo Geusch from The Lone C++ Coder's Blog

My main PC workstation (as opposed to my Mac Pro) is a dual-boot Windows and Linux machine. While backing up the Windows portion is relatively easy via some cheap-ish commercial backup software, I ended up backing up my Linux home directories only very occasionally. Clearly, Something Had To Be Done ™. I had a look […]


Predicting the future with data+logistic regression

Derek Jones from The Shape of Code

Predicting the peak of data fitted by a logistic equation is attracting a lot of attention at the moment. Let’s see how well we can predict the final size of a software system, in lines of code, using logistic regression (code+data).

First up is the size of the GNU C library. This is not really a good test, since the peak (or rather a peak) has been reached.

Growth of glibc, in lines, with logistic regression fit

We need a system that has not yet reached an easily recognizable peak. The Linux kernel has been under development for many years, and lots of LOC counts are available. The plot below shows a logistic equation fitted to the kernel data, assuming that the only data available ran up to day 2,900, 3,650, 4,200, or 5,000+. Can you tell which fitted line corresponds to which number of days?

Number of lines in the Linux kernel, by days since release, and four fitted logistic regression models.

The underlying ‘problem’ is that we are telling the fitting software to fit a particular equation; the software does what it has been told to do, and fits a logistic equation (in this case).

A cubic polynomial is also a great fit to the existing kernel data (red line to the left of the blue line), and this fitted equation can be extended into the future (to the right of the blue line); dotted lines are 95% confidence bounds. Do any readers believe the future size of the Linux kernel predicted by this cubic model?
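The post links to the actual code+data; for readers who want to experiment, the two fits can be reproduced in R along these lines (a sketch using assumed column names, not the author’s code):

# kernel_loc: data frame with day (days since release) and loc (lines of code)
logistic_fit <- nls(loc ~ SSlogis(day, Asym, xmid, scal), data = kernel_loc)  # logistic fit
cubic_fit    <- lm(loc ~ poly(day, 3), data = kernel_loc)                     # cubic fit
predict(cubic_fit, newdata = data.frame(day = 6000))  # extrapolation is where models go wrong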


Predicting the future requires lots of data on the underlying processes that drive events. Modeling events is an iterative process. Build a model, check against reality, adjust model, rinse and repeat.

If the COVID-19 experience trains people to be suspicious of future predictions made by models, it will have done something positive.

Automating Windows VM Creation on Ubuntu

Chris Oldwood from The OldWood Thing

TL;DR you can find my resulting Oz and Packer configuration files in this Oz gist and this Packer gist on my GitHub account.

As someone who has worked almost exclusively on Windows for the last 25 years I was somewhat surprised to find myself needing to create Windows VMs on Linux. Ultimately these were to be build server agents and therefore I needed to automate everything from creating the VM image, to installing Windows, and eventually the build toolchain. This post looks at the first two aspects of this process.

I did have a little prior experience with Packer, but that was on AWS where the base AMIs you’re provided have already got you over the initial OS install hurdle and you can focus on baking in your chosen toolchain and application. This time I was working on-premise and so needed to unpick the Linux virtualization world too.

In the end I managed to get two approaches working – Oz and Packer – on the Ubuntu 18.04 machine I was using. (You may find these instructions useful for other distributions but I have no idea how portable this information is.)

QEMU/KVM/libvirt

On the Windows-as-host side (until fairly recently) virtualization boiled down to a few classic options, such as Hyper-V and VirtualBox. The addition of Docker-style Windows containers, along with Hyper-V containers, has padded things out a bit more, but to me it’s still fairly manageable.

In contrast, on the Linux front, where this technology has been maturing for much longer, there is far more choice, and ultimately, for a Linux n00b like me [1], far more noise to wade through on top of the usual “which distribution are you running” type questions. In particular, the fact that any documentation on “virtualization” could be referring to containers or hypervisors (or something in-between), when you’re only concerned with hypervisors for running Windows VMs, doesn’t exactly aid comprehension.

Luckily I was pointed towards KVM as a good starting point on the Linux hypervisor front. QEMU is one of those minor distractions as it can provide full emulation, but it also provides the other bit KVM needs to be useful in practice – device emulation. (If you’re feeling nostalgic you can fire up an MS-DOS recovery boot-disk from “All Boot Disks” under QEMU/KVM with minimal effort, which gives you a quick sense of achievement.)

What I also found mentioned in the same breath as these two was a virtualization “add-on layer” called libvirt, which provides a layer on top of the underlying technology so that you can use more technology-agnostic tools. Confusingly, you might notice that Packer doesn’t mention libvirt, presumably because it already has providers that work directly with the lower layer.

In summary, using apt, we can install this lot with:

$ sudo apt install qemu qemu-kvm libvirt-bin bridge-utils virt-manager -y
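Before going any further it’s worth a couple of quick sanity checks that the hypervisor stack is usable (kvm-ok lives in the cpu-checker package, which isn’t in the list above; these checks are my suggestion rather than part of the original steps):

$ sudo apt install cpu-checker -y
$ kvm-ok                                      # confirms VT-x / AMD-V is enabled
$ virsh --connect qemu:///system list --all   # confirms libvirt is answering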

Windows ISO & Product Key

We’re going to need a Windows ISO along with a related product key to make this work. While in the end you’ll need a proper license key I found the Windows 10 Evaluation Edition was perfect for experimentation as the VM only lasts for a few minutes before you bin it and start all over again.

You can download the latest Windows image from the MS downloads page which, if you’ve configured your browser’s User-Agent string to appear to be from a non-Windows OS, will avoid all the sign-up nonsense. Alternatively google for “care.dlservice.microsoft.com” and you’ll find plenty of public build scripts that have direct download URLs which are beneficial for automation.

Although the Windows 10 evaluation edition doesn’t need a specific license key you will need a product key to stick in the autounattend.xml file when we get to that point. Luckily you can easily get that from the MS KMS client keys page.

Windows Answer File

By default Windows presents a GUI to configure the OS installation, but if you give it a special XML file known as autounattend.xml (in a special location, which we’ll get to later) all the configuration settings can go in there and the OS installation will be hands-free.
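To give a flavour of the file, here is a heavily trimmed skeleton showing where the product key ends up; a real generated file contains far more (disk configuration, locale, user accounts), and the key below is just a placeholder:

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="windowsPE">
    <component name="Microsoft-Windows-Setup" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <UserData>
        <ProductKey>
          <Key>XXXXX-XXXXX-XXXXX-XXXXX-XXXXX</Key>
        </ProductKey>
        <AcceptEula>true</AcceptEula>
      </UserData>
    </component>
  </settings>
</unattend>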

There is a specific Windows tool you can use to generate this file, but an online version in the guise of the Windows Answer File Generator produced a working file with fairly minimal questions. You can also generate one for different versions of the Windows OS, which is important: there are many examples floating around the Internet, but it feels like pot luck as to whether they will work, as the format changes slightly between releases and it’s not easy to discover where the impedance mismatch lies.

So, at this point we have our Linux hypervisor installed, and downloaded a Windows installation .iso along with a generated autounattend.xml file to drive the Windows install. Now we can get onto building the VM, which I managed to do with two different tools – Oz and Packer.

Oz

I was flicking through a copy of Mastering KVM Virtualization and it mentioned a tool called Oz which was designed to make it easy to build a VM along with installing an OS. More importantly it lists support for most Windows editions too! Plus it’s been around for a fairly long time, so is relatively mature. You can install it with apt:

$ sudo apt install oz -y

To use it you create a simple configuration file (.tdl) with the basic VM details such as CPU count, memory, disk size, etc. along with the OS details, .iso filename, and product key (for Windows), and then run the tool:

$ oz-install -d2 -p windows.tdl -x windows.libvirt.xml
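The .tdl file itself is a small XML document. A trimmed sketch is shown below – the element names come from the Oz template documentation, while the paths and key are placeholders of mine, not values from the original post:

<template>
  <name>windows10</name>
  <os>
    <name>Windows</name>
    <version>10</version>
    <arch>x86_64</arch>
    <install type='iso'>
      <iso>file:///path/to/windows10.iso</iso>
    </install>
    <key>XXXXX-XXXXX-XXXXX-XXXXX-XXXXX</key>
  </os>
  <description>Windows 10 build agent</description>
</template>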

If everything goes according to plan you end up with a QEMU disk image and an .xml file for the VM (called a “domain”) that you can then register with libvirt:

$ virsh define windows.libvirt.xml

Finally you can start the VM via libvirt with:

$ virsh start windows-vm

I initially tried this with the Windows 8 RTM evaluation .iso and it worked right out of the box with the Oz built-in template! However, when it came to Windows 10 the Windows installer complained about there being no product key, despite the Windows 10 template having a placeholder for it and the key was defined in the .tdl configuration file.

It turns out, as you can see from Issue #268 (which I raised in the Oz GitHub repo), that the Windows 10 template is broken. The autounattend.xml file also wants the key in the <UserData> section, it seems. Luckily for me oz-install can accept a custom autounattend.xml file via the -a option, as long as we fill in any details manually, like the <AutoLogin> account username / password, product key, and machine name.

$ oz-install -d2 -p windows.tdl -x windows.libvirt.xml -a autounattend.xml

That Oz GitHub issue only contains my suggestions as to what I think needs fixing in the autounattend.xml file; I also have a personal gist on GitHub that contains both the .tdl and .xml files that I successfully used. (Hopefully I’ll get a chance to submit a formal PR at some point so we can get it properly fixed; I believe it also needs a tweak to the Python code.)

Note: while I managed to build the basic VM I didn’t try to do any post-processing, e.g. using WinRM to drive the installation of applications and tools from the outside.

Packer

I had originally put Packer to one side because of difficulties getting anything working under Hyper-V on Windows, but with my new-found knowledge I decided to try again on Linux. What I hadn’t appreciated was quite how much Oz was actually doing for me under the covers.

If you use the Packer documentation [2] [3] and online examples you should happily get the disk image allocated and the VM to fire up in VNC and sit there waiting for you to configure the Windows install. However, after selecting your locale and keyboard you’ll probably find the disk partitioning step stumps you. Even if you follow some examples and put an autounattend.xml on a floppy drive you’ll still likely hit a <DiskConfiguration> error during set-up. The reason is probably that you don’t have the right Windows driver available for it to talk to the underlying virtual disk device (unless you’re lucky enough to pick an IDE-based example).

One of the really cool things Oz appears to do is handle this nonsense along with the autounattend.xml file which it also slips into the .iso that it builds on-the-fly. With Packer you have to be more aware and fetch the drivers yourself (which come as part of another .iso) and then mount that explicitly as another CD-ROM drive by using the qemuargs section of the Packer builder config. (In my example it’s mapped as drive E: inside Windows.)

[ "-drive", "file=./virtio-win.iso,media=cdrom,index=3" ]

Luckily you can download the VirtIO drivers .iso from a Fedora page and stick it alongside the Windows .iso. That’s still not quite enough though: we also need to tell the Windows installer where our drivers are located; we do that with a special section in the autounattend.xml file.

<DriverPaths>
  <PathAndCredentials wcm:action="add" wcm:keyValue="1">
    <Path>E:\NetKVM\w10\amd64\</Path>
  </PathAndCredentials>
</DriverPaths>

Finally, in case you’ve not already discovered it, the autounattend.xml file is presented by Packer to the Windows installer as a file in the root of a floppy drive. (The floppy drive and extra CD-ROM drives both fall away once Windows has bootstrapped itself.)

"floppy_files":
[
  "autounattend.xml",

Once again, as mentioned right at the top, I have a personal gist on GitHub that contains the files I eventually got working.

With the QEMU/KVM image built we can then register it with libvirt by using virt-install. I thought the --import switch would be enough here as we now have a runnable image, but that option appears to be for a different scenario [4]; instead we have to take two steps – generate the libvirt XML config file using the --print-xml option, and then apply it:

$ virt-install --vcpus ... --disk ... --print-xml > windows.libvirt.xml
$ virsh define windows.libvirt.xml
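Fleshed out, the pair of commands looks something like the following. This is a sketch of mine rather than the exact invocation from the post – I’ve assumed a qcow2 disk image and used --boot hd to satisfy virt-install’s requirement for an install method without re-running the installer:

$ virt-install --name windows-vm --vcpus 2 --memory 4096 \
    --disk path=./output/windows10.qcow2,bus=virtio \
    --os-variant win10 --boot hd \
    --print-xml > windows.libvirt.xml
$ virsh define windows.libvirt.xml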

Once again you can start the finalised VM via libvirt with:

$ virsh start windows-vm

Epilogue

While having lots of documentation is generally A Good Thing™, when it’s spread out over a considerable time period it’s sometimes difficult to know if the information you’re reading still applies today. This is particularly true when looking at other people’s example configuration files alongside reading the docs. The long-winded route might still work but the tool might also do it automatically now if you just let it, which keeps your source files much simpler.

Since getting this working I’ve seen other examples which suggest I may have fallen foul of this myself and what I’ve written up may also still be overly complicated! Please feel free to use the comments section on this blog or my gists to inform any other travellers of your own wisdom in any of this.


[1] That’s not entirely true. I ran Linux on an Atari TT and a circa v0.85 Linux kernel on a 386 PC in the early-to-mid ‘90s.

[2] The Packer docs can be misleading. For example they say disk_size is in bytes and that you can use suffixes like M or G to simplify matters. Except those suffixes don’t work, and the value is actually in megabytes. No wonder a value of 15,000,000,000 didn’t work either :o).

[3] Also be aware that the version of Packer available via apt is only 1.0.x and you need to manually download the latest 1.4.x version and unpack the .zip. (I initially thought the bug in [2] was down to a stale version but it’s not.)

[4] The --import switch still fires up the VM as it appears to assume you’re going to add to the current image, not that it is the final image.


Looks like I get to redo my WireGuard VPN server

Timo Geusch from The Lone C++ Coder’s Blog

I’ve blogged about setting up a WireGuard VPN server earlier this year. It’s been running well since, but I needed to take care of some overdue maintenance tasks. When I tried to log into the server this morning, I was greeted with “no route to host”. Eh? A quick check on my Vultr UI showed that […]


2019 in the programming language standards’ world

Derek Jones from The Shape of Code

Last Tuesday I was at the British Standards Institute for a meeting of IST/5, the committee responsible for programming language standards in the UK.

There has been progress on a few issues discussed last year, and one interesting point came up.

It is starting to look as if there might be another iteration of the Cobol Standard. A handful of people, in various countries, have started to nibble around the edges of various new (in the Cobol sense) features. No, the INCITS Cobol committee (the people who used to do all the heavy lifting) has not been reformed; the work now appears to be driven by people who cannot let go of their involvement in Cobol standards.

ISO/IEC 23360-1:2006, the ISO version of the Linux Base Standard, has been updated and we were asked for a UK position on the document being published. Abstain seemed to be the only sensible option.

Our WG20 representative reported that the ongoing debate over pile of poo emoji has crossed the chasm (he did not exactly phrase it like that). Vendors want to have the freedom to specify code-points for use with their own emoji, e.g., pineapple emoji. The heady days, of a few short years ago, when an encoding for all the world’s character symbols seemed possible, have become a distant memory (the number of unhandled logographs on ancient pots and clay tablets was declining rapidly). Who could have predicted that the dream of a complete encoding of the symbols used by all the world’s languages would be dashed by pile of poo emoji?

The interesting news is from WG9. The document intended to become the Ada20 standard was due to enter the voting process in June, i.e., the committee considered it done. At the end of April the main Ada compiler vendor asked for the schedule to be slipped by a year or two, to enable them to get some implementation experience with the new features; oops. I have been predicting that in the future language ‘standards’ will be decided by the main compiler vendors, and the future is finally starting to arrive. What is the incentive for the GNAT compiler people to pay any attention to proposals written by a bunch of non-customers (ok, some of them might work for customers)? One answer is that Ada users tend to be large bureaucratic organizations (e.g., the DOD), who like to follow standards, and might fund GNAT to implement the new document (perhaps this delay by GNAT is all about funding, or lack thereof).

Right on cue, C++ users have started to notice that C++20 added support for a system header with the name version, which conflicts with much existing practice of using a file called version to contain versioning information; a problem if the header search path used by the compiler includes a project’s top-level directory (which is where the versioning file version often sits). So the WG21 committee decides on what it thinks is a good idea, implementors implement it, and users complain; implementors now have a good reason not to follow a requirement in the standard, to keep users happy. Will WG21 be apologetic, or get all high and mighty? We will have to wait and see.
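To make the clash concrete, here is a minimal sketch (file names illustrative; the behaviour follows from the compiler searching -I directories before the system ones):

$ ls
main.cpp  version            # 'version' holds the project's version string
$ head -1 main.cpp
#include <version>           // intended to be the new C++20 standard header
$ g++ -std=c++2a -I. -c main.cpp
# fails: -I. makes the project's ./version file shadow the standard <version> header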

Setting up my own VPN server on Vultr with Centos 7 and WireGuard

Timo Geusch from The Lone C++ Coder’s Blog

As an IT consultant, I travel a lot. I mean, a lot. Part of the pleasure is having to deal with day-to-day online life on open, potentially free-for-all hotel and conference WiFi. In other words, the type of networks you really want to do your online banking, ecommerce and other potentially sensitive operations on. After […]


Example of a systemd service file

Andy Balaam from Andy Balaam’s Blog

Here is an almost-minimal example of a systemd service file, which I use to run the Mastodon bot of my generative art playground Graft.

I made a dedicated user just to run this service, and installed Graft into /home/graft/apps/graft under that username. Now, as root, I edited a file called /etc/systemd/system/graft.service and made it look like this:

[Service]
ExecStart=/home/graft/apps/graft/bot-mastodon
User=graft
Group=graft
[Install]
WantedBy=multi-user.target
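After creating or editing the file, get systemd to re-read its configuration:

sudo systemctl daemon-reload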

Now I can start the graft service like any other service:

sudo systemctl start graft

and find out its status with:

sudo systemctl status graft

If I want it to run on startup I can do:

sudo systemctl enable graft

and it will. Easy!

If I want to look at its output, it’s:

sudo journalctl -u graft

As a reward for reading this far, here’s a little animation you can make with Graft:

Installing Flarum on Ubuntu 18.04

Andy Balaam from Andy Balaam&#039;s Blog

I am setting up a forum for sharing levels for my game Rabbit Escape, and I have decided to try and use Flarum, because it looks really usable and responsive, has features we need like liking posts and following authors, and I think it will be reasonably OK to write the custom features we want.

So, I want a dev environment on my local Ubuntu 18.04 machine, and the first step to that is a standard install.

Warning: at the time of writing the Flarum docs say it does not work with PHP 7.2, which is what is included with Ubuntu 18.04, so this may not work. (So far it looks OK for me.)

Here’s how I got it working (as far as the web installer stage, anyway):

sudo apt install \
    apache2 \
    libapache2-mod-php \
    mariadb-server \
    php-mysql \
    php-json \
    php-gd \
    php-tokenizer \
    php-mbstring \
    php-curl

php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"

# Get the neat line from https://getcomposer.org/download/
# Don't copy it exactly!
php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"

mkdir ~/bin
php composer-setup.php --install-dir=~/bin/ --filename=composer
rm composer-setup.php

cd /var/www/html
sudo mkdir flarum
sudo chown $(whoami) flarum

# Log out and in again here to get composer to be in your PATH
cd flarum
composer create-project flarum/flarum . --stability=beta

sudo chgrp -R www-data .
sudo chmod -R 775 .

sudo systemctl restart apache2

Go to http://localhost/flarum in your browser, and follow the instructions there to get set up.

If I get further, I will update this post, including on how to set up the MySQL database.
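In the meantime, the usual MariaDB incantation for creating a database and user looks something like this – a generic sketch rather than Flarum-specific advice; change the names and password to taste:

sudo mysql -e "CREATE DATABASE flarum;"
sudo mysql -e "CREATE USER 'flarum'@'localhost' IDENTIFIED BY 'CHANGE_ME';"
sudo mysql -e "GRANT ALL PRIVILEGES ON flarum.* TO 'flarum'@'localhost'; FLUSH PRIVILEGES;"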

If you want to find and share levels for Rabbit Escape, check up on our progress setting up the forum at https://artificialworlds.net/rabbit-escape/levels.

DSLinux on a DSLite with an M3DS Real card and SuperCard SD

Samathy from Stories by Samathy on Medium

DSLinux running on a Nintendo DSLite

I recently bought a gorgeous pink Nintendo DSLite with the sole purpose of running DSLinux on it.
When I posted about my success on Mastodon, someone helpfully asked “Has it have any use tho?”.
Let’s answer that right away: running Linux on a Nintendo DSLite is at best a few hours’ entertainment for the masochistic technologist, and at worst a waste of your time.


But I do rather enjoy running Linux on things that should not be running Linux, or at least attempting to do so. So here’s what I did!

Hardware:

  • Nintendo DSLite
  • SuperCard SD (Slot 2)
  • M3DS Real (Slot 1)
  • R4 Card (Knockoff, says R4 SDHC Revolution for DS on the card)

DSLinux runs on a bunch of devices; luckily we had some R4 cards and an M3DS Real around the place, both of which are supported by DSLinux.
I purchased a SuperCard SD from eBay to provide some extra RAM, which apparently is quite useful, since the DSLite has only 2MB of its own. The SuperCard SD I bought had 32MB of extra RAM, bringing the total up to some 34MB, wowee.

R4 Cards

The first cards I tried were the R4 cards we had.
They’re popular and supported by DSLinux. Unfortunately, it seems the ones we’ve got are knockoffs and therefore proved challenging to find firmware for.
I spent a long while searching around the internet and trying various firmwares for R4 cards — none of the ones I tried did anything except show the “Menu?” screen on boot.

Finally, finding this post on GBATemp.net from a user with a card that looks exactly the same as mine led me to give up on the R4 card and move on to the M3DS Real. The post did prove useful later, though.

It should be noted that the R4 card I had had never been tested anyway, so it might never have worked.

M3DS Real

Another card listed as supported on the DSLinux site, so it seemed a good one to try.
We had a Micro-SD card in the M3DS Real anyway, with the M3 Sakura firmware on it, so it seemed reasonable to just jump in there.

I copied the firmware onto another SD card (because we didn’t want to lose the data on the original card). It was only 3 folders in the root of the card — SYSTEM, NDS and SKINS — with the NDS folder containing the ‘games’.

In this case, I put the DSLinux files (dslinux.nds, dslinuxm.nds and ‘linux’, a folder) into the NDS folder and stuck it in my DSLite.

After selecting DSLinux from the menu, I got the joy of… a blank screen.

Starting DSLinux from M3 Sakura results in a white screen

Some forum posts, which are the first results when searching for the problem on DuckDuckGo, suggest that something called DLDI is the issue.

The DSLinux ‘Running DSLinux’ page does mention patching the dslinux.nds file with DLDI if the device one is using doesn’t support auto-DLDI. At the time this was all meaningless jargon to me, since I’ve never done any Nintendo DS homebrew before.

Turns out, DLDI is a library that allows programs to “read and write files on the memory card inserted into one of the system’s slots”.
Homebrew games must be ‘patched’ for whatever device you’re using, to allow them to read/write to the storage device.
Most of the links on the DSLinux page to DLDI were broken, but we discovered the new home of DLDI and its associated tools to be www.chishm.com/DLDI/.

I patched the dslinux.nds file using the Linux command-line tool and saw no change in the behaviour of the DSLite: still white screens.
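For the record, the patching step itself is a one-liner: dlditool takes the DLDI patch file for your card followed by the binary to patch. The patch file name below is a placeholder; the right one for your card comes from the chishm.com page mentioned below.

$ dlditool yourcard.dldi dslinux.nds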

Upon reading the DSLinux wiki page for devices a little closer, I noticed that the listing for the M3DS Real notes that one should ‘Use loader V2.7d or V2.8’.

What is a loader??
It means the card’s firmware/menu.

Where do I find it?
On the manufacturer’s website, or, bringing back the post mentioned earlier from the R4 card user on GBATemp.net, one can find lots of firmwares for lots of different cards here: http://www.linfoxdomain.com/nintendo/ds/

Under the listing on the above site for ‘M3/G6 DS Real and M3i Zero’ one can find a link to firmware versions V2.7d and V2.8 listed as ‘M3G6_DS_Real_v2.8_E15_EuropeUSAMulti.zip’.

Upon installing this firmware to the SD card (by copying the ‘SYSTEM’ folder to the root of a FAT32-formatted card), I extracted the DSLinux files again (thus, without the DLDI patching I’d done earlier) and placed the files ‘dslinux.nds’ and ‘dslinuxm.nds’ and the folder ‘linux’ in an ‘NDS’ folder, also in the root of the drive.
This is INCORRECT.
Upon loading the dslinux.nds file through the M3DS Real menu it did indeed boot Linux, but dropped me into single-user mode, with essentially no binaries in the PATH.
This is consistent with the Linux kernel having booted successfully, but not being able to find any userland; hence the single-user mode and lack of programs.

Progress at least!

I re-read the DSLinux instructions and caught the clear mention of ‘Both of these must be extracted to the root directory of the CF or SD card.’ when talking about the DSLinux files.

Upon moving the DSLinux files to the root of the card and starting ‘dslinux.nds’ from the M3DS Real menu, I had a working Linux system!!

I type `uname -a` on DSLinux. It’s running kernel 2.6.

Notice the ‘DLDI compatible’ that pops up when starting DSLinux — that means the M3DS Real auto-patches binaries when it runs them. Nice.

What Next?

Probably trying to compile a newer kernel and userspace to start with.
Kernel 2.6 is, at the time of writing, 2 major versions out of date.

After that, I’d like to understand how DSLinux is handling the multiple screens and multiple processors.
The DS has an ARM7 and an ARM9 processor and two screens, which I think are not connected to the same processor; the buttons are split between the chips too.

Lastly, I’d like to write something for linux on the DS.
Probably something silly, but I’d like to give it a try!

Don’t ask me questions about DSLinux; I don’t really know anything more than what I’ve mentioned here. I just read some wikis, solved some problems and did some searching.

Thanks to the developers of DSLinux and DLDI for making this silliness possible.