The difference between no move constructor and a deleted move constructor

Anders Schau Knatten from C++ on a Friday

It’s easy to think that deleting the move constructor means removing it. So if you do MyClass(MyClass&&) = delete, you make sure it doesn’t get a move constructor. This, however, is not technically correct. It might seem like a nitpick, but it actually gives you a less useful mental model of what’s going on.

First: When does this matter? It matters for understanding in which cases you’re allowed to make a copy/move from an rvalue.

Here are some examples of having to copy/move an object of type MyClass:

MyClass obj2(obj1);
MyClass obj3(std::move(obj1));

MyClass obj4 = obj1;
MyClass obj5 = std::move(obj1);

return obj1;
return std::move(obj1);

These are all examples of “direct initialization” (the former two) and “copy initialization” (the latter four). Note that there is no concept of “move initialization” in C++. Whether you end up using the copy or the move constructor to initialize the new object is just a detail. For the rest of this post, let’s just look at copy initialization; direct initialization works the same for our purposes. In either case you create a new copy of the object, and the implementation uses either the copy or the move constructor to do so.

Let’s first look at a class NoMove:

struct NoMove
{
    NoMove();
    NoMove(const NoMove&);
};

This class has a user-declared copy constructor, so it doesn’t automatically get a move constructor. From the C++ standard https://timsong-cpp.github.io/cppwp/n4659/class.copy:

If the definition of a class X does not explicitly declare a move constructor, a non-explicit one will be implicitly declared as defaulted if and only if

– X does not have a user-declared copy constructor,
– (…)

So this class doesn’t have a move constructor at all. You didn’t explicitly declare one, and none got implicitly declared for you.

On the other hand, let’s see what happens if we explicitly delete the move constructor:

struct DeletedMove
{
    DeletedMove();
    DeletedMove(const DeletedMove&);
    DeletedMove(DeletedMove&&) = delete;
};

This is called “a deleted definition”. From the C++ standard: https://timsong-cpp.github.io/cppwp/n4659/dcl.fct.def.delete

A function definition of the form:
(…) = delete ;
is called a deleted definition. A function with a deleted definition is also called a deleted function.

Importantly, that does not mean that its definition has been deleted/removed and is no longer there. It means that it has a definition, and that this particular kind of definition is called a “deleted definition”. I like to read it as “deleted-definition”.

So our NoMove class has no move constructor at all. Our DeletedMove class has a move constructor with a deleted definition.

Why does this matter?

Let’s first look at a class with both a copy and a move constructor, and how to copy-initialize it.

struct Movable
{
    Movable();
    Movable(const Movable&);
    Movable(Movable&&);
};

Movable movable;
Movable movable2 = movable;

When initializing movable2, we need to find a function to do that with. A copy constructor would do nicely. And since we do have a copy constructor, it indeed gets used for this.

What if we turn movable into an rvalue?

Movable movable2 = std::move(movable);

Now a move constructor would be great. And we do have one, and it indeed gets used.

But what if we didn’t have a move constructor? That’s the case with our class NoMove from above.

struct NoMove
{
    NoMove();
    NoMove(const NoMove&);
};

This one has a copy constructor, so it doesn’t get a move constructor. We can of course still make copies using the copy constructor:

NoMove noMove;
NoMove noMove2 = noMove;

But what happens now?

NoMove noMove;
NoMove noMove2 = std::move(noMove);

Are we now “move initializing” noMove2 and need the move constructor? Actually, we’re not. We’re still copy-initializing it, and need some function to do that task for us. A move constructor would be great, but a copy constructor would also do. It’s less efficient, but of course you’re allowed to make a copy of an rvalue.

So this is fine, the code compiles, and the copy constructor is used to make a copy of the rvalue.

What happened behind the scenes in all the examples above is overload resolution. Overload resolution looks at all the candidates to do the job, and picks the best one. In the cases where we initialize from an lvalue, the only candidate is the copy constructor. We’re not allowed to move from an lvalue. In the cases where we initialize from an rvalue, both the copy and the move constructors are candidates. But the move constructor is a better match, as we don’t have to convert the rvalue to an lvalue reference. For Movable, the move constructor got selected. For NoMove, there is no move constructor, so the only candidate is the copy constructor, which gets selected.
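
To see overload resolution in action, here is a minimal sketch (my own illustration, not code from the original post) that gives Movable’s constructors bodies which report when they run:

#include <iostream>
#include <utility>

struct Movable
{
    Movable() = default;
    Movable(const Movable&) { std::cout << "copy constructor\n"; }
    Movable(Movable&&) { std::cout << "move constructor\n"; }
};

int main()
{
    Movable movable;
    Movable movable2 = movable;            // lvalue source: only the copy constructor is a candidate
    Movable movable3 = std::move(movable); // rvalue source: both are candidates, the move constructor wins
}

Running this prints “copy constructor” followed by “move constructor”, matching the overload resolution described above.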

Now, let’s look at what’s different when instead of having no move constructor, we have a move constructor with a deleted definition:

struct DeletedMove
{
    DeletedMove();
    DeletedMove(const DeletedMove&);
    DeletedMove(DeletedMove&&) = delete;
};

We can of course still copy this one as well:

DeletedMove deletedMove;
DeletedMove deletedMove2 = deletedMove;

But what happens if we try to copy-initialize from an rvalue?

DeletedMove deletedMove2 = std::move(deletedMove);

Remember, overload resolution tries to find all candidates to do the copy-initialization. And this class does in fact have both a copy and a move constructor, which are both candidates. The move constructor is picked as the best match, since again we avoid the conversion from an rvalue to an lvalue reference. But the move constructor has a deleted definition, and the program does not compile. From the C++ standard: https://timsong-cpp.github.io/cppwp/n4659/dcl.fct.def.delete#2:

A program that refers to a deleted function implicitly or explicitly, other than to declare it, is ill-formed. [ Note: This includes calling the function implicitly or explicitly (…) If a function is overloaded, it is referenced only if the function is selected by overload resolution.

The function is being called implicitly here; we’re not manually calling the move constructor. And we can see that this applies because overload resolution selected the move constructor with the deleted definition.

So the differences between not declaring a move constructor and defining one as deleted are:

  • The first one does not have a move constructor; the second one has a move constructor with a deleted definition.
  • The first one can be copy-initialized from an rvalue; the second cannot (see the sketch below).
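
A compact way to observe that second point is with the standard type trait std::is_constructible (a sketch of my own, requiring C++17; it is not from the original post):

#include <type_traits>

struct NoMove
{
    NoMove();
    NoMove(const NoMove&);
};

struct DeletedMove
{
    DeletedMove();
    DeletedMove(const DeletedMove&);
    DeletedMove(DeletedMove&&) = delete;
};

// NoMove has no move constructor, so overload resolution falls back to the copy constructor:
static_assert(std::is_constructible_v<NoMove, NoMove&&>);

// DeletedMove's deleted move constructor is still selected, making the initialization ill-formed:
static_assert(!std::is_constructible_v<DeletedMove, DeletedMove&&>);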

If you enjoyed this post, you can subscribe to my blog, or follow me on Twitter.

Twenty Year Exit from the Oracle Ecosystem

Tim Pizey from Tim Pizey

The Oracle Ecosystem instance that I inherited was designed from 2000 onwards and went live in 2004. Its centrepiece, Oracle Workflow, went end of life that year but had new versions through to 2007. In its own way it was a pinnacle of a certain view of software, the software vendor as a one-stop shop, in the same way as DEC and IBM had previously sold their systems.
A whole set of components which were guaranteed to work together, with technical support. I probably would have made the same choice, even as late as 2000. This system was the largest system (by disk storage used) in Europe for a while, housed in a purpose-built computer room. Then came AWS S3 cloud storage.
The 'replacement' system migrated eighty percent of the files out of the database, seemingly unaware of the eighty-twenty rule, and duplicated the data and the dataflows. The replacement system was a 'microservice architecture' joined by a single datastore. The users now had two systems to use and were stuck with a 2000-vintage user interface. The next step was to stop using Oracle Forms and introduce a modern three-tier web application, choosing the new, shiny AngularJS as the front end and Hibernate on Java as the middleware.
This was understandably a big job: the UI was now on AWS but the data was in a data centre (a commercial one by now). The need to quit the data centre motivated the removal of CMSDK (Content Management Software Development Kit) from inside the database to separate, external, email and SFTP handling systems, to reduce database size and enable security in the cloud.
The remaining steps are to finish the migration of files (who knew this would be the difficult bit?) and to migrate from Oracle AQ (Advanced Queues (software naming error 101: avoid hubristic adjectives)) to AWS SQS (Simple Queue Service (software naming error 102: avoid indexical adjectives)). Note that we have to upgrade AngularJS to Angular just to stay still, and pop things into Kubernetes, just because.
This is clean, recognisable, manageable and stable. We could rest here for a while; but all that pain motivates completion:
The elephant is in the room. The final step is then to replace Oracle Workflow with Camunda.

Two failed software development projects in the High Court

Derek Jones from The Shape of Code

When submitting a bid, to be awarded the contract to develop a software system, companies have to provide information on costs and delivery dates. If the costs are significantly underestimated, and/or the delivery dates woefully optimistic, one or more of the companies involved may resort to legal action.

Searching the British and Irish Legal Information Institute’s Technology and Construction Court Decisions throws up two interesting cases (when searching on “source code”; I have not been able to figure out the pattern behind the results that were not returned by their search engine {when I expected some cases to be returned}).

The estimation and implementation activities described in the judgements for these two cases could apply to many software projects, both successful and unsuccessful. Claiming that the system will be ready by the go-live date specified by the customer is an essential component of winning a bid, the huge uncertainties in the likely effort required come as standard in the software industry, and discovering lots of unforeseen work after signing the contract (because the minimum was spent on the bid estimate) is not software specific.

The first case is huge (BSkyB/Sky won the case and EDS had to pay £200+ million): (1) BSkyB Limited (2) Sky Subscribers Services Limited: Claimants – and – (1) HP Enterprise Services UK Limited (formerly Electronic Data Systems Limited) (2) Electronic Data Systems LLC (formerly Electronic Data Systems Corporation): Defendants. The amount bid was a lot less than £200 million (paragraph 729: “The total EDS “Sell Price” was £54,195,013 which represented an overall margin of 27% over the EDS Price of £39.4 million.”; see paragraph 90 for a breakdown).

What can be learned from the judgement for this case (the letter of Intent was subsequently signed on 9 August 2000, and the High Court decision was handed down on 26 January 2010)?

  • If you have not been involved in putting together a bid for a large project, paragraphs 58-92 provide a good description of the kinds of activities involved. Paragraphs 697-755 discuss costing details, and paragraphs 773-804 manpower and timing details,
  • if you have never seen a software development contract, paragraphs 93-105 illustrate some of the ways in which delivery/payments milestones are broken down and connected. Paragraph 803 will sound familiar to developers who have worked on large projects: “… I conclude that much of Joe Galloway’s evidence in relation to planning at the bid stage was false and was created to cover up the inadequacies of this aspect of the bidding process in which he took the central role.” The difference here is that the money involved was large enough to make it worthwhile investing in a court case, and Sky obviously believed that they could only be blamed for minor implementation problems,
  • don’t have the manager in charge of the project give perjured evidence (paragraph 195 “… Joe Galloway’s credibility was completely destroyed by his perjured evidence over a prolonged period.”). Bringing the law of deceit and negligent misrepresentation into a case can substantially increase/decrease the size of the final bill,
  • successfully completing an implementation plan requires people with the necessary skills to do the work, and good people are a scarce resource. Projects fail if they cannot attract and keep the right people; see paragraphs 1262-1267.

A consequence of the judge’s finding of misrepresentation by EDS is a requirement to consider the financial consequences. One item of particular interest is the need to calculate the likely effort and time needed by alternative suppliers to implement the CRM System.

The only way to estimate, with any degree of confidence, the likely cost of implementing the required CRM system is to use a conventional estimation process, i.e., a group of people with the relevant domain knowledge work together for some months to figure out an implementation plan, and then cost it. This approach costs a lot of money, and ties up scarce expertise for long periods of time; is there a cheaper method?

Management at the claimant/defence companies will have appreciated that the original cost estimate is likely to be as good as any, apart from being tainted by the perjury of the lead manager. So they all signed up to using Tasseography, e.g., they get their respective experts to estimate the amount of code that needs to be produced to implement the system, calculate how long it would take to write this code, and multiply by the hourly rate for a developer. I would have loved to have been a fly on the wall when the respective IT experts, all experienced in providing expert testimony, were briefed. Surely the experts all knew that the ballpark figure was that of the original EDS estimate, and that their job was to come up with a lower/higher figure?

What other interpretation could there be for such a bone-headed approach to cost estimation?

The EDS expert based his calculation on the debunked COCOMO model (ok, my debunking occurred over six years later, but others had done it much earlier).

The Sky expert based his calculation on the use of function points, i.e., estimating function points rather than lines of code, and then multiplying by an average cost per function point.
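
To make the shape of the two calculations concrete, here is a small sketch (my own illustration; the sizes, rates and costs are entirely hypothetical, chosen only to show the arithmetic, and only the basic COCOMO organic-mode coefficients are the published ones, none of the figures come from the case):

#include <cmath>
#include <iostream>

int main()
{
    // Lines-of-code approach, in the spirit of the EDS expert:
    // basic COCOMO, organic mode: effort in person-months = 2.4 * KLOC^1.05
    double kloc = 500.0;             // hypothetical size estimate, in thousands of lines of code
    double person_months = 2.4 * std::pow(kloc, 1.05);
    double hours_per_month = 152.0;  // hypothetical working hours per person-month
    double hourly_rate = 100.0;      // hypothetical developer rate, in pounds per hour
    std::cout << "LOC-based estimate: £" << person_months * hours_per_month * hourly_rate << "\n";

    // Function-point approach, in the spirit of the Sky expert:
    double function_points = 10000.0; // hypothetical function-point count
    double cost_per_fp = 1000.0;      // hypothetical average cost per function point, in pounds
    std::cout << "Function-point estimate: £" << function_points * cost_per_fp << "\n";
}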

The legal teams point out the flaws in the opposing team’s approach, and the judge does a good job of understanding the issues and reaching a compromise.

There may be interesting points tucked away in the many paragraphs covering various legal issues. I barely skimmed these.

The second case is not as large (the judgement contains a third of the number of paragraphs, and the judgement, handed down on 19 February 2021, required IBM to pay £13+ million): CIS GENERAL INSURANCE LIMITED: Claimant – and – IBM UNITED KINGDOM LIMITED: Defendant.

Again there is lots to learn about how projects are planned, estimated and payments/deliveries structured. There are staffing issues; paragraph 104 highlights how the client’s subject matter experts are stuck in their ways, e.g., configuring the new system for how things used to work and not attending workshops to learn about the new way of doing things.

Every IT case needs claimant/defendant experts and their collection of magic spells. The IBM expert calculated that the software contained technical debt to the tune of 4,000 man hours of work (paragraph 154).

If you find any other legal software development cases with the text of the judgement publicly available, please let me know (two other interesting cases with decisions on the British and Irish Legal Information Institute).

ResOrg 2.0.10.31 adds support for Visual Studio 2022

Products, the Universe and Everything from Products, the Universe and Everything

ResOrg 2.0.10.31 has now been released. This is a recommended maintenance update for ResOrg 2.0, and adds support for Visual Studio 2022 Preview:

ResOrg 2.0.10.31 running within Visual Studio 2022 Preview 4.1

The following changes are included in this build:

  • Added support for Visual Studio 2022 Preview.

  • The ResOrg installer now includes dedicated VSIX extensions for Visual Studio 2017 (ResOrgVsPlugIn_vs2017.vsix), Visual Studio 2019 (ResOrgVsPlugIn_vs2019.vsix) and Visual Studio 2022 (ResOrgVsPlugIn_vs2022.vsix).

  • The Visual Studio plug-in now uses a fully VSPackage based implementation when hosted in Visual Studio 2010, 2012 or 2013 (Visual Studio 2008 and earlier are still supported via an add-in).

  • The installer now checks if Visual Studio or ResOrg are running before allowing the installation to proceed.

  • Fixed a bug in the generation of HTML reports.

  • The AboutBox now shows the name of the host development environment (if any) and the platform (x86 or x64).

  • Updated the website address shown on the Aboutbox, in reports etc. from http://www.riverblade.co.uk to .

Download ResOrg 2.0.10.31

The Woes of Windows Smartscreen

Products, the Universe and Everything from Products, the Universe and Everything

Windows Smartscreen is a great idea, but if you develop downloadable software for Windows it can sometimes be incredibly frustrating.

That has certainly been our experience this year, as Windows has displayed the following warning when running every build we have released since we renewed our code signing certificate:

The Windows Smartscreen warning. Eek!

What this warning means is that Smartscreen does not recognise the executable (fair enough, as the chances are we had only just built it when it was downloaded), and does not yet trust the code signing certificate (we know that an EV certificate would help with this, but only at the cost of a loss of flexibility in the build process).

To the end-user (and yes, I include many developers in that) this warning must be off-putting, to say the least. The apparent lack of a "Run anyway" button just compounds that: to see it, you have to click on the "More info" link, which also reveals the name of the executable file and publisher:

If you click on the "More info" link you will see some useful information and a "Run anyway" button.

The warning usually disappears after a few days as customers download and install the update, but this time it has been different and we have been pulling our hair out over this for months.

More than one build has been submitted to Microsoft for analysis, but we never had any luck until last week, when we received this feedback:

Success! (for this particular build, at least)

We do not know exactly how the internal logic Microsoft uses to trigger Smartscreen warnings has changed in recent months, but we suspect it has changed in some way.

Coincidentally, at the same time as we submitted this file for analysis we also uploaded the same executable directly to the Visual Studio Marketplace, rather than (as previously) linking to the product page. Maybe that also helped - who knows?

Regardless, it's welcome as it means that Visual Lint 8.0.4.342 no longer triggers the Smartscreen warning, and hopefully as the code signing certificate builds reputation other builds will cease to do so too.

At least, we hope so.

Electronic Evidence and Electronic Signatures: book

Derek Jones from The Shape of Code

Electronic Evidence and Electronic Signatures by Stephen Mason and Daniel Seng is not the sort of book that I would normally glance at twice (based on its title). However, at the start of the year I had an interesting email conversation with the first author, who worked for the defence team on the Horizon IT project case, and he emailed with the news that the fifth edition was now available (there’s a free pdf version, so why not have a look; sorry Stephen).

Regular readers of this blog will be interested in chapter 4 (“Software code as the witness”) and chapter 5 (“The presumption that computers are ‘reliable'”).

Legal arguments are based on precedent, i.e., decisions made by judges in earlier cases. The one thing that stands out from these two chapters is how few cases have involved source code and/or reliability, and how simplistic the software issues have been (compared to issues that could have been involved). Perhaps the cases involving complicated software issues get simplified by the lawyers, or they look like they will be so difficult/expensive to litigate that the cases don’t make it to court.

Chapter 4 provided various definitions of source code, all based around the concept of imperative programming, i.e., the code tells the computer what to do. No mention of declarative programming, where the code specifies the information required and the computer has to figure out how to obtain it (SQL being a widely used language based on this approach). The current Wikipedia article on source code is based on imperative programming, but the programming language article is not so narrowly focused (thanks to some work by several editors many years ago 😉).

There is an interesting discussion around the idea of source code as hearsay, with a discussion of cases (see 4.34) where the person who wrote the code had to give evidence so that the program output could be admitted as evidence. I don’t know how often the person who wrote the code has to give evidence, but these days code often has multiple authors, and their identity is not always known (e.g., author details have been lost, or the submission effectively came via an anonymous email).

Chapter 5 considers the common law presumption in the law of England and Wales that ‘In the absence of evidence to the contrary, the courts will presume that mechanical instruments were in order’. Yikes! The fact that this presumption is nonsense, at least for computers, was discussed in an earlier post.

There is plenty of case law discussion around the accuracy of devices used to breath-test motorists for their alcohol level, and defendants being refused access to the devices and associated software. Now, I’m sure that the software contained in these devices contains coding mistakes, but was a particular positive result caused by a coding mistake? Without replicating the exact conditions occurring during the original test, it could be very difficult to say. The prosecution and judges make the common mistake of assuming that because the science behind the test had been validated, the device must produce correct results; ignoring the fact that the implementation of the science in software may contain implementation mistakes. I have lost count of the number of times that scientist/programmers have told me that because the science behind their code is correct, the program output must be correct. My retort that there are typos in the scientific papers they write, therefore there may be typos in their code, usually fails to change their mind; they are so fixated on the correctness of the science that possible mistakes elsewhere are brushed aside.

The naivety of some judges is astonishing. In one case (see 5.44) a professor who was an expert in mathematics, physics and computers, who had read the user manual for an application, but had not seen its source code, was considered qualified to give evidence about the operation of the software!

Much of chapter 5 is essentially an overview of software reliability, written by a barrister for legal professionals, i.e., it is not always a discussion of case law. A barrister’s explanation of how software works can be entertainingly inaccurate, but the material here is correct in a broad-brush sense (and I did not spot any entertaining inaccuracies).

Other than breath-testing, the defence asking for source code is rather like a dog chasing a car. The software for breath-testing devices is likely to be small enough that one person might do a decent job of figuring out how it works; many software systems are not only much, much larger, but are dependent on an ecosystem of hardware/software to run. Figuring out how they work will take multiple (expensive expert) people a lot of time.

Legal precedents are set when both sides spend the money needed to see a court case through to the end. It’s understandable why the case law discussed in this book is so sparse and deals with relatively simple software issues. The cost of fighting a case involving the complexity of modern software is going to be astronomical.

Finding The Middle Ground – a.k.

a.k. from thus spake a.k.

Last time we saw how we can use Euler's method to approximate the solutions of ordinary differential equations, or ODEs, which define the derivative of one variable with respect to another as a function of them both, so that they cannot be solved by direct integration. Specifically, it uses Taylor's theorem to estimate the change in the first variable that results from a small step in the second, iteratively accumulating the results for steps of a constant length to approximate the value of the former at some particular value of the latter.
Unfortunately it isn't very accurate, yielding an accumulated error proportional to the step length, and so this time we shall take a look at a way to improve it.
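
As a reminder of the method being improved upon, here is a minimal C++ sketch of Euler's method (my own illustration, not code from the series; the example ODE is a made-up one whose exact solution is known):

#include <cmath>
#include <iostream>

// Approximate y(x1) for dy/dx = f(x, y) with initial condition y(x0) = y0,
// taking n equally sized steps of length h = (x1 - x0) / n.
template<typename F>
double euler(F f, double x0, double y0, double x1, int n)
{
    const double h = (x1 - x0) / n;
    double x = x0;
    double y = y0;
    for (int i = 0; i < n; ++i)
    {
        y += h * f(x, y); // Taylor's theorem truncated after the first-order term
        x += h;
    }
    return y;
}

int main()
{
    // dy/dx = y with y(0) = 1 has the exact solution e^x; compare at x = 1.
    // Halving the step length roughly halves the accumulated error, as described above.
    for (int n : {10, 20, 40})
        std::cout << "n = " << n << ": "
                  << euler([](double, double y) { return y; }, 0.0, 1.0, 1.0, n)
                  << " (exact " << std::exp(1.0) << ")\n";
}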