The Future Of Computing

Phil Nash from level of indirection

The future is already here! - it's just not very evenly distributed.

I have some ideas about what computing will be like in the future, but it is composed mostly of pieces we already have - or have the promise of. At the centre of my vision is the evolution of the Post-PC device.

What is Post-PC anyway?

Many people attribute this term to Steve Jobs, who certainly brought it to the mainstream in 2007, using it to describe iOS devices and how they would come to eclipse "traditional" PCs in sales and use. This is already coming to pass. But it was actually David Clark who coined the phrase, back in 1999. That article is really worth a read. You should go and read it now. Go on. I'll wait. (Actually I'll just carry on writing - but the appearance will be the same).

So while the Jobsian vision (initially, at least) refers to the reset in expectation, interaction and ease of use that iOS devices ushered in, Clark's original words encompass more - including Cloud Services, cashless payment systems, and most interestingly (to me) finer grained distribution of responsibilities.

It's that last one where I think the most opportunities are yet to play out.

For two or three decades we have obsessed over convergence. Traditional PC systems converged to a single device - the laptop. Post-PC devices have taken that to the next level - a single slab, fronted by a piece of glass that is both the display and primary input. These tiny devices also pack in cameras, extra sensors and even fingerprint scanners and replace what used to be dozens of separate devices. But they have also been born into a world where wireless communication technologies are ubiquitous and come in many forms. Many of their capabilities are distributed in "the cloud", or consist of sending things between devices or connecting wirelessly with additional "smart" peripherals such as cameras, fitness trackers, printers and other devices. They are intensely personal yet highly social. Autonomous yet democratised. Functions such as AirPlay and its counterparts reinforce the idea that these devices are not isolated computing silos. They are participants in a computing ecosystem that is distributed at many different levels. And all so seamlessly that entire demographics that were previously written off as "computer illiterate" are regularly using these devices. They are barely even considered "computers" anymore. The term has come to be associated with that clunky, finicky, bulky thing you used to struggle to get to do anything you want.

This new generation of devices, finally, "just works".

The NeXT Steps

So where does it go from here? Have we reached the end of the evolution of the personal computing device?

Not by a long shot! We're just getting warmed up!

We have just crossed the threshold from general-purpose computers being primarily for the focused use of businesses and enthusiasts to being something that everyone uses and carries with them everywhere. That in itself has been opening up possibilities that had been hitherto unseen or simply not feasible.

The degree to which these devices and their interconnections have embedded themselves into our lives already is quite breath-taking when you take a step back. While, admittedly, I'm a bit of an early adopter, none of the following is particularly extreme:

On a typical weekday morning I am awoken by music served as an alarm from my phone. I get up and go to begin my bathroom routine. Part of that routine involves stepping onto a set of scales that take my weight and fat mass and automatically send the figures, via wi-fi, to a cloud service that is immediately accessible to my phone, collated together with a number of other metrics that are tracked over time.

Once finished and dressed I leave the house and go to my car, which automatically unlocks itself due to the proximity of the key fob in my pocket. I get in and push a button and the car starts. As I start driving the media system in the car has automatically connected, via bluetooth, to my phone, which is also still in my pocket, and continues playing the podcast that I had previously been listening to. I drive to the station and park the car.

As I get out I put my bluetooth headphones on and, at the push of another button, they too have connected to my phone (still in my pocket) and the podcast resumes once again. I get on the train and get my laptop out to do some development work. It connects via a personal wi-fi network to my phone for an internet connection (which, when I pick up LTE, is faster than my home broadband was only a few years ago) - all the time it is still sending audio to my headphones. Later I get off the train and walk to my office. As I walk my steps are being counted by a device on my belt that intermittently sends this information on to my phone via Bluetooth LE, where it is sent to the cloud service that is collating my health related measurements - including heart rate and blood pressure. Along my journey something interesting and unexpected happens. I take out my phone and take a photo, then continue on. As I get near the office a reminder pops up that I had set to go off in that proximity. Eventually I get to my desk where I put my phone in a dock to charge because battery technology is still struggling to keep up with all these demands!

We're only just getting started, so it's not all as seamless as it could be yet, but the story I've just recounted is real and usually all "just works" without a hitch. I think, as time goes on, these sorts of experiences will become more reliable and encompass more things.

But that's the present - wasn't I going to be talking about the future? Well I apologise for burying the lede but it's important to remember how much of the future is already here (albeit not evenly distributed). And my vision is really an extension of the things already discussed. That may sound a little uninspiring - but remember that phenomenon of incremental advances suddenly creating whole new opportunities?

Evenly distributed

One of the criticisms often levelled at the current crop of Post-PC devices is that they are great for consumption, but less so for content creation - or "real work". Many contend that you still need a "real" PC for that. I don't think it's quite so black and white - but there do remain many tasks that are cumbersome to undertake with a tablet or smartphone. It won't always be that way, though. Although tablets with keyboards and mice, and hybrid operating systems, exist now - that's not the way of the future.

I believe that in the not too distant future touch-screens, keyboards, and other input devices will all be merely components of a distributed "system" that consists of both cloud services and local sharing of storage and processing. This system will scale seamlessly to the task at hand. Whether you need more computational power, a different input metaphor, or a different way to output you should be able to add what you need without missing a beat. Right now if your needs outgrow a tablet you have to switch to a whole different device (a laptop, say) - which may or may not sync over data you were working on - in this future you would just add the keyboard if you need it (more easily than now), add some extra processing units (you can do this now in certain limited ways), extra storage (again cloud services already play a role here - as does card based storage in some tablets) or even an extra display (technologies like AirPlay are showing the promise of this).

Each of these components would be what we call "smart". That is, they are computers in their own right with enough processing power and sensors to be aware of their environment and how they connect and interact. Take a display, for example. The display itself would contain accelerometers and gyroscopes so it is aware of its orientation in the real world and whether it is being moved - just like your tablet or smartphone does now. It would also know when another display is nearby, and if so how near and in what direction. Of course the display would be a touch-screen. Imagine you have an object on one display. You could start up a new display, place it next to the first one, touch the object and "flick" it over to the second display. All without any need to configure anything.

Now this system, distributed as it is, would need a centralised "brain". It must scale down to a single device that can be used in isolation. It would make sense for this to be what we currently think of as a smartphone. We would need to carry them with us everywhere and use them for communication, so they would be equipped with audio input and output and cameras - just as our current smartphones are. In fact they needn't be much different to the smartphones we have now. They would be more powerful - but needn't be much more powerful as they can scale up the processing power as needed with additional devices and/or cloud services. And with all data synced to cloud services an alternate device could be picked up and made into your primary hub for the day as necessary.

Everyday revisited

Most of the pieces are already there. There are some challenges - mostly business-oriented rather than technical - but the trend is already in this direction. Yet it all seems very incremental. To see how transformative it would be consider a re-run of my story earlier, reworked to showcase these future technologies (and a few others to spice it up a bit).

It's a typical weekday morning. I am awoken by music serving as an alarm on my primary computing device (which will have a really cool name). I get up and go to begin my bathroom routine. Part of that routine involves having various health metrics sampled and sent to a cloud service. Another part is that my bathroom mirror presents me with some curated information pertinent to the day ahead - the current weather, traffic conditions and any early appointments I have set. Perhaps also the day's news headlines.

Once finished and dressed I leave the house and go to my car, which automatically unlocks itself due to the proximity of the computing device in my pocket. I get in and push a button and the car starts. As I start driving the media system in the car has automatically connected to my computing device, which is still in my pocket, and continues playing the podcast that I had previously been listening to. I drive to the station and park the car. My computing device knows that I have just parked in a car park and automatically communicates with the car park server and pays for my day's stay.

Just before I get out I ask the device to switch its audio over to the earpieces embedded in my ears and the podcast resumes once again. I get on the train and get my tablet out to do some development work - which is, of course, already online. I might also fish out a keyboard - which automatically connects as it comes into proximity with the tablet. Later I get off the train and walk to my office. As I walk my steps are being counted by the peripheral device on my wrist where it is collated along with my other health measurements and sent to the cloud. Along my journey something interesting and unexpected happens. I bring out my device to take a photo. But I really want a good quality picture, so I quickly fish out a lens with a full size sensor from my bag, which wirelessly connects to my device and instantly beefs up the optics to professional standards. I take a great picture then continue on. As I get near the office a reminder pops up on my wrist that I had set to go off in that vicinity. Eventually I get to my desk where I put my device on the wireless charging pad as it connects to my keyboard and large displays and I continue the work I started on the train.

The task at hand

One consequence of this more distributed way of working is that the single-(main-)tasking metaphor that the iPhone doggedly champions is allowed to survive while still allowing multiple applications to run and be interactive. The metaphor becomes "one app per device". Each device is typically running one interactive application at a time - for some devices it is the same app at any time (a keyboard, for example). For a more general purpose device, such as a tablet, it may run one app, while a different app runs on the "phone" beside it. But the devices can see each other and documents and other data may be shared between them - probably using real-world metaphors like the "flick" mentioned earlier.

Conversely at any one time two or more devices may appear to be running a portion of the same app - but in truth they will be running their own instances - with tight integration between them.

My vision of the future is one of heterogeneous, smart devices - some specialised, some generalised - participating in the fabric of a system that surrounds us - and which tends to recede into our surroundings. The seeds are there - and they're growing. I think the next decade is going to be an exciting and transformative time in technology - perhaps even more so than the last!

Postscript...

I had wanted to publish this post by New Year's Eve (2013) but didn't get time to finish up by then. I'm pushing it now, largely un-edited, to try and keep it relatively seasonal (but I may come back and edit more aggressively yet - it's much too rambling for my liking).

As I was finishing I saw a blog post by Dave Addey - which he actually posted back in September - covering very similar material. I haven't had a chance to think about how to work it organically into this post (yet) but didn't want to miss the opportunity to link to it - so I'll do that explicitly here. Go read it now. Go on, I'll wait.

Review of Overload "editorials"

Frances Buontempo from BuontempoConsulting

I have run Charles Stross' code over my Overload "editorials" in the hope of generating some kind of end of year review. Any favourite phrases anyone?
 
---

the incoming information and produced metres of punched cards, for example musing on the name to the death of Ceefax. Started in and being replaced by keyboards. On Wednesday I got 0. On Wednesday I got 258 (plus some on accounts.

the perpetual call. There is the problem. Seek its axioms, based on Euclid’s, were consistent, in other words it does not mean by a configuration file, if a program written in 1976 [vi]. Many editors allow syntax high-lighting now, adding an.

the words 'Surreal' 'Mutation' standing out proudly. Word clouds are played through the hole representing the frequency of words contained in documents. A more traditional approach would present a histogram, with bars showing how many times an item appears. Wikipedia suggests.

the creation of the calculus gave ways to form the language, though paused for Turing. Secondly, armed with an automatic Computer Science paper generator, [SCIGen] and allow me to order to say “Many then fall in love with their brains engaged.

the system within the system, but when it’s dead. Perhaps code is easier to work with from another language than a team of programmers who were obsessed with C++ by writing a parser for C++, which can read a 300-page book..

the mouse (1968) and ways of solving problems to emerge. Introduction of the calculus gave ways of us the fore [Matthews]. Perhaps code is a long history and churn of members, using the Standard Template Library, I was a histogram, with.

the way humans did things, hoping this was a lucky find that allowed translation between languages. Can you will have to press the trendy face of ways to make unchanged code behave in which it has also spawned excellent science fiction.

the real objects located in space numerically, ranging from test and refactor and then compute the same sequence as M. Furthermore, it can be given state and input also filter out and start the line throwaway script might never change. I.

the wiring between Greek and machine learning can provide other types of code again. Electronic wizards can be given the instructions for a four year stint. Allow me to order search results by weight. Feel free to back me up on.

the natural numbers, there are two. The next state. It is trained to appreciate beauty. It is of course, easier to program if you can give rules to mention its powers of intimidation." [DASKeyboard] Research into the requirements or trying to.

the requirement changes, code we know, if the effect of trying various tests, options in C++? C++ is provable or falsifiable. This would allow all of mathematics to sound motifs or short tunes. The authors notice that a musical background made.

the debate you come down on. He then suggests "foldering may miss thereby allowing word-processing. They were also used as M. Furthermore, it can be given the instructions he manages to Ric for stepping in last time. I do often a.

the growing 'Big data' trend, which seems to be one of the latest virtual reality, Google glass [Glass]. A computer interface that allows editing changes the game. Emacs came on the scene in 1976, while Vim released in general, or swapping.

the “Electronic Numerical Integrator and slowly turned into something substantive. It's practically the opposite of engineering. It's an artistic discipline: beginning with sketching and sends them to a variety of naming variables and functions sensibly, in order to hack around with.

the prevalence of doing something. If it works, it easier than an answer. We have sometimes taken as “You have decayed away. Imagine that one day. A variety of ways of editing inputs for computers, so many technical books do you.

the idea of an editorial. How do you own that weigh less than this? How many books I own. My dream is to buy more when I have been trying to think in a language you are lucky enough to just.

the ACCU conference do not count. So, how it well as a strange word. When this wouldn't be necessary. The live speech has grown from Google’s machine learning. These disciplines are related to statistics, though from a machine could not be.

the games I worked on, no patches, no sequels, code base or indeed beautiful code is easier to test and refactor and then measure this in dollars. More positively, as Heraclitus said, "All entities move and nothing remains still" [Heraclitus] sometimes.

the hook. If any readers wish to continue and every Turing computable function.” [Turing_completeness] That was helpful, wasn’t it? Equivalently, it can simulate a universal language and useless.” As stated at the outset, we might need to learn to think deeply.

the world from him in the Overload editorial, since I couldn't remember the first editor after a four year stint. Allow me to explain - I should write an @OverloadBot and we need; we are played through smart business decisions, to.

Using IntelliJ, Adobe ActionScript and AIR SDK to create & package iOS 7 apps.

Pete Barber from C#, C++, Windows & other ramblings

Just a quick post. Lately I've been learning ActionScript. Having seen how easy it is to get an ActionScript project for Flash Player running on Android using Adobe AIR, I wanted to do the same for my iPhone. Getting stuff running on the AIR emulator and on the iOS simulator (under OS X) with AIR was pretty easy. In my case this was using IntelliJ as the IDE (rather than Flash Builder) coupled with the Flex 4.6 SDK. The real fun began when I started to package my application for submission to the App Store, in particular creating the App icons.

The version of the AIR SDK that comes with the Flex 4.6 SDK is 3.1. However, this isn't aware of the new iOS 7 App icons. It would seem a simple matter of adding additional entries to the Application Descriptor file, i.e. to support the 152x152 icon just add

<image152x152>icon152.png</image152x152>

to the <icon> section. Unfortunately the schema doesn't know about this element, so it's rejected as invalid and you end up with the following error:

error 103: application.icon.image152x152 is an unexpected element/attribute

To fix this, the first step is to download & install the latest version of the AIR SDK, which is 3.9 (4.0 beta aside). This does not mean download & install the latest version of the Flex SDK, as that contains an older version of the AIR SDK. Also, as this needs installing on top of the one present in the existing Flex SDK installation, do not download the installer version; instead use the zip (Windows) or tbz2 (OS X). The following link takes you to both: http://www.adobe.com/devnet/air/air-sdk-download.html

Then extract these within the Flex SDK (you might want to take a copy of the SDK first, but if things go wrong you can always re-download it). The easiest way is to just copy/move the archive to the Flex SDK directory and extract the files there, which will overwrite the existing ones.

NOTE: Up to this point the same thing occurred on both Windows & OS X. The following steps only worked on OS X. In particular, updating the schema version in the Application Descriptor didn't work on Windows, and when it was reverted back to 3.1 (& support for iOS 7 App Icons removed) packaging the app was a problem, as the AIR SDK seemed to be missing various binaries needed to create the ARM binaries. I haven't pursued this further as I was working on OS X at this point.

In theory everything should work now. However, if you proceed to package the app it will still give the same 103 error. This is because the schema version number in the Application Descriptor needs updating. Most likely the line will be:

<application xmlns="http://ns.adobe.com/air/application/3.1">

the 3.1 needs changing to 3.9.
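After that change the opening element should read:

<application xmlns="http://ns.adobe.com/air/application/3.9">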

This may not fix the problem though. If you're using IntelliJ (sorry, I don't know about Flash Builder) and have selected the 'Generated' option for the Application Descriptor then it appears that by default IntelliJ (AIR?) creates this with a version of 3.1. In this case you'll need to stop using this option. Instead choose the 'Custom template' option and either create your own or have IntelliJ (AIR?) generate one for you. If you choose the latter option then IntelliJ offers a drop down to specify the version. However, it only lists 3.1 to 3.8. Therefore this will need manually changing to 3.9.


At this point it should be possible to successfully package an iOS app with iOS 7 App Icon support.

A CppQuiz a day, keeps the debugger away!

olvemaudal from Geektalk

C++ is a difficult language to master. Very difficult. It does not take more than a few days away from the keyboard before you start forgetting some of the details that will bite when you visit the dark and dusty corners of the language (sometimes because you work with code written by others).

Last month, Anders Schau Knatten officially launched a great tool for practicing your C++ language skills:

http://cppquiz.org

I recommend that you visit this site once in a while to challenge yourself. A good score on this quiz does not make you a great programmer, but it does suggest that you have a deeper understanding of the language than most. Being fluent in a programming language makes it much easier to avoid the dark and dusty corners so that you can concentrate on writing high quality software instead of spending time in the debugger.

Optional streaming

Phil Nash from level of indirection

Catch has a number of macros that allow values of arbitrary types to be streamed into an ostringstream. The canonical example is the INFO macro:

INFO( "There were " << bottles.size() << " green bottles, hanging on the wall" );

This macro builds up a string that will be passed to the next assertion to be included as an annotation. Note that, unlike with a naked ostringstream there is no leading <<. This makes it clean and uncluttered when you just want to log a single value (such as a string), for example:

INFO( "Weirdness" );
The obvious way to do this is for the macro to provide the leading << prior to its argument. Conceptually something like this:
#define INFO( log ) { \
	std::ostringstream oss; \
	oss << log; \
	useTheString( oss.str() ); \
}

This all works quite nicely. But there are a few other macros that use this idiom, too: WARN, SUCCEED and FAIL.

The last two are of interest because the logging behaviour is more of a secondary concern. The primary behaviour is to appear like a passing or failing assertion, respectively, without the need to actually assert on anything. SUCCEED can be useful if you otherwise have no assertions in a test and you don't want to see warnings about it. FAIL is useful if the situation that leads to the failure is not captured in an expression for some reason. It can also be useful to force a test to fail, perhaps as a placeholder. These are useful macros to have available, but they are not often needed in practice. So when they are needed it's nice to be able to annotate their usage inline - hence the streamed argument.

This is all well and good. But I've found there are still enough cases where I don't want to annotate that having to pass an empty string or make something up is a little annoying. I also use a similar idiom in other projects where it would be nice to be able to make the stream completely optional.

This is not as easy as it sounds, though. The first, and most obvious, issue is that this requires support for variadic macros. Catch has made use of variadic macros, where available, for some time now. In theory they are available to any C++11 compiler. In practice most, if not all, compilers that support any reasonable chunk of C++11 support variadic macros - and most supported them as an extension even before that. That's certainly true of Visual C++, GCC and Clang.

The technically more interesting problem, though, is dealing with that initial <<. Remember the first << is being supplied inside the macro. It will still be there even if the caller does not supply an argument to the macro. If we wrote FAIL the same way we presented INFO earlier (but with variadic macros) it might look something like this:

#define FAIL( ... ) { \
	std::ostringstream oss; \
	oss << __VA_ARGS__; \
	notifyFail( oss.str() ); \
}
... which, with no argument provided, would expand to...
{ 
	std::ostringstream oss; 
	oss << ; 
	notifyFail( oss.str() ); 
}

Do you see the problem? With nothing following the << this will not compile.

So do we need a different operator? What properties would we need? It seems we'd need an operator that comes in two forms: a binary operator that allows us to capture an argument, and a unary operator that allows us to omit the argument. Furthermore the binary form must not require its argument to be enclosed in any sort of brackets. Finally it must have higher precedence than << so we can switch over to normal stream insertion at that point.

That's a long list. Does such an operator exist? Fortunately there's not just one but two such operators to choose from! + and -. The only slight hitch is that the unary form is right-to-left associative, whereas the binary form is left-to-right. So how can we work these in?

Let's pick one of the operators. I've gone with +, but I don't think there is any advantage either way. Because unary + is right-to-left associative it needs to prefix something. So we can't use it at the start of our streaming expression. We can, however, use it at the end. Then we'll need an object to apply it to. The object doesn't actually need to do anything else. I've gone with this implementation of StreamEndStop in Catch:

struct StreamEndStop {
    std::string operator+() {
        return std::string();
    }
};
With this definition the expression +StreamEndStop() now yields an empty string - and streaming an empty string into a stringstream leaves it unchanged. Which means we can write:
{
	std::ostringstream oss; 
	oss << +StreamEndStop();
	notifyFail( oss.str() ); 
}
And oss.str() evaluates to an empty string. Perfect. But what about when we do stream something? Well that would expand to:
{
	std::ostringstream oss; 
	oss << something +StreamEndStop();
	notifyFail( oss.str() ); 
}
... where something could be a string or variable or literal of any type. So we need some way for the expression:
something +StreamEndStop()
to yield the value of something. That's where the binary form of operator+ comes in:
template<typename T>
T const& operator + ( T const& value, StreamEndStop const& ) {
	return value;
}
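To see the whole idiom in one place, here's a minimal, self-contained sketch. This is not Catch's actual code - MY_FAIL and notifyFail are names invented for illustration:

// Minimal sketch of the optional-streaming idiom (invented names, not Catch's code)
#include <iostream>
#include <sstream>
#include <string>

struct StreamEndStop {
    std::string operator+() const { return std::string(); }
};

// Binary form: passes the last streamed value straight through
template<typename T>
T const& operator+( T const& value, StreamEndStop const& ) {
    return value;
}

void notifyFail( std::string const& msg ) {
    std::cout << "FAILED: " << msg << "\n";
}

// +StreamEndStop() terminates the chain: unary + when there is no argument,
// binary + (which binds tighter than <<) when there is one
#define MY_FAIL( ... ) { \
    std::ostringstream oss; \
    oss << __VA_ARGS__ +StreamEndStop(); \
    notifyFail( oss.str() ); \
}

int main() {
    MY_FAIL();                                       // no annotation - yields an empty string
    MY_FAIL( "expected " << 3 << " green bottles" ); // streamed annotation
}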
Now, whether we supply nothing, a single value or multiple values joined by <<s we'll end up with a stringstream containing what we expect. The relevant bit of code in Catch actually looks like this:
Catch::ExpressionResultBuilder( messageType ) \
	<< __VA_ARGS__ \
	+::Catch::StreamEndStop()
which yields an ExpressionResultBuilder that gets passed on elsewhere. This is all protected by CATCH_CONFIG_VARIADIC_MACROS. Otherwise it falls back to:
Catch::ExpressionResultBuilder( messageType ) << log
So a lot of work to save a few explicit empty strings, but sometimes it's the little things.

Exit code crimes

Frances Buontempo from BuontempoConsulting

I've seen a few exit code crimes recently, so I thought I'd list them.

1. Catch something specific

if __name__ == "__main__":
    try:
        run(None)
        sys.exit(0)
    except ValueError:
        sys.exit(1)

So, what does this do if another type of error is thrown? Why is it catching a specific type of error? The rest of the code is logging info - why don't we log the error? How does this make troubleshooting easy, or even possible?

2. Throw something specific

The pattern above is of course fine when the run function works like this:

def run(config):
    try:
        do_the_actual_work(config)   # stand-in for whatever the script really does
    except Exception:
        raise ValueError

3. Tell no-one

Make the last line in your script

exit(0)

regardless of whatever just happened. There was a chance the script could return the exit code of whatever it called, so typing the extra line is, first, effort; second, it achieves nothing; and finally, it means no-one will notice if something went wrong. Or at least not at the time. Perhaps whoever did that thought a shell script won't exit without the keyword exit at the end.



There are others - I'll add them as I find them.




Writing: The Ethical Programmer

Pete Goodliffe from Pete Goodliffe

The latest C Vu magazine from ACCU is out now. It contains my latest Becoming a Better Programmer column. This month it's called The Ethical Programmer; the first instalment of a two-part series on ethics and the modern programmer. Gripping stuff.

This month, I look at our attitudes, at legal issues, and discuss software licenses.

To make the world a better place, you can enjoy a picture of some old rope, and a chicken. I also throw in some bad puns.

Capturing lvalue references in C++11 lambdas

Pete Barber from C#, C++, Windows & other ramblings

Recently the question "what is the type of an lvalue reference when captured by reference in a C++11 lambda?" was asked. It turns out that it's a reference to whatever the original reference referred to. This is just like taking a reference to an existing reference, e.g.

int foo = 7;
int& rfoo = foo;
int& rfoo1 = rfoo;
int& rfoo2 = rfoo1;

All references refer to foo, rather than forming a chain rfoo2->rfoo1->rfoo->foo, meaning the following code

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 << ", rfoo2:" << rfoo2 
          << '\n';
++foo;

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 << ", rfoo2:" << rfoo2 
          << '\n';

std::cout << "&foo:" << &foo << ", &rfoo:" << &rfoo 
          << ", &rfoo1:" << &rfoo1 << ", &rfoo2:" << &rfoo2 
          << '\n';

Which gives:

foo:7, rfoo:7, rfoo1:7, rfoo2:7
foo:8, rfoo:8, rfoo1:8, rfoo2:8
&foo:00D3FB0C, &rfoo:00D3FB0C, &rfoo1:00D3FB0C, &rfoo2:00D3FB0C

I.e. all the references are aliases for the original foo, hence the same value is displayed (including after the original is modified) and the address of each variable is the same: that of foo.

There is nothing surprising here, it's just basic C++, but it's a long time since I've thought about it, which is why, with lambdas, l-value, r-value and universal references, I sometimes do a double take on what was once obvious.

The same happens with lambda capture but it's a slightly more interesting story. Take the following example:

int foo = 99;
int& rfoo = foo;
int& rfoo1 = foo;

std::cout << "foo:" << foo << ", rfoo:" << rfoo 
          << ", rfoo1:" << rfoo1 
          << '\n';

std::cout << "&foo:" << &foo << ", &rfoo:" << &rfoo 
          << ", &rfoo1:" << &rfoo1 
          << '\n';

auto l = [foo, rfoo, &rfoo1]()
{
    std::cout << "foo:" << foo << '\n';
    std::cout << "rfoo:" << rfoo << '\n';
    std::cout << "rfoo1:" << rfoo1 << '\n';

    std::cout << "&foo:" << &foo << ", &rfoo:" 
              << &rfoo << ", &rfoo1:" << &rfoo1 
              << '\n';
};

foo = 100;

l();

Which gives:

foo:99, rfoo:99, rfoo1:99
&foo:00D3FB0C, &rfoo:00D3FB0C, &rfoo1:00D3FB0C
foo:99
rfoo:99
rfoo1:100
&foo:00D3FAE0, &rfoo:00D3FAE4, &rfoo1:00D3FB0C

To begin with it behaves as per the first example in that foo, rfoo and rfoo1 all give the same value, as rfoo and rfoo1 are effectively aliases for foo, as shown when displaying their addresses; they're all the same.

However, when these same variables are captured it's a different story: The capture of foo is of no surprise as this is by-value so displays the captured value of 99 despite the original foo being changed to 100 prior to the lambda being invoked. Its address is that of a new variable; a member of the lambda.

It starts to get interesting with the capture of rfoo. When the lambda is invoked this too displays 99, the original captured value. Also, its address is not that of the original foo. It seems that the reference itself has not been captured but rather what it refers to, in this case an int with the value of 99. It appears to have been magically dereferenced as part of the capture.

This is the correct behaviour and when thought about becomes somewhat obvious. It's just like assigning a variable from a reference, e.g.

int foo = 7;
int& rfoo = foo;
int bar = rfoo;

bar doesn't become an int& and rfoo is effectively dereferenced, except in this scenario there is nothing magical at all: it's just what you'd expect. If int were replaced with auto, e.g.

auto bar = rfoo;

then it would be expected that bar is an int, as auto strips off cv- and reference qualifiers.
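A quick sketch confirming that deduction (this is standard C++11 behaviour, not code from the original example):

#include <type_traits>

int main() {
    int foo = 7;
    int& rfoo = foo;
    auto bar = rfoo;   // auto deduces int: reference and cv-qualifiers are dropped

    static_assert( std::is_same<decltype(bar), int>::value,
                   "bar is a plain int, not int&" );
    (void)bar;         // avoid an unused-variable warning
}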

Finally, there is rfoo1. This too seems odd as it is attempting to take a reference to a reference. As seen in the first example this is perfectly fine: there is no such thing as a reference to a reference, so each level is simply another alias for the original variable.

This is pretty much what's happening here. It's irrelevant that the target of the capture is a reference. In the end the capture by reference is capture by reference of the underlying variable, i.e. what rfoo1 refers to, in this case foo, not rfoo1 itself. This is demonstrated twofold: rfoo1 within the lambda displays the updated value of foo, and the address of rfoo1 within the lambda is that of foo outside it.

This is as per the standard section 5.1.2 Lambda expression sub-note 14:

An entity is captured by copy if it is implicitly captured and the capture-default is = or if it is explicitly
captured with a capture that does not include an &. For each entity captured by copy, an unnamed nonstatic
data member is declared in the closure type. The declaration order of these members is unspecified.
The type of such a data member is the type of the corresponding captured entity if the entity is not a
reference to an object, or the referenced type otherwise. [ Note: If the captured entity is a reference to a
function, the corresponding data member is also a reference to a function. —end note ]

The sentence in bold states that for a reference captured by value the type of the captured value is the type referred to, i.e. the reference aspect has been removed; the crucial part being "or the referenced type otherwise". (NOTE: I haven't experimented with references to functions).

Finally, a vivid example showing that a reference captured by value involves a dereference.

class Bar
{
private:
    int mValue;

public:
    Bar(const Bar&) : mValue(9999)
    {
    }

public:
    Bar(const int value) : mValue(value) {}
    int GetValue() const { return mValue; }
    void SetValue(const int value) { mValue = value; }
};

Bar bar(1);
Bar& rbar = bar;
Bar& rbar1 = bar;

std::cout << "&bar:" << &bar << ", &rbar:" << &rbar<< ", &rbar1:" << &rbar1 << '\n';

auto l2 = [bar, rbar, &rbar1]()
{
    std::cout << "bar:" << bar.GetValue() << '\n';
    std::cout << "rbar:" << rbar.GetValue() << '\n';
    std::cout << "rbar1:" << rbar1.GetValue() << '\n';

    std::cout << "&bar:" << &bar << ", &rbar:" << &rbar
              << ", &rbar1:" << &rbar1 << '\n';
};

bar.SetValue(2);

l2();

The class Bar provides a crude copy-constructor that sets the stored value to 9999. The following output is similar to that in the previous example in that the addresses of bar and rbar in the lambda differ from that of the original bar, showing they're copies, whilst rbar1 is the same. Secondly, the value of mValue stored within Bar is shown as 9999 for the first two captured variables, meaning they were copy-constructed.

&bar:00D3FB0C, &rbar:00D3FB0C, &rbar1:00D3FB0C
bar:9999
rbar:9999
rbar1:2
&bar:00D3FAE0, &rbar:00D3FAE4, &rbar1:00D3FB0C

Making the copy-constructor private (by commenting out the seemingly unnecessary first 'public:') prevents compilation.

1>------ Build started: Project: References, Configuration: Debug Win32 ------
1>  main.cpp
1>c:\users\pete\desktop\references\references\main.cpp(85): error C2248: 'Bar::Bar' : cannot access private member declared in class 'Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(59) : see declaration of 'Bar::Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(54) : see declaration of 'Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(59) : see declaration of 'Bar::Bar'
1>          c:\users\pete\desktop\references\references\main.cpp(54) : see declaration of 'Bar'
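As a side note, in C++11 the more direct way to make a type non-copyable is to delete the copy constructor, and a by-value lambda capture then fails in much the same way, just with a 'deleted function' error rather than an access error. Here's a minimal sketch; NonCopyableBar is just an invented name for illustration:

class NonCopyableBar
{
public:
    NonCopyableBar(const NonCopyableBar&) = delete;   // no copies allowed
    NonCopyableBar(const int value) : mValue(value) {}
    int GetValue() const { return mValue; }
private:
    int mValue;
};

// NonCopyableBar ncb(1);
// auto l3 = [ncb]() { return ncb.GetValue(); };      // error: use of deleted copy constructor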

Writing this post has clarified the situation for me; I hope it helps you as well.

The sample code is available here.