Pdb ftw

Frances Buontempo from BuontempoConsulting

pdb - the Python debugger

If a script throws an exception, try running it in the debugger:
python -m pdb myscript.py
For example, given the following (rubbish) Python program:
C:\Dev\src>cat bad.py
def naughty():
    raise Exception()

naughty()

If we just run it, the exception tries to escape ...
C:\Dev\src>python bad.py
Traceback (most recent call last):
  File "bad.py", line 4, in <module>
    naughty()
  File "bad.py", line 2, in naughty
    raise Exception()
Exception

So, we know what line the problem's on. We could change the script to see what's going on using
import pdb; pdb.set_trace()
or we could run it under a debugger.
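
For instance, a minimal sketch of bad.py with a trace point added (the import pdb; pdb.set_trace() line being the only change) might look like this:

def naughty():
    import pdb; pdb.set_trace()  # pauses here; the debugger stops on the next statement
    raise Exception()

naughty()

Run normally, this drops you into a (Pdb) prompt just before the exception is raised.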

Run script through debugger

Invoke the pdb module in Python and send it your script, e.g.
python -m pdb bad.py
This gives a (Pdb) prompt to type instructions into:
> c:\dev\src\bad.py(1)<module>()
-> def naughty():
(Pdb)
Type 'c' for continue - it will halt when it gets an exception, as follows:
(Pdb) c
Traceback (most recent call last):
  File "C:\Python27\lib\pdb.py", line 1314, in main
    pdb._runscript(mainpyfile)
  File "C:\Python27\lib\pdb.py", line 1233, in _runscript
    self.run(statement)
  File "C:\Python27\lib\bdb.py", line 387, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
  File "bad.py", line 1, in <module>
    def naughty():
  File "bad.py", line 2, in naughty
    raise Exception()
Exception
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> c:\dev\src\bad.py(2)naughty()
-> raise Exception()
(Pdb)
At this point
 (Pdb) p <variable_name>
can be used to inspect variables, and
 (Pdb) a
shows any arguments to a function.
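As a hypothetical illustration, suppose the failing function had been divide(top, bottom), blowing up on a division by zero; the post mortem session might go something like this (names invented, output approximate):

 (Pdb) a
 top = 1
 bottom = 0
 (Pdb) p bottom
 0
 (Pdb) p top + bottom
 1

Note that p accepts any expression, not just a plain variable name.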

Main Pdb commands

Type 'q' for quit, 's' to step (maybe into another function) or 'n' to move on to the next line.
'w' shows where you are in the stack, 'u' moves up and 'd' moves down.
b(reak) [[filename:]lineno] sets a breakpoint.
There are more details in the manual: http://docs.python.org/2/library/pdb.html
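
For example, a short session with bad.py, setting a breakpoint on the raise and looking at the stack, might go roughly like this (output approximate):

 (Pdb) b bad.py:2
 Breakpoint 1 at c:\dev\src\bad.py:2
 (Pdb) c
 > c:\dev\src\bad.py(2)naughty()
 -> raise Exception()
 (Pdb) w
   c:\dev\src\bad.py(4)<module>()
 -> naughty()
 > c:\dev\src\bad.py(2)naughty()
 -> raise Exception()
 (Pdb) q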

Virtual functions in constructors and destructors

Frances Buontempo from BuontempoConsulting

Interview question.

What does this do?

#include <iostream>

class Base
{
public:
    Base()
    {
        log();
    }
    virtual ~Base()
    {
        log();
    }

    //virtual void log() = 0; //note this compiles but doesn't link
    virtual void log()
    {
        std::cout << "Base\n";
    }
};

class Derived : public Base
{
public:
    Derived()
    {
        log();
    }
    ~Derived()
    {
        log();
    }
    virtual void log()
    {
        std::cout << "Derived\n";
    }
};

int main()
{
    Derived d;
}

I half remembered this article, http://www.artima.com/cppsource/nevercall.html, so I wasn't sure.

Keyboard configuration for Windows’ developers on OS X (& also IntelliJ)

Pete Barber from C#, C++, Windows & other ramblings

Recently I've been doing some ActionScript programming. Rather than target a Flash Player app, I've been using ActionScript in combination with Adobe AIR in order to create an iOS app. This has meant I've been spending time in OS X and using IntelliJ with the ActionScript/Flex/AIR plugin as my IDE.

Most of my previous work has been done on UNIX (so command lines & vi) and Windows. In particular I depend on the various Windows & Visual Studio editor key combinations plus the Insert, Delete, Home & End keys. For starters this means I use a PC keyboard with the iMac rather than the Apple keyboard as it lacks these keys; I'm also based in the UK so I use a British PC keyboard.

In addition to these keys I wanted the following combos to be available across all of OS X and any apps.
  • Alt-Tab to cycle through apps.
  • Ctrl-F for find.
  • Ctrl-S for save current document (I habitually press this whilst editing).
  • Ctrl-C & Ctrl-V for copy & paste.
  • Ctrl-Z for undo.
  • Obtain the correct behaviour for the '\|' key and the '`¬' key. They were swapped initially.
  • '@' & '"' correctly mapped.
Additionally, I wanted these combos to be available in IntelliJ:
  • Ctrl-Left Arrow & Ctrl-Right Arrow to move to the previous/next word respectively
    • Plus their selected text equivalents.
  • Ctrl-Home/Ctrl-End to move to the top/bottom of the document being edited.
This post is a description of what I installed & configured to allow me to achieve this.

Configuring a British PC keyboard

The first step was to tell OS X I was using a PC keyboard, specifically a British one. This is achieved through System Preferences->Keyboard->Input Sources.



Here new input sources can be added by clicking the '+'. I added 'British - PC'. Adding doesn't mean it will be used though. For this, also check the 'Show Input menu in menu bar' option. This adds a country flag and the name of the input source to the menu bar. Clicking on this allows the input source to be changed. If you swap between a PC keyboard and the iMac keyboard (which I do from time to time) this is an easy way to switch.



What all this gives you is the '"' and '@' keys in the right place. Otherwise they're transposed. Note: backslash and backquote remain transposed.

Windows' key combos

The second step was obtaining the Windows' key combos. This requires mapping the Windows' combos to the corresponding OS X combos whilst preventing the Windows' combos being interpreted as something else. After some searching it seemed the preferred solution is a 3rd party program called KeyRemap4MacBook. According to various reviews it does the job well, but configuring it, especially creating your own mappings, is complicated. The former is down to the UI and the latter to the XML format. All these things are true, but once you've got used to it, like a lot of things, it's nowhere near as daunting as it first seems; and the documentation is very good too. Part of the motivation for this post is to record the configuration & steps for my benefit should I need to do it again.

KeyRemap4MacBook comes with a number of canned mappings. In addition to mapping across the board they can be limited to include or exclude a specific set of apps. In particular I make use of a set of pre-defined mappings from the 'For PC Users' section which won't be applied in VMs (generally running Windows, especially useful when running Windows 8 in Parallels from the Bootcamp partition) and terminals.

As I still use the Apple keyboard from time to time when I want to do very Apple-ly stuff, I have the 'Don't remap Apple's keyboards' option enabled.

What I use

The canned mappings I use from 'For PC Users' section are:
  • Use PC Style Copy/Paste
  • Use PC Style Undo
  • Use PC Style Save
  • Use PC Style Find
These can easily be seen in KeyRemap4MacBook using the 'show enabled only' (from the many definitions) option:



With very little work this meets the majority of my needs. In addition to the 'For PC Users' and 'General' sections you may also notice the three re-mappings at the start. These are custom mappings I had to create. I'm not going to explain the XML format as this is available in the documentation. Instead, here are my custom mappings.

<?xml version="1.0"?>
<root>
 <appdef>
  <appname>INTELLIJ</appname>
  <equal>com.jetbrains.intellij</equal>
 </appdef>

 <replacementdef>
  <replacementname>MY_IGNORE_APPS</replacementname>
  <replacementvalue>VIRTUALMACHINE, TERMINAL, REMOTEDESKTOPCONNECTION, VNC, INTELLIJ</replacementvalue>
 </replacementdef>

 <replacementdef>
  <replacementname>MY_IGNORE_APPS_APPENIDX</replacementname>
  <replacementvalue>(Except in Virtual Machine, Terminal, RDC, VNC and IntelliJ)</replacementvalue>
 </replacementdef>


 <item>
  <name>Use PC style alt-TAB for application switching</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.swap_alt-tab_and_cmd-tab</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <autogen>__KeyToKey__ KeyCode::TAB, ModifierFlag::OPTION_L, KeyCode::TAB, ModifierFlag::COMMAND_L</autogen>
 </item>

 <item>
  <name>Swap backslash and backquote for British PC keyboard</name>
  <identifier>private.swap_backslash_and_quote_for_britishpc</identifier>
  <autogen>__KeyToKey__ KeyCode::DANISH_DOLLAR, KeyCode::BACKQUOTE</autogen>
  <autogen>__KeyToKey__ KeyCode::BACKQUOTE, KeyCode::DANISH_DOLLAR</autogen>
 </item>

 <item>
  <name>Use PC Ctrl-Home/End to move to top/bottom of document</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.use_PC_ctrl-home/end</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <autogen>__KeyToKey__ KeyCode::HOME, ModifierFlag::CONTROL_L, KeyCode::CURSOR_UP, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::HOME, ModifierFlag::CONTROL_R, KeyCode::CURSOR_UP, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::END, ModifierFlag::CONTROL_L, KeyCode::CURSOR_DOWN, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::END, ModifierFlag::CONTROL_R, KeyCode::CURSOR_DOWN, ModifierFlag::COMMAND_L</autogen>
 </item>

</root>

I didn't want these mappings, other than swapping backslash and backquote, to be applied in various apps., i.e. VMs, VNC & RDC (where Windows is running anyway) and Terminal, where it interferes with bash. To enable this I used the <not> element giving a list of excluded apps., along with the appendix element to state this in the description.

Rather than copy the list of apps. and the description each time, I used KeyRemap4MacBook's replacement macro feature. There is a list of built-in apps. that can be referred to, but I also looked at the XML file from the source that contains the 'For PC Users' mappings.

The _L & _R refer to keys which appear twice: on the left & right side of the keyboard.

The format allows multiple mappings to be grouped. These don't have to be similar but that is the intention, i.e. all the ctrl-home/end mappings are together. Each <autogen> entry is a separate mapping but they are enabled/disabled collectively.

The format isn't too bad. The weird thing from an XML perspective is the <autogen> element. This is the source combo followed by the combo to generate instead, separated by a comma. I think it would be easier to understand if this element were broken down into child elements, with say <to> and <from> elements.

This private.xml is also available as a GIST.

IntelliJ

IntelliJ complicates things slightly as it provides its own key-mapping functionality similar to that of KeyRemap4MacBook but solely for itself. This means that there can be a conflict with KeyRemap4MacBook.

I'm writing this a while after I originally implemented it. In fact part of the reason I'm writing this post at all is so I have a record of what's required. Since getting this working it looks like I've changed my IntelliJ Keymap (from Preferences). Originally it was set to 'Mac OS X' but is now set to 'Default'.

When it was set to 'Mac OS X' the KeyRemap4MacBook mappings worked well except that Ctrl-Home/End wouldn't work. This is because that combination is mapped to something else. Additionally the 'Mac OS X' mappings don't provide support for Ctrl-Left/Right-Arrow for hopping back and forth over words. My initial solution to this was to modify (by taking a copy) the 'Mac OS X' mapping:
  • Change 'Move Caret to Next Word' from 'alt ->' to 'ctrl ->'.
  • Change 'Move Caret to Previous Word' from 'alt <-' to 'ctrl <-'.
  • Change 'Move Caret to Next Word with Selection' from 'alt shift ->' to 'ctrl shift ->'.
  • Change 'Move Caret to Previous Word with Selection' from 'alt shift <-' to 'ctrl shift <-'.
  • Change 'Move Caret to Text End' from 'cmd end' to 'ctrl end'.
  • Change 'Move Caret to Text Start' from 'cmd home' to 'ctrl home'.
However, it seems that the 'Default' key mappings are as per Windows, but when KeyRemap4MacBook is running they all conflict. In fact I may have missed this completely when initially figuring this out.

Therefore the far easier solution is to select the 'Default' IntelliJ mapping and, using KeyRemap4MacBook, make it aware of IntelliJ and exclude it from key re-mapping as per the other applications. This is the purpose of the appdef section in private.xml. KeyRemap4MacBook doesn't need definitions for the other excluded apps. as these are built-in.

The mappings are not perfect. IntelliJ is fine, but that is now down to IntelliJ's own mapping, having excluded it from KeyRemap4MacBook's re-mapping. I still miss Ctrl-Left/Right-Arrow and Ctrl-Home/End in other apps., though hopefully that is just a case of defining more mappings, and the effectiveness of the Ctrl-Z (undo) mapping seems to vary.



The Future Of Computing

Phil Nash from level of indirection

The future is already here - it's just not very evenly distributed.

I have some ideas about what computing will be like in the future but it is composed mostly of pieces we already have - or have the promise of. At the centre of my vision is the evolution of the Post-PC device.

What is Post PC anyway?

Many people attribute this term to Steve Jobs, who certainly brought it to the mainstream in 2007, using it to describe iOS devices and how they would come to eclipse "traditional" PCs in sales and use. This is already coming to pass. But it was actually David Clark who coined the phrase, back in 1999. That article is really worth a read. You should go and read it now. Go on. I'll wait. (Actually I'll just carry on writing - but the appearance will be the same).

So while the Jobsian vision (initially, at least) refers to the reset in expectation, interaction and ease of use that iOS devices ushered in, Clark's original words encompass more - including Cloud Services, cashless payment systems, and most interestingly (to me) finer grained distribution of responsibilities.

It's that last one where I think the most opportunities are yet to play out.

For two or three decades we have obsessed over convergence. Traditional PC systems converged to a single device - the laptop. Post-PC devices have taken that to the next level - a single slab, fronted by a piece of glass that is both the display and primary input. These tiny devices also pack in cameras, extra sensors and even fingerprint scanners and replace what used to be dozens of separate devices. But they have also been born into a world where wireless communication technologies are ubiquitous and come in many forms. Many of their capabilities are distributed in "the cloud", or consist of sending things between devices or connecting wirelessly with additional "smart" peripherals such as cameras, fitness trackers, printers and other devices. They are intensely personal yet highly social. Autonomous yet democratised. Functions such as AirPlay and its counterparts reinforce the idea that these devices are not isolated computing silos. They are participants in a computing ecosystem that is distributed at many different levels. And all so seamlessly that entire demographics that were previously written off as "computer illiterate" are regularly using these devices. They are barely even considered "computers" anymore. The term has come to be associated with that clunky, finicky, bulky thing you used to struggle to get to do anything you want.

This new generation of devices, finally, "just works".

The NeXT Steps

So where does it go from here? Have we reached the end of the evolution of the personal computing device?

Not by a long shot! We're just getting warmed up!

We have just crossed the threshold from general-purpose computers being primarily for the focused use of businesses and enthusiasts to being something that everyone uses and carries with them everywhere. That in itself has been opening up possibilities that had hitherto been unseen or simply not feasible.

The degree to which these devices and their interconnections have embedded themselves into our lives already is quite breath-taking when you take a step back. While, admittedly, I'm a bit of an early adopter, none of the following is particularly extreme:

On a typical weekday morning I am awoken by music served as an alarm from my phone. I get up and go to begin my bathroom routine. Part of that routine involves stepping onto a set of scales that take my weight and fat mass and automatically send the figures, via wi-fi, to a cloud service that is immediately accessible to my phone, collated together with a number of other metrics that are tracked over time.

Once finished and dressed I leave the house and go to my car, which automatically unlocks itself due to the proximity of the key fob in my pocket. I get in and push a button and the car starts. As I start driving the media system in the car has automatically connected, via bluetooth, to my phone, which is also still in my pocket, and continues playing the podcast that I had previously been listening to. I drive to the station and park the car.

As I get out I put my bluetooth headphones on and, at the push of another button, they too have connected to my phone (still in my pocket) and the podcast resumes once again. I get on the train and get my laptop out to do some development work. It connects via a personal wi-fi network to my phone for an internet connection (which, when I pick up LTE, is faster than my home broadband was only a few years ago) - all the time it is still sending audio to my headphones. Later I get off the train and walk to my office. As I walk my steps are being counted by a device on my belt that intermittently sends this information on to my phone via Bluetooth LE, where it is sent to the cloud service that is collating my health related measurements - including heart rate and blood pressure. Along my journey something interesting and unexpected happens. I take out my phone and take a photo, then continue on. As I get near the office a reminder pops up that I had set to go off in that proximity. Eventually I get to my desk where I put my phone in a dock to charge because battery technology is still struggling to keep up with all these demands!

We're only just getting started, so it's not all as seamless as it could be yet, but the story I've just recounted is real and usually all "just works" without a hitch. I think, as time goes on, these sort of experiences will become more reliable and encompass more things.

But that's the present - wasn't I going to be talking about the future? Well I apologise for burying the lede but it's important to remember how much of the future is already here (albeit not evenly distributed). And my vision is really an extension of the things already discussed. That may sound a little uninspiring - but remember that phenomenon of incremental advances suddenly creating whole new opportunities?

Evenly distributed

One of the criticisms often levelled at the current crop of Post-PC devices is that they are great for consumption, but less so for content creation - or "real work". Many contend that you still need a "real" PC for that. I don't think it's quite so black and white - but there do remain many tasks that are cumbersome to undertake with a tablet or smartphone. It won't always be that way, though. Although tablets with keyboards and mice, and hybrid operating systems, exist now - that's not the way of the future.

I believe that in the not too distant future touch-screens, keyboards, and other input devices will all be merely components of a distributed "system" that consists of both cloud services and local sharing of storage and processing. This system will scale seamlessly to the task at hand. Whether you need more computational power, a different input metaphor, or a different way to output you should be able to add what you need without missing a beat. Right now if your needs outgrow a tablet you have to switch to a whole different device (a laptop, say) - which may or may not sync over data you were working on - in this future you would just add the keyboard if you need it (more easily than now), add some extra processing units (you can do this now in certain limited ways), extra storage (again cloud services already play a role here - as does card based storage in some tablets) or even an extra display (technologies like AirPlay are showing the promise of this).

Each of these components would be what we call "smart". That is they are computers in their own right with enough processing power and sensors to be aware of their environment and how they connect and interact. Take a display, for example. The display itself would contain accelerometers and gyroscopes so it is aware of its orientation in the real world and whether it is being moved - just like your tablet or smartphone does now. It would also know when another display is nearby, and if so how near and in what direction. Of course the display would be a touch-screen. Imagine you have an object on one display. You could start up a new display, place it next to the first one, touch the object and "flick" it over to the second display. All without any need to configure anything.

Now this system, distributed as it is, would need a centralised "brain". It must scale down to a single device that can be used in isolation. It would make sense for this to be what we currently think of as a smartphone. We would need to carry them with us everywhere and use them for communication, so it would be equipped with audio input and output and cameras - just as our current smartphones are. In fact they needn't be much different to the smartphones we have now. They would be more powerful - but needn't be much more powerful as they can scale up the processing power as needed with additional devices and/or cloud services. And with all data synced to cloud services an alternate device could be picked up and made into your primary hub for the day as necessary.

Everyday revisited

Most of the pieces are already there. There are some challenges - mostly business-oriented rather than technical - but the trend is already in this direction. Yet it all seems very incremental. To see how transformative it would be consider a re-run of my story earlier, reworked to showcase these future technologies (and a few others to spice it up a bit).

It's a typical weekday morning. I am awoken by music serving as an alarm on my primary computing device (which will have a really cool name). I get up and go to begin my bathroom routine. Part of that routine involves having various health metrics sampled and sent to a cloud service. Another part is that my bathroom mirror presents me with some curated information pertinent to the day ahead - the current weather, traffic conditions and any early appointments I have set. Perhaps also the day's news headlines.

Once finished and dressed I leave the house and go to my car, which automatically unlocks itself due to the proximity of the computing device in my pocket. I get in and push a button and the car starts. As I start driving the media system in the car has automatically connected to my computing device, which is still in my pocket, and continues playing the podcast that I had previously been listening to. I drive to the station and park the car. My computing device knows that I have just parked in a car park and automatically communicates with the car park server and pays for my day's stay.

Just before I get out I ask the device to switch its audio over to the earpieces embedded in my ears and the podcast resumes once again. I get on the train and get my tablet out to do some development work - which is, of course, already online. I might also fish out a keyboard - which automatically connects as it comes into proximity with the tablet. Later I get off the train and walk to my office. As I walk my steps are being counted by the peripheral device on my wrist, where they are collated along with my other health measurements and sent to the cloud. Along my journey something interesting and unexpected happens. I bring out my device to take a photo. But I really want a good quality picture, so I quickly fish out a lens with a full size sensor from my bag, which wirelessly connects to my device and instantly beefs up the optics to professional standards. I take a great picture then continue on. As I get near the office a reminder pops up on my wrist that I had set to go off in that vicinity. Eventually I get to my desk where I put my device on the wireless charging pad as it connects to my keyboard and large displays and I continue the work I started on the train.

The task at hand

One consequence of this more distributed way of working is that the single-(main-)tasking metaphor that the iPhone doggedly champions is allowed to survive while still allowing multiple applications to run and be interactive. The metaphor becomes "one app per device". Each device is typically running one interactive application at a time - for some devices it is the same app at any time (a keyboard, for example). For a more general purpose device, such as a tablet, it may run one app, while a different app runs on the "phone" beside it. But the devices can see each other and documents and other data may be shared between them - probably using real-world metaphors like the "flick" mentioned earlier.

Conversely at any one time two or more devices may appear to be running a portion of the same app - but in truth they will be running their own instances - with tight integration between them.

My vision of the future is one of heterogeneous, smart devices - some specialised, some generalised - participating in the fabric of a system that surrounds us - and which tends to recede into our surroundings. The seeds are there - and they're growing. I think the next decade is going to be an exciting and transformative time in technology - perhaps even more so than the last!

Postscript...

I had wanted to publish this post by New Year's Eve (2013) but didn't get time to finish up by then. I'm pushing it now, largely un-edited, to try and keep it relatively seasonal (but I may come back and edit more aggressively yet - it's much too rambling for my liking).

As I was finishing I saw a blog post by Dave Addey - which he actually posted back in September - covering very similar material. I haven't had a chance to think how to work it in organically to this post (yet) but didn't want to miss the opportunity to link to it - so I'll do that explicitly here. Go read it now. Go on, I'll wait.