Top four JavaZone 2013 talk – The Unreasonable Effectiveness of Dynamic Typing

Rob Smallshire from Good With Computers

I'm very happy to see that my talk on The Unreasonable Effectiveness of Dynamic Typing was rated fourth of all the talks in the show. Thanks to everyone who attended and voted.

This talk is perhaps deliberately provocative, but only with the intention of provoking critical thinking and empiricism around the tools we use. I'm genuinely curious as to why programs in dynamic languages are as reliable as they are, although I confess I don't yet have many of the answers.

New Book: Becoming a Better Programmer

Pete Goodliffe from Pete Goodliffe

After many years of gestation, my latest book is available for purchase as an early-access pre-release.

Called Becoming a Better Programmer, it is a handbook for people who care about code.

This early access edition already contains 14 chapters, and there are many more coming. There is a free "sample" version available so you get a taster of what you'll be purchasing.

As a pre-release, it's available at an introductory price. The price will go steadily upwards as the book nears completion. Buy now to enjoy the best value! (That's the sales pitch - I suck at that kind of thing.)

Get it from gum.co/becomingbetter. Join the book discussion here: moot.it/becomingbetter.

I would genuinely love to hear any feedback, praise or criticism that will help improve the book. Suggestions for topics to cover are also of real interest.

My honest hope is that this book does just what it says on the cover: helps many developers improve their skills, to become more productive programmers.

Buy it now!

Mutable structs in C#

Frances Buontempo from BuontempoConsulting

We know what this does, right?

    using System;

    struct Pricer
    {
        public double Price;
        public long Size;

        // Fold the latest execution into a volume-weighted average price.
        public void AddExecution(long lastSize, double lastPrice)
        {
            Price = (Price * Size + lastSize * lastPrice) / (Size + lastSize);
            Size += lastSize;
        }
    }

    class PriceData
    {
        public Pricer pricer; // a mutable struct as a public field
    }

    class Program
    {
        static void Main(string[] args)
        {
            Pricer price = new Pricer { Price = 0.0, Size = 0 };
            for (int i = 0; i < 5; ++i)
            {
                Console.WriteLine("{0} {1}", price.Price, price.Size);
                price.AddExecution(1, 2.5 * (i + 1)); // mutates the local struct directly
            }

            for (int i = 0; i < 5; ++i)
            {
                Console.WriteLine("{0} {1}", price.Price, price.Size);
                price.Price = 2.5 * (i + 1);
            }

            PriceData priceData = new PriceData();
            priceData.pricer = price; // assignment copies the struct
            for (int i = 0; i < 5; ++i)
            {
                Console.WriteLine("{0} {1}", price.Price, price.Size);
                priceData.pricer.AddExecution(1, 2.5 * (i + 1)); // mutates the copy, not price
            }
        }
    }

Eric Lippert tells us about mutating *readonly* structs: http://blogs.msdn.com/b/ericlippert/archive/2008/05/14/mutating-readonly-structs.aspx but even non-readonly structs can get us in a mess.
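
If I've traced the arithmetic correctly, the sting is in the third loop: priceData.pricer = price copies the struct, so AddExecution mutates the copy held inside priceData while the loop keeps printing the untouched local. The tail of the output should be:

    12.5 5
    12.5 5
    12.5 5
    12.5 5
    12.5 5

Had pricer been a property rather than a public field, it would be worse still: calling AddExecution on the struct returned by the getter would mutate a temporary, and the update would be lost entirely.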

Speaking: Words in Code (ACCU 2014)

Pete Goodliffe from Pete Goodliffe

I'll be speaking at this year's excellent ACCU Conference 2014.

This year my talk is: Words in Code, a technical (and not so technical) appraisal of how developers write. It's a practical distillation of my fourteen years as a magazine columnist, multiple book projects, and more.

Come and enjoy it on Thursday 10th April at 10am. The conference's earlybird booking deadline is February the 14th. ACCU is one of the highlights of my developer year - it's a truly excellent conference. If you've not considered going, check it out!

The full synopsis is available on the session page:
As software developers we do not just write code. We write many, many words too.
We write documentation, comments, manuals, specifications, technical articles, wiki documentation, and more. Maybe even magazine articles and books.
This talk discusses some practicalities of writing well, both stylistically and practically. We'll talk about prose, but also about the right "geek" way of writing, the storage formats, toolchains, and the storage of our words.
We'll cover:
  • writing style
  • what's appropriate: what to write and what not to write
  • keeping track: "source control" for words
  • toolchains: what toolsets to use to write and prepare output
  • markup languages vs "wysiwyg" tools
  • sharing your words with non-geeks
At the end of this talk, you'll have a good idea how to put together an example "document toolchain" taking source-controlled words in a humane markup style, and creating high-quality HTML, PDF (fully styled, print-ready), ePub and Kindle output, as well as Word-friendly versions.

Pdb ftw

Frances Buontempo from BuontempoConsulting

pdb - the Python debugger

If a script throws an exception, try running it in the debugger:
python -m pdb myscript.py
For example, given the following (rubbish) Python program
C:\Dev\src>cat bad.py
def naughty():
    raise Exception()

naughty()

if we just run it, the exception tries to escape ...
C:\Dev\src>python bad.py
Traceback (most recent call last):
  File "bad.py", line 4, in <module>
    naughty()
  File "bad.py", line 2, in naughty
    raise Exception()
Exception

So, we know what line the problem's on. We could change the script to see what's going on using
import pdb; pdb.set_trace()
or we could run it under a debugger.
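
For example, a hypothetical instrumented version of bad.py might look like this; when execution reaches the set_trace() call the script pauses and drops you into a (Pdb) prompt:

def naughty():
    import pdb; pdb.set_trace()  # pause here and hand control to the debugger
    raise Exception()

naughty()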

Run script through debugger

Invoke the pdb module in python and send it your script, e.g.
python -m pdb bad.py
This gives a (Pdb) prompt to type instructions into:
> c:\dev\src\bad.py(1)<module>()
-> def naughty():
(Pdb)
Type 'c' for continue - it will halt when it gets an exception, as follows
(Pdb) c
Traceback (most recent call last):
  File "C:\Python27\lib\pdb.py", line 1314, in main
    pdb._runscript(mainpyfile)
  File "C:\Python27\lib\pdb.py", line 1233, in _runscript
    self.run(statement)
  File "C:\Python27\lib\bdb.py", line 387, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
  File "bad.py", line 1, in <module>
    def naughty():
  File "bad.py", line 2, in naughty
    raise Exception()
Exception
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> c:\dev\src\bad.py(2)naughty()
-> raise Exception()
(Pdb)
At this point
 (Pdb) p <variable_name>
can be used to inspect variables, and
 (Pdb) a
shows any arguments to a function.

Main Pdb commands

Type 'q' for quit, 's' to step (maybe into another function) or 'n' to move on to the next line.
'w' shows where you are in the stack, 'u' moves up and 'd' moves down.
b(reak) [[filename:]lineno] sets a breakpoint.
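For example, a hypothetical session that sets a breakpoint on line 2 of bad.py and continues to it might look like this:

(Pdb) b bad.py:2
Breakpoint 1 at c:\dev\src\bad.py:2
(Pdb) c
> c:\dev\src\bad.py(2)naughty()
-> raise Exception()
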
There are more details in the manual: http://docs.python.org/2/library/pdb.html

Virtual functions in constructors and destructors

Frances Buontempo from BuontempoConsulting

Interview question.

What does this do?

#include <iostream>

class Base
{
public:
    Base()
    {
        log();
    }
    virtual ~Base()
    {
        log();
    }

    //virtual void log() = 0;//note this compiles but doesn't link
    virtual void log()
    {
        std::cout << "Base\n";
    }
};

class Derived : public Base
{
public:
    Derived()
    {
        log();
    }
    ~Derived()
    {
        log();
    }
    virtual void log()
    {
        std::cout << "Derived\n";
    }
};

int main()
{
    Derived d;
}

I half remembered this http://www.artima.com/cppsource/nevercall.html so wasn't sure.
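
The answer, if I'm remembering the rules correctly: inside Base's constructor and destructor the dynamic type of the object is still Base, so the virtual calls there resolve to Base::log() rather than Derived's override. The expected output is therefore:

Base
Derived
Derived
Base

And with the pure virtual version commented back in, the call from Base's constructor can only ever reach Base::log(), which has no definition - hence the comment that it compiles but doesn't link.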

Keyboard configuration for Windows’ developers on OS X (& also IntelliJ)

Pete Barber from C#, C++, Windows & other ramblings

Recently I've been doing some ActionScript programming. Rather than target a Flash Player app, I've been using ActionScript in combination with Adobe AIR in order to create an iOS app. This has meant I've been spending time in OS X and using IntelliJ with the ActionScript/Flex/AIR plugin as my IDE.

Most of my previous work has been done on UNIX (so command lines & vi) and Windows. In particular I depend on the various Windows & Visual Studio editor key combinations plus the Insert, Delete, Home & End keys. For starters this means I use a PC keyboard with the iMac rather than the Apple keyboard, as the latter lacks these keys; I'm also based in the UK so I use a British PC keyboard.

In addition to these keys I wanted the following combos to be available across all of OS X and any apps.
  • Alt-Tab to cycle through apps.
  • Ctrl-F for find.
  • Ctrl-S for save current document (I habitually press this whilst editing).
  • Ctrl-C & Ctrl-V for copy & paste.
  • Ctrl-Z for undo.
  • Obtain the correct behaviour for the '\|' key and the '`¬' key. They were swapped initially.
  • '@' & '"' correctly mapped.
Additionally, I wanted these combos to be available in IntelliJ:
  • Ctrl-Left Arrow & Ctrl-Right Arrow to move to the previous/next word respectively
    • Plus their selected text equivalents.
  • Ctrl-Home/Ctrl-End to move to the top/bottom of the document being edited.
This post is a description of what I installed & configured to achieve this.

Configuring a British PC keyboard

The first step was to tell OS X I was using a PC keyboard, specifically a British one. This is achieved through System Preferences->Keyboard->Input Sources.



Here new input sources can be added by clicking the '+'. I added 'British - PC'. Adding doesn't mean it will be used though; for that, also check the 'Show Input menu in menu bar' option. This adds a country flag and the name of the input source to the menu bar. Clicking on this allows the input source to be changed. If you swap between a PC keyboard and the iMac keyboard (which I do from time to time) this is an easy way to switch.



What all this gives you is the '"' and '@' keys in the right place. Otherwise they're transposed. Note: backslash and backquote remain transposed.

Windows' key combos

The second step was obtaining the Windows' key combos. This requires mapping the Windows' combos to the corresponding OS X combos whilst preventing the Windows' combos being interpreted as something else. After some searching, the preferred solution seemed to be a 3rd party program called KeyRemap4MacBook. According to various reviews it does the job well, but configuring it, especially creating your own mappings, is complicated; the former is down to the UI and the latter to the XML format. All these things are true, but once you've got used to it, like a lot of things, it's nowhere near as daunting as it first seems; and the documentation is very good too. Part of the motivation for this post is to record the configuration & steps for my benefit should I need to do it again.

KeyRemap4MacBook comes with a number of canned mappings. In addition to mapping across the board they can be limited to include or exclude a specific set of apps. In particular I make use of a set of pre-defined mappings from the 'For PC Users' section which won't be applied in VMs (generally running Windows, especially useful when running Windows 8 in Parallels from the Bootcamp partition) and terminals.

As I still use the Apple keyboard from time to time when I want to do very Apple-y stuff, I have the 'Don't remap Apple's keyboards' option enabled.

What I use

The canned mappings I use from 'For PC Users' section are:
  • Use PC Style Copy/Paste
  • Use PC Style Undo
  • Use PC Style Save
  • Use PC Style Find
These can easily be seen in KeyRemap4MacBook using the 'show enabled only' option (out of the many definitions):



With very little work this meets the majority of my needs. In addition to the 'For PC Users' and 'General' sections you may also notice the three re-mappings at the start. These are custom mappings I had to create. I'm not going to explain the XML format as this is covered by the documentation. Instead, here are my custom mappings.

<?xml version="1.0"?>
<root>
 <appdef>
  <appname>INTELLIJ</appname>
  <equal>com.jetbrains.intellij</equal>
 </appdef>

 <replacementdef>
  <replacementname>MY_IGNORE_APPS</replacementname>
  <replacementvalue>VIRTUALMACHINE, TERMINAL, REMOTEDESKTOPCONNECTION, VNC, INTELLIJ</replacementvalue>
 </replacementdef>

 <replacementdef>
  <replacementname>MY_IGNORE_APPS_APPENIDX</replacementname>
  <replacementvalue>(Except in Virtual Machine, Terminal, RDC, VNC and IntelliJ)</replacementvalue>
 </replacementdef>


 <item>
  <name>Use PC style alt-TAB for application switching</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.swap_alt-tab_and_cmd-tab</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <autogen>__KeyToKey__ KeyCode::TAB, ModifierFlag::OPTION_L, KeyCode::TAB, ModifierFlag::COMMAND_L</autogen>
 </item>

 <item>
  <name>Swap backslash and backquote for British PC keyboard</name>
  <identifier>private.swap_backslash_and_quote_for_britishpc</identifier>
  <autogen>__KeyToKey__ KeyCode::DANISH_DOLLAR, KeyCode::BACKQUOTE</autogen>
  <autogen>__KeyToKey__ KeyCode::BACKQUOTE, KeyCode::DANISH_DOLLAR</autogen>
 </item>

 <item>
  <name>Use PC Ctrl-Home/End to move to top/bottom of document</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.use_PC_ctrl-home/end</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <autogen>__KeyToKey__ KeyCode::HOME, ModifierFlag::CONTROL_L, KeyCode::CURSOR_UP, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::HOME, ModifierFlag::CONTROL_R, KeyCode::CURSOR_UP, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::END, ModifierFlag::CONTROL_L, KeyCode::CURSOR_DOWN, ModifierFlag::COMMAND_L</autogen>
  <autogen>__KeyToKey__ KeyCode::END, ModifierFlag::CONTROL_R, KeyCode::CURSOR_DOWN, ModifierFlag::COMMAND_L</autogen>
 </item>

</root>

I didn't want these mappings, other than swapping backslash and backquote, to be applied in various apps, i.e. VMs, VNC & RDC (where Windows is running anyway) and Terminal, where they interfere with bash. To enable this I used the <not> element giving a list of excluded apps, along with the <appendix> element to state this in the description.

Rather than copy the list of apps and the description into each item, I used KeyRemap4MacBook's replacement macro feature. There is a list of builtin apps that can be referred to, but I also looked at the XML file in the source that contains the 'For PC Users' mappings.

The _L & _R refer to keys which appear twice: on the left & right side of the keyboard.

The format allows multiple mappings to be grouped. These don't have to be similar but that's the intention, i.e. all the ctrl-home/end mappings are together. Each <autogen> entry is a separate mapping but they are enabled/disabled collectively.

The format isn't too bad. The weird thing from an XML perspective is the <autogen> element. Its content is the source combo followed by the combo to generate instead, separated by a comma. I think it would be easier to understand if this element were broken down into child elements, with say <from> and <to> elements.

This private.xml is also available as a GIST.

IntelliJ

IntelliJ complicates things slightly as it provides its own key-mapping functionality similar to that of KeyRemap4MacBook but solely for itself. This means that there can be a conflict with KeyRemap4MacBook.

I'm writing this a while after I originally implemented it. In fact part of the reason I'm writing this post at all is so I have a record of what's required. Since getting this working it looks like I've changed my IntelliJ Keymap (from Preferences). Originally it was set to 'Mac OS X' but is now set to 'Default'.

When it was set to 'Mac OS X' the KeyRemap4MacBook mappings worked well, except that Ctrl-Home/End wouldn't work. This is because that combination is mapped to something else. Additionally the 'Mac OS X' mappings don't provide support for Ctrl-Left/Right-Arrow for hopping back and forth over words. My initial solution to this was to modify (by taking a copy) the 'Mac OS X' mapping:
  • Change 'Move Caret to Next Word' from 'alt ->' to 'ctrl->'.
  • Change 'Move Caret to Previous Word' from 'alt <-' to 'ctrl <-'.
  • Change 'Move Caret to Next Word with Selection' from 'alt shift ->' to 'ctrl shift ->'.
  • Change 'Move Caret to Previous Word with Selection' from 'alt shift <-' to 'ctrl shift <-'.
  • Change 'Move Caret to Text End' from 'cmd end' to 'ctrl end'.
  • Change 'Move Caret to Text Start' from 'cmd home' to 'ctrl home'.
However, it seems that the 'Default' key mappings are as per Windows, but when KeyRemap4MacBook is running they all conflict. In fact I may have missed this completely when initially figuring this out.

Therefore the far easier solution is to select the 'Default' IntelliJ mapping and, using KeyRemap4MacBook, make it aware of IntelliJ and exclude it from key re-mapping as per the other applications. This is the purpose of the appdef section in private.xml. KeyRemap4MacBook doesn't need definitions for the other excluded apps as these are built-in.

The mappings are not perfect. IntelliJ is fine, but that is now down to IntelliJ's own mapping, having excluded it from KeyRemap4MacBook's. I still miss Ctrl-Left/Right-Arrow and Ctrl-Home/End in other apps, though hopefully that should just be a case of defining more mappings (see the sketch below), and the effectiveness of the Ctrl-Z (undo) mapping seems to vary.
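
For example, a hypothetical (and untested) mapping for PC-style Ctrl-Left/Right-Arrow word movement, following the same pattern as the Ctrl-Home/End item above and relying on OS X's Option-Arrow word movement, might look like:

 <item>
  <name>Use PC Ctrl-Left/Right-Arrow to move by word</name>
  <appendix>{{ MY_IGNORE_APPS_APPENIDX }}</appendix>
  <identifier>private.use_PC_ctrl-left/right</identifier>
  <not>{{ MY_IGNORE_APPS }}</not>
  <autogen>__KeyToKey__ KeyCode::CURSOR_LEFT, ModifierFlag::CONTROL_L, KeyCode::CURSOR_LEFT, ModifierFlag::OPTION_L</autogen>
  <autogen>__KeyToKey__ KeyCode::CURSOR_RIGHT, ModifierFlag::CONTROL_L, KeyCode::CURSOR_RIGHT, ModifierFlag::OPTION_L</autogen>
 </item>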



The Future Of Computing

Phil Nash from level of indirection

The future is already here! - it's just not very evenly distributed.

I have some ideas about what computing will be like in the future but it is composed mostly of pieces we already have - or have the promise of. At the centre of my vision is the evolution of the Post-PC device.

What is Post PC anyway?

Many people attribute this term to Steve Jobs, who certainly brought it to the mainstream in 2007, using it to describe iOS devices and how they would come to eclipse "traditional" PCs in sales and use. This is already coming to pass. But it was actually David Clark who coined the phrase, back in 1999. That article is really worth a read. You should go and read it now. Go on. I'll wait. (Actually I'll just carry on writing - but the appearance will be the same).

So while the Jobsian vision (initially, at least) refers to the reset in expectation, interaction and ease of use that iOS devices ushered in, Clark's original words encompass more - including Cloud Services, cashless payment systems, and most interestingly (to me) finer grained distribution of responsibilities.

It's that last one where I think the most opportunities are yet to play out.

For two or three decades we have obsessed over convergence. Traditional PC systems converged to a single device - the laptop. Post-PC devices have taken that to the next level - a single slab, fronted by a piece of glass that is both the display and primary input. These tiny devices also pack in cameras, extra sensors and even fingerprint scanners and replace what used to be dozens of separate devices. But they have also been born into a world where wireless communication technologies are ubiquitous and come in many forms. Many of their capabilities are distributed in "the cloud", or consist of sending things between devices or connecting wirelessly with additional "smart" peripherals such as cameras, fitness trackers, printers and other devices. They are intensely personal yet highly social. Autonomous yet democratised. Functions such as AirPlay and its counterparts reinforce the idea that these devices are not isolated computing silos. They are participants in a computing ecosystem that is distributed at many different levels. And all so seamlessly that entire demographics that were previously written off as "computer illiterate" are regularly using these devices. They are barely even considered "computers" anymore. The term has come to be associated with that clunky, finicky, bulky thing you used to struggle to get to do anything you want.

This new generation of devices, finally, "just works".

The NeXT Steps

So where does it go from here? Have we reached the end of the evolution of the personal computing device?

Not by a long shot! We're just getting warmed up!

We have just crossed the threshold from general-purpose computers being primarily for the focused use of businesses and enthusiasts to being something that everyone uses and carries with them everywhere. That in itself has been opening up possibilities that had been hitherto unseen or simply not feasible.

The degree to which these devices and their interconnections have embedded themselves into our lives already is quite breath-taking when you take a step back. While, admittedly, I'm a bit of an early adopter, none of the following is particularly extreme:

On a typical, weekday, morning I am awoken by music served as an alarm from my phone. I get up and go to begin my bathroom routine. Part of that routine involves stepping onto a set of scales that take my weight and fat mass and automatically send the figures, via wi-fi, to a cloud service that is immediately accessible to my phone, collated together with a number of other metrics that are tracked over time.

Once finished and dressed I leave the house and go to my car, which automatically unlocks itself due to the proximity of the key fob in my pocket. I get in and push a button and the car starts. As I start driving the media system in the car has automatically connected, via bluetooth, to my phone, which is also still in my pocket, and continues playing the podcast that I had previously been listening to. I drive to the station and park the car.

As I get out I put my bluetooth headphones on and, at the push of another button, they too have connected to my phone (still in my pocket) and the podcast resumes once again. I get on the train and get my laptop out to do some development work. It connects via a personal wi-fi network to my phone for an internet connection (which, when I pick up LTE, is faster than my home broadband was only a few years ago) - all the time it is still sending audio to my headphones. Later I get off the train and walk to my office. As I walk my steps are being counted by a device on my belt that intermittently sends this information on to my phone via Bluetooth LE, where it is sent to the cloud service that is collating my health related measurements - including heart rate and blood pressure. Along my journey something interesting and unexpected happens. I take out my phone and take a photo, then continue on. As I get near the office a reminder pops up that I had set to go off in that proximity. Eventually I get to my desk where I put my phone in a dock to charge because battery technology is still struggling to keep up with all these demands!

We're only just getting started, so it's not all as seamless as it could be yet, but the story I've just recounted is real and usually all "just works" without a hitch. I think, as time goes on, these sorts of experiences will become more reliable and encompass more things.

But that's the present - wasn't I going to be talking about the future? Well I apologise for burying the lede but it's important to remember how much of the future is already here (albeit not evenly distributed). And my vision is really an extension of the things already discussed. That may sound a little uninspiring - but remember that phenomenon of incremental advances suddenly creating whole new opportunities?

Evenly distributed

One of the criticisms often levelled at the current crop of Post-PC devices is that they are great for consumption, but less so for content creation - or "real work". Many contend that you still need a "real" PC for that. I don't think it's quite so black and white - but there do remain many tasks that are cumbersome to undertake with a tablet or smartphone. It won't always be that way, though. Although tablets with keyboards and mice, and hybrid operating systems, exist now - that's not the way of the future.

I believe that in the not too distant future touch-screens, keyboards, and other input devices will all be merely components of a distributed "system" that consists of both cloud services and local sharing of storage and processing. This system will scale seamlessly to the task at hand. Whether you need more computational power, a different input metaphor, or a different way to output you should be able to add what you need without missing a beat. Right now if your needs outgrow a tablet you have to switch to a whole different device (a laptop, say) - which may or may not sync over data you were working on - in this future you would just add the keyboard if you need it (more easily than now), add some extra processing units (you can do this now in certain limited ways), extra storage (again cloud services already play a role here - as does card based storage in some tablets) or even an extra display (technologies like AirPlay are showing the promise of this).

Each of these components would be what we call "smart". That is they are computers in their own right with enough processing power and sensors to be aware of their environment and how they connect and interact. Take a display, for example. The display itself would contain accelerometers and gyroscopes so it is aware of its orientation in the real world and whether it is being moved - just like your tablet or smartphone does now. It would also know when another display is nearby, and if so how near and in what direction. Of course the display would be a touch-screen. Imagine you have an object on one display. You could start up a new display, place it next to the first one, touch the object and "flick" it over to the second display. All without any need to configure anything.

Now this system, distributed as it is, would need a centralised "brain". It must scale down to a single device that can be used in isolation. It would make sense for this to be what we currently think of as a smartphone. We would need to carry them with us everywhere and use them for communication, so it would be equipped with audio input and output and cameras - just as our current smartphones are. In fact they needn't be much different to the smartphones we have now. They would be more powerful - but needn't be much more powerful as they can scale up the processing power as needed with additional devices and/or cloud services. And with all data synced to cloud services an alternate device could be picked up and made into your primary hub for the day as necessary.

Everyday revisited

Most of the pieces are already there. There are some challenges - mostly business-oriented rather than technical - but the trend is already in this direction. Yet it all seems very incremental. To see how transformative it would be consider a re-run of my story earlier, reworked to showcase these future technologies (and a few others to spice it up a bit).

It's a typical, weekday, morning. I am awoken by music serving as an alarm on my primary computing device (which will have a really cool name). I get up and go to begin my bathroom routine. Part of that routine involves having various health metrics sampled and sent to a cloud service. Another part is that my bathroom mirror presents me with some curated information pertinent to the day ahead - the current weather, traffic conditions and any early appointments I have set. Perhaps also the day's news headlines.

Once finished and dressed I leave the house and go to my car, which automatically unlocks itself due to the proximity of the computing device in my pocket. I get in and push a button and the car starts. As I start driving the media system in the car has automatically connected to my computing device, which is still in my pocket, and continues playing the podcast that I had previously been listening to. I drive to the station and park the car. My computing device knows that I have just parked in a car park and automatically communicates with the car park server and pays for my day's stay.

Just before I get out I ask the device to switch its audio over to the earpieces embedded in my ears and the podcast resumes once again. I get on the train and get my tablet out to do some development work - which is, of course, already online. I might also fish out a keyboard - which automatically connects as it comes into proximity with the tablet. Later I get off the train and walk to my office. As I walk my steps are being counted by the peripheral device on my wrist where it is collated along with my other health measurements and sent to the cloud. Along my journey something interesting and unexpected happens. I bring out my device to take a photo. But I really want a good quality picture, so I quickly fish out a lens with a full size sensor from my bag, which wirelessly connects to my device and instantly beefs up the optics to professional standards. I take a great picture then continue on. As I get near the office a reminder pops up on my wrist that I had set to go off in that vicinity. Eventually I get to my desk where I put my device on the wireless charging pad as it connects to my keyboard and large displays and I continue the work I started on the train.

The task at hand

One consequence of this more distributed way of working is that the single-(main-)tasking metaphor that the iPhone doggedly champions is allowed to survive while still allowing multiple applications to run and be interactive. The metaphor becomes "one app per device". Each device is typically running one interactive application at a time - for some devices it is the same app at any time (a keyboard, for example). For a more general purpose device, such as a tablet, it may run one app, while a different app runs on the "phone" beside it. But the devices can see each other and documents and other data may be shared between them - probably using real-world metaphors like the "flick" mentioned earlier.

Conversely at any one time two or more devices may appear to be running a portion of the same app - but in truth they will be running their own instances - with tight integration between them.

My vision of the future is one of heterogeneous, smart devices - some specialised, some generalised - participating in the fabric of a system that surrounds us - and which tends to recede into our surroundings. The seeds are there - and they're growing. I think the next decade is going to be an exciting and transformative time in technology - perhaps even more so than the last!

Postscript...

I had wanted to publish this post by New Year's Eve (2013) but didn't get time to finish up by then. I'm pushing it now, largely un-edited, to try and keep it relatively seasonal (but I may come back and edit more aggressively yet - it's much too rambling for my liking).

As I was finishing I saw a blog post by Dave Addey - which he actually posted back in September - covering very similar material. I haven't had a chance to think how to work it into this post organically (yet) but didn't want to miss the opportunity to link to it - so I'll do that explicitly here. Go read it now. Go on, I'll wait.

Review of Overload "editorials"

Frances Buontempo from BuontempoConsulting

I have run Charles Stross' code over my Overload "editorials" in the hope of generating some kind of end of year review. Any favourite phrases anyone?
 
---

the incoming information and produced metres of punched cards, for example musing on the name to the death of Ceefax. Started in and being replaced by keyboards. On Wednesday I got 0. On Wednesday I got 258 (plus some on accounts.

the perpetual call. There is the problem. Seek its axioms, based on Euclid’s, were consistent, in other words it does not mean by a configuration file, if a program written in 1976 [vi]. Many editors allow syntax high-lighting now, adding an.

the words 'Surreal' 'Mutation' standing out proudly. Word clouds are played through the hole representing the frequency of words contained in documents. A more traditional approach would present a histogram, with bars showing how many times an item appears. Wikipedia suggests.

the creation of the calculus gave ways to form the language, though paused for Turing. Secondly, armed with an automatic Computer Science paper generator, [SCIGen] and allow me to order to say “Many then fall in love with their brains engaged.

the system within the system, but when it’s dead. Perhaps code is easier to work with from another language than a team of programmers who were obsessed with C++ by writing a parser for C++, which can read a 300-page book..

the mouse (1968) and ways of solving problems to emerge. Introduction of the calculus gave ways of us the fore [Matthews]. Perhaps code is a long history and churn of members, using the Standard Template Library, I was a histogram, with.

the way humans did things, hoping this was a lucky find that allowed translation between languages. Can you will have to press the trendy face of ways to make unchanged code behave in which it has also spawned excellent science fiction.

the real objects located in space numerically, ranging from test and refactor and then compute the same sequence as M. Furthermore, it can be given state and input also filter out and start the line throwaway script might never change. I.

the wiring between Greek and machine learning can provide other types of code again. Electronic wizards can be given the instructions for a four year stint. Allow me to order search results by weight. Feel free to back me up on.

the natural numbers, there are two. The next state. It is trained to appreciate beauty. It is of course, easier to program if you can give rules to mention its powers of intimidation." [DASKeyboard] Research into the requirements or trying to.

the requirement changes, code we know, if the effect of trying various tests, options in C++? C++ is provable or falsifiable. This would allow all of mathematics to sound motifs or short tunes. The authors notice that a musical background made.

the debate you come down on. He then suggests "foldering may miss thereby allowing word-processing. They were also used as M. Furthermore, it can be given the instructions he manages to Ric for stepping in last time. I do often a.

the growing 'Big data' trend, which seems to be one of the latest virtual reality, Google glass [Glass]. A computer interface that allows editing changes the game. Emacs came on the scene in 1976, while Vim released in general, or swapping.

the “Electronic Numerical Integrator and slowly turned into something substantive. It's practically the opposite of engineering. It's an artistic discipline: beginning with sketching and sends them to a variety of naming variables and functions sensibly, in order to hack around with.

the prevalence of doing something. If it works, it easier than an answer. We have sometimes taken as “You have decayed away. Imagine that one day. A variety of ways of editing inputs for computers, so many technical books do you.

the idea of an editorial. How do you own that weigh less than this? How many books I own. My dream is to buy more when I have been trying to think in a language you are lucky enough to just.

the ACCU conference do not count. So, how it well as a strange word. When this wouldn't be necessary. The live speech has grown from Google’s machine learning. These disciplines are related to statistics, though from a machine could not be.

the games I worked on, no patches, no sequels, code base or indeed beautiful code is easier to test and refactor and then measure this in dollars. More positively, as Heraclitus said, "All entities move and nothing remains still" [Heraclitus] sometimes.

the hook. If any readers wish to continue and every Turing computable function.” [Turing_completeness] That was helpful, wasn’t it? Equivalently, it can simulate a universal language and useless.” As stated at the outset, we might need to learn to think deeply.

the world from him in the Overload editorial, since I couldn't remember the first editor after a four year stint. Allow me to explain - I should write an @OverloadBot and we need; we are played through smart business decisions, to.

Using IntelliJ, Adobe ActionScript and AIR SDK to create & package iOS 7 apps.

Pete Barber from C#, C++, Windows & other ramblings

Just a quick post. Lately I've been learning ActionScript. Having seen how easy it is to get an ActionScript project for Flash Player running on Android using Adobe AIR, I wanted to do the same for my iPhone. Getting stuff running on the AIR emulator and on the iOS simulator (under OS X) with AIR was pretty easy. In my case this was using IntelliJ as the IDE (rather than Flash Builder) coupled with the Flex 4.6 SDK. The real fun started when I started to package my application for submission to the App Store, in particular creating the App icons.

The version of the AIR SDK that comes with the Flex 4.6 SDK is 3.1. However this isn't aware of the new iOS 7 App icons. It would seem a simple matter of adding additional entries to the Application Descriptor file, i.e. to support the 152x152 icon just add

<image152x152>icon152.png</image152x152>

to the <icon> section. Unfortunately the schema doesn't know about this element, so it is rejected as invalid and you end up with the following error:

error 103: application.icon.image152x152 is an unexpected element/attribute

To fix this, the first step is to download & install the latest version of the AIR SDK, which is 3.9 (4.0 beta aside). This does not mean download & install the latest version of the Flex SDK, as that contains an older version of the AIR SDK. Also, as this needs installing on top of the one present in the existing Flex SDK installation, do not download the installer version; instead use the zip (Windows) or tbz2 (OS X). The following link takes you to both: http://www.adobe.com/devnet/air/air-sdk-download.html

Then extract these within the Flex SDK (you might want to take a copy of this first but if things go wrong you can always re-download it). The easiest way is to just copy/move the archive to the Flex SDK directory and extract the files there which will overwrite the existing ones.

NOTE: Up to this point the same thing occurred on both Windows & OS X. The following steps only worked on OS X. In particular, on Windows updating the schema version in the Application Descriptor didn't work, and when I reverted back to 3.1 (& removed support for iOS 7 App Icons) packaging the app was a problem, as the AIR SDK seemed to be missing various binaries needed to create the ARM builds. I haven't pursued this further as I was working on OS X at this point.

In theory everything should work now. However if you proceed to package the app it will still give the same 103 error. This is because the schema version number in the Application Descriptor needs updating. Most likely the line will be:

<application xmlns="http://ns.adobe.com/air/application/3.1">

the 3.1 needs changing to 3.9.
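
Putting the two changes together, a minimal sketch of the relevant parts of an updated descriptor (the icon file names are placeholders; image120x120 is the other new iOS 7 size) would be:

<application xmlns="http://ns.adobe.com/air/application/3.9">
 <!-- ... -->
 <icon>
  <image57x57>icon57.png</image57x57>
  <image120x120>icon120.png</image120x120>
  <image152x152>icon152.png</image152x152>
 </icon>
 <!-- ... -->
</application>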

This may not fix the problem though. If you're using IntelliJ (sorry, I don't know about Flash Builder) and have selected the 'Generated' option for the Application Descriptor, it appears that by default IntelliJ (AIR?) creates this with a version of 3.1. In this case you'll need to stop using this option. Instead choose the 'Custom template' option and either create your own or have IntelliJ (AIR?) generate one for you. If you choose the latter, IntelliJ offers a drop-down to specify the version. However, it only lists 3.1 to 3.8, so this will need manually changing to 3.9.


At this point it should be possible to successfully package an iOS app with iOS 7 App Icon support.