Release news: Hungry Bunny & KeyChainItemCRUDKit

Pete Barber from C#, C++, Windows & other ramblings

Not a technical post today, just a bit of news on the things I've been working on.

Firstly, my latest SpriteKit game written in Swift is now available on the App Store. It's called Hungry Bunny and is effectively an endless runner/skill test. It's free with ads.



Secondly, now that Hungry Bunny is complete I've returned to working on another project. This is nowhere near complete but I got to the point where I needed to securely store an OAuth2 token on iOS. I came across the Keychain API. However, the API for this was long-winded and I only wanted to use it in a simple manner. Therefore I created a Swift framework that provides CRUD access to it, along with a higher-level interface where any type conforming to NSCoding can be saved, loaded & deleted.

This is available on GitHub and also as my first ever CocoaPod. All the docs are in the README, plus there's an example iOS program (a Single View App) and the unit tests.

Eulogy for my Dad

Frances Buontempo from BuontempoConsulting

My Dad loved many things and it therefore falls to me to mention mathematics. Non-geeks tend to say things like “He had a gift for that,” as though geeks know a magic incantation or are “naturally clever”. My Dad was clever. However, one of my lecturers at University frequently reminded me that “Genius is 1% inspiration and 99% perspiration.” David loved maths and was willing to spend many hours learning more, usually in order to teach his students or to share the latest puzzle he was thinking about with anyone willing to listen. Recently the puzzles had tended to be the Sunday Times Puzzler. After showing him how to program in Python he submitted a few that were accepted. You can still see them on the internet if you search. He has left ripples in the ether.
He cared about sharing results and ideas, and instilled in me the joy of someone moving from disbelief to confusion to understanding and conviction. The world is often a bigger and more amazing place than we first assume. I recall him getting a paper published he had written with some students. Not only did he credit the students, he also persuaded the publishers to include the negative results – things they tried that went wrong. Many academic papers avoid doing this, but he felt it is important to stop others from going down the same blind alleys and to learn from your mistakes.
I know he inspired many people. In my brief career as a teacher I met many who had been his students at Christchurch and they always spoke highly of him. He had a knack for explaining things and making sure you had understood. He was also willing to listen to me trying to explain things to him – including how to code in python and what I was trying to do with my “new work” chapter in my PhD thesis. He was willing to ask “Why?” and allowed me to ask as well. I have never grown out of this and that leaves me with an unsatisfiable curiosity. That makes it OK to ask “Why?” about his unexpected death. Not being a mathematical question, we are unlikely to get a clear and compelling answer, but it’s ok to ask.
The day after he died, I saw a nine digit number in large neon on the top of a building-front. It had all the digits except the number “1” – I forget which was repeated. I have no idea what the number meant, but I know if he’d been there he would have noticed it as well. Whenever I notice symmetries in tiles or paving slabs, or broken symmetries, curious numbers or patterns I will think of him. And have been doing for years. His excitement and curiosity about mathematics could be infectious if you were prone to it. Some people might say “Stop being a geek,” others just raise an eyebrow. Once in a while you’ll find someone else who’s noticed it too or looks when you point and wonders with you at the patterns and meaning that point to something greater in an otherwise chaotic seeming world.
I have no idea why the digit 1 was missing from the number on the building, let alone what the number was trying to convey, but having spotted a surprising number of physics books on the bookshelves of a man who claimed physics is just watered-down maths, I am reminded of a quote attributed to Feynman:
"I would rather have questions that can’t be answered than answers that can’t be questioned."
It’s always OK to ask “Why?” We may never really know but we may discover beautiful and interesting things on the way. Or perhaps I should end with another actual Feynman quote:

“The most important thing I found out from [my father] is that if you asked any question and pursued it deeply enough, then at the end there was a glorious discovery of a general and beautiful kind.”


Debian Release Code Names – Aide Mémoire

Tim Pizey from Tim Pizey

The name series for Debian releases is taken from characters in the Pixar/Disney film Toy Story.

Sid

The unstable release is always called Sid as the character in the film took delight in breaking his toys.

A backronym: Still In Development.

Stretch

The pending release is always called testing, and will already have been given its codename. At the time of writing the testing release is Stretch.

Jessie

Release 8.0
2015/04/26

Jessie is the current stable release.

After a considerable while a release migrates from testing to stable; it then becomes the Current Stable release, and the previous version moves to the head of the list of Obsolete Stable releases.

Wheezy

Release 7.0
2013/05/04

The current head of the list of Obsolete Stable releases.

Squeeze

Release 6.0
2011/02/06

Obsolete Stable release.

Lenny

Release 5.0
2009/02/14

Obsolete Stable release.

Etch

Release 4.0
2007/04/08

Obsolete Stable release.

Sarge

Release 3.1
2005/06/06

Obsolete Stable release.

Woody

Release 3.0
2002/07/19

Obsolete Stable release.

Potato

Release 2.2
2000/08/15

Obsolete Stable release.

Slink

Release 2.1
1999/03/09

Obsolete Stable release.

Hamm

Release 2.0
1998/07/24

Obsolete Stable release.

Bo

Release 1.3
1997/06/05

Obsolete Stable release.

Rex

Release 1.2
1996/12/12

Obsolete Stable release.

Buzz

Release 1.1
1996/06/17

Obsolete Stable release.

Earlier releases did not have code names.

Testing legacy code by adding singletons

Frances Buontempo from BuontempoConsulting

This is not a good idea: Michael Feathers says "STOP IT NOW"

Testing legacy code

Many people have read Michael Feathers' excellent book, "Working Effectively with Legacy Code", including people on my team. Some people like mocks. Watch this space - Overload 127 will contain an article asking if mocks are always the right thing to use.

My team has lots of legacy code, that is, code without tests. We want to get it under test and I want these tests to run on our Jenkins box. I want any quick-running tests to run on each check-in, email us if the build breaks, and have whoever broke it fix it. A girl can dream.

Stop it - you're doing it wrong


We seem to be developing a "pattern" whereby we introduce singletons in order to make our code testable. Yes, I just said introduce singletons in order to make the code testable.
I think this is happening because "we" (well, they) want to use gmock because it's brilliant. I could be wrong. Perhaps it doesn't matter why it is happening; we just need to stop this and do something different.

Why does gmock make you write singletons?

Let's look at an example, with the names changed to protect the guilty.
Suppose you have some code like this (C++).

class Asset
{
public:
    //miles and miles of public functions and comments
    double Value(std::string logMessage, double someIrrelevantNumberToLog);
};

double Asset::Value(std::string logMessage, double someIrrelevantNumberToLog)
{
    ENTERPRISE_INHOUSE_LOG_FRAMEWORK_THAT_PULLS_IN_THE_WORLD(info, logMessage, someIrrelevantNumberToLog);

    double value = 0.0;
    if (isSpot)
        value = spotValue(m_notional, m_exchangeRate);
    else
        value = futureValue(m_notional, m_exchangeRate);
    return value;

}

spotValue and futureValue are C functions that may or may not call COBOL or FORTRAN or similar.

We have ended up with some tests. Yay! Which use singletons. Boo!!
(Hope you like the comment being in red - as a warning rather than the odd convention of making them green in many IDEs).


No, but, HOW?

In order to test this, and armed with gmock, we have something like mockSpotValue.h (namespaces and include guards left as an exercise for the reader, for brevity):


#include <gmock/gmock.h>

class MockSpotValue
{
public:
    MOCK_CONST_METHOD2(spotValue, double(double, double));

    void reset()
    {
        ::testing::Mock::VerifyAndClear(this);
    }
};

/**
 * Singleton
 */
MockSpotValue & mockSpotValue();


Let's not point out this isn't a singleton. I'll leave the "mockSpotValue" instance create factory builder method as an exercise for the reader too. Making comments in red reminds me of being a teacher. It's the future. Or spot on. Depending on a boolean.

Now we use a linker seam to provide our very own spotValue, which is what the code under test ends up calling when it runs in a test on a dev box.

double spotValue(double x, double y)
{
    return mockSpotValue().spotValue(x, y);
}



And where's the test(s)?

Ah. Tests. Yes, having done this we should write some. Or maybe just one for brevity.

TEST_F(ValueTest, testGetSpotValueWithZeroNotional)
{
  MockSpotValue & valueApi = mockSpotValue();
  valueApi.reset();
  Asset asset;
  asset.makeSpot(); //or something mad like that

  EXPECT_CALL(valueApi, spotValue(_, _)).WillOnce(Return(42));
  EXPECT_THAT(asset.Value("some log message", 0.0), DoubleEq(42.0));
}
I have simplified this. In order to get something like this Asset into a test we did some things with a sprout. This may require another blog post.

BUT that's a B(ad) U(nit) T(est)

How do we know this is a bad test? Because we have seen the singleton? Even without that, the name smells: "testGetSpotValueWithZeroNotional".
I have seen worse. I saw one called "testDefaultIsValid" which asserted that a thing constructed with defaults IS NOT valid. I digress.
So, testGetSpotValueWithZeroNotional. What are we testing? Can we make this test name clearly express what it tests?

The best I can come up with is testThatSpotAssetValueReturnsTheValueIToldTheMockToReturn or more simply
testThatGMockDoesWhatItIsSupposedToCosYouCannotTrustThesePeople

Help

I like that we are trying to get tests round legacy code. I just have a few qualms about how we are doing this. Please comment with suggestions on how to test this better.
I feel like banning mocks until we have written a few characterisation tests. At least there will be fewer singletons that way. Who ever heard of adding singletons in order to test code?
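For what it's worth, here is a minimal sketch of the kind of characterisation test I mean - my own illustration, not code from our repo. It links in the real spotValue (no mocks, no singletons) and simply pins down whatever the legacy code returns today; the Asset.h header name and the pinned value are invented for the example.

#include <gtest/gtest.h>

#include "Asset.h" // hypothetical header for the legacy Asset class above

// Characterisation test: run the real code, record its current behaviour,
// and protect that behaviour while we refactor. No mocks, no singletons.
TEST(AssetCharacterisation, spotValueOfDefaultConstructedAssetIsUnchanged)
{
    Asset asset;
    asset.makeSpot(); // same invented setup as the mock-based test above

    const double value = asset.Value("characterisation", 0.0);

    // 123.45 is a placeholder: whatever the legacy code actually produced
    // the first time this test ran is the value that gets pinned down here.
    EXPECT_NEAR(123.45, value, 1e-9);
}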




Eigenfaces FTW or the "Zebra/non-zebra decision boundary."

Frances Buontempo from BuontempoConsulting

Yesterday I attended the Karen Spärck Jones lecture at the BCS in London. Dr Cordelia Schmid talked about computer vision, giving an overview of its history through to the current state of the art. This is a tall order to fit into an hour or so.

Let's see if I can summarise what she covered.

Still pictures and moving pictures need different techniques. For still pictures, we start with attempting to recognise objects or classes of objects. For moving pictures we might be spotting actions, as well as objects; maybe tuning a stringed instrument or celebrating a birthday. For still pictures, spotting a known chair in pictures is slightly easier than getting a program to spot any chair in pictures. How do you generalise the definition of chair anyway?

For the simpler case of a specific chair, or other object, you still need to deal with problems such as different viewpoints, or different scales. The pixels of a bridge/chair/object close up will be completely different to the same bridge further away, or at a slightly different angle. Techniques started with edge detection, then moved on to projective invariants (and geometric and photometric invariants - light levels affect the pixel values). I regarded this as akin to the difference between bitmaps and scalable vector graphics.

A milestone in the move away from edge detection to feature selection came with "Eigenfaces" - see Turk and Pentland. This uses principal component analysis. In essence you find the line of best fit through the points, plotted in n-dimensional space if you have n features. This is the first eigenvector. It's a vector, as it has direction. It's "eigen" as it is peculiar, singular or *characteristic* - etymology slightly uncertain. If you project the data onto this, you will have lost lots of information. You then find a perpendicular line - the 2nd best fit line - and continue until you've captured enough information. This allows you to summarise datasets and is sometimes known as a feature reduction technique. Have you ever wondered how facebook recognises faces? Or how football programmes track how far a footballer has run? Actually the latter is more moving pictures, so I am ahead of myself.
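As a toy illustration of that first step (mine, not from the lecture), here is the first principal component of a handful of two-dimensional points, found from the covariance matrix. The data and names are made up for the example, and a real library would do all of this for you.

#include <cmath>
#include <iostream>
#include <vector>

struct Point { double x, y; };

int main()
{
    // Made-up 2-D data that roughly follows a line.
    std::vector<Point> pts = { {1.0, 1.1}, {2.0, 1.9}, {3.0, 3.2},
                               {4.0, 3.8}, {5.0, 5.1} };

    // Centre the data.
    double mx = 0.0, my = 0.0;
    for (const Point& p : pts) { mx += p.x; my += p.y; }
    mx /= pts.size(); my /= pts.size();

    // 2x2 covariance matrix [[sxx, sxy], [sxy, syy]].
    double sxx = 0.0, sxy = 0.0, syy = 0.0;
    for (const Point& p : pts)
    {
        const double dx = p.x - mx, dy = p.y - my;
        sxx += dx * dx; sxy += dx * dy; syy += dy * dy;
    }
    const double n = pts.size() - 1.0;
    sxx /= n; sxy /= n; syy /= n;

    // Largest eigenvalue of a symmetric 2x2 matrix, in closed form.
    const double halfTrace = (sxx + syy) / 2.0;
    const double halfDiff  = (sxx - syy) / 2.0;
    const double lambda = halfTrace + std::sqrt(halfDiff * halfDiff + sxy * sxy);

    // An eigenvector for lambda: solve (A - lambda*I)v = 0.
    double vx = sxy, vy = lambda - sxx;
    if (std::fabs(sxy) < 1e-12) { vx = (sxx >= syy) ? 1.0 : 0.0; vy = 1.0 - vx; }
    const double len = std::sqrt(vx * vx + vy * vy);
    vx /= len; vy /= len;

    std::cout << "first principal component: (" << vx << ", " << vy << ")\n";

    // Projecting each centred point onto it gives a one-number summary
    // of that point - the "feature reduction" mentioned above.
    for (const Point& p : pts)
        std::cout << (p.x - mx) * vx + (p.y - my) * vy << "\n";
}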

These approaches look at the global scale - the whole picture. Next came local greyscale invariants, using a voting system to spot things. This can deal with photometric problems - varying light levels. Next we have SIFT - scale-invariant feature transform.

Mention was then made of wavelet filters and boosting feature selection, trained on positive and negative examples, such as pictures with a given object, say a car, and pictures without the object. The code is in OpenCV. I wonder if this is similar to AdaBoost.

Mention was then made of histograms of orientation - see Dalal and Triggs. This is related to support vector machines (SVMs), which find a hyperplane between positive and negative examples. Some example still pictures were shown wherein this technique could be used to detect a, and I quote, "Zebra/non-zebra decision boundary." This may not seem like a day-to-day problem many of us face, but it made the important point that you need training data near the boundary - for example other animals with stripes, and other things with a similar profile, like a motorbike. In a more general setting I was thinking about flushing out edge cases in unit tests. The importance of a good set of representative training data was stressed - you need more than just edge cases, you want many cases away from the edges too. This also applies to automated testing. But I digress.

Finally we move on to the current state of the art - convolutional neural networks (CNNs), which I have not met before. They find "high-dimensional aggregated descriptors" - they have a huge number of nodes and several layers and require some serious computing power - GPUs etc. As always there is a trade-off between speed and accuracy. I presume the hand-tuned network may be incomprehensible afterwards. I have worked on "feature extraction" from feed-forward neural networks before, which represents a trained network as a decision tree so a human can understand what the program has discovered. I presume for CNNs this is neither possible nor desirable. It just needs to get the job done and find Wally^H^H^H^H^H zebras. I previously mentioned using Python to find Wally, on El Reg. Aren't computers amazing?

Could a machine automatically tag things in a still picture? "Dog 1: Terror", "Man: John Smith". I wonder if we end up with CCTV automatically sending out Robocop to arrest people. Big brother is watching you and figuring out what you're doing.

This leads to the action recognition mentioned at the start. Having got to a point where we can tag things in a still picture, can we set the machines loose to do "weak supervised learning" - find an interesting thing in this video? We were shown examples of a program picking out a bird or person etc. moving in a video. Sometimes it worked, sometimes it didn't. Supervised learning involves giving training examples as input and getting the trained algo to find the same things in other inputs. For moving pictures, describing the data - giving positive and negative examples - would take hours. Would you go through frame by frame and label features? It would take far too long. Instead let it learn as it goes, setting it off with a few clues - here's a robin, is there one in this movie? Or spot and label a moving thing - which in one case happened to be a car moving very quickly, so seeming to get much smaller - it didn't find that. It seems slow movements are easier to track than fast jerky ones. Though an algo did manage to draw a rectangle around a cat rolling about in another video. The two main techniques involved were dense trajectory features (Wang) and CNN features for optical flow (Simonyan). These made the front page in the last year or so.

A compelling throwaway comment at the end was that hand-crafted models are NOT machine learning (ML). Most ML I have attempted before has left me to choose some parameters - how many iterations, how fast to move towards a solution, how many layers in my neural network. The machine has learnt nothing - it just did what it was told. True ML would let the machine find its own parameters. Of course, I have seen a few people trying to do this. It's all very exciting.

Somebody asked, "How come I don't see any of this in my day to day life?" I presume the usual - this is all so academic. Pay attention at the back, I say...

  • Have you ever been issued with an automatic speeding ticket? How did it find you?
  • Have you ever uploaded a picture to facebook and found little boxes around faces (and the odd random tree, but what do you expect?) 


My Dad once asked me how on earth the sports program he was watching could tell him how far a specific footballer had run in the course of a football match. This involves image recognition, including optical flow - tracking an individual player over the course of a game, from various different angles - so it captures many of the specific problems mentioned above. Unless they just use a pedometer.

Fascinating stuff. I wonder if the machines could spot things we haven't spotted. For example, speckles or shadows in medical scans or even x-ray machines at passport control/baggage checks, that people might miss. Or imagine facebook looked at your holiday snaps and sent you an advert for a clinic dealing with skin cancer, having spotted the stirrings of a carcinoma in your holiday tan. Would you want this?

Further extensions include pairing up audio information, so we can find YouTube videos of tuning a guitar - made much easier if the spoken commentary says "tuning" and "guitar" as well as just having the pictures to go on. Combine this with smell and haptics and the machines will soon be writing their own drivel all over the internet. Welcome, Skynet.


Drawing into bitmaps and saving as a PNG in Swift on OS X

Pete Barber from C#, C++, Windows & other ramblings

Not an in-depth post today. For a small iOS Swift/SpriteKit game I'm writing for fun I wanted a very basic grass sprite that could be scrolled, to create a parallax effect. This amounts to an 800x400 bitmap containing sequential isosceles triangles with 40 pixel bases and random heights (of up to 400 pixels), coloured using a lawn green colour.

Initially I created an SKShapeNode and drew the triangles described above into it, but when scrolling the redrawing of these hurt performance, especially when running on the iOS Simulator, hence the desire to use a sprite.

I had a go at creating these with Photoshop. Whilst switching to a sprite improved performance the look of the triangles drawn by hand wasn't as good as the randomly generated ones. Therefore I thought I'd generate the sprite.

It wasn't really practical to do this on iOS as the file was needed in Xcode so I thought I'd try experimenting with a command line OS X (Cocoa) program in Swift. A GUI would possibly be nice to preview the results (and re-generate if needed) and to select the save-to file location but this solution sufficed.

I'd not done any non-iOS Swift development and never generated PNGs, so various amounts of Googling and StackOverflow-ing were needed. Whilst the results of these searches were very helpful I didn't come across anything showing a complete program to create a bitmap, draw into it and then save it, so the finished program is presented below. It's also available as a gist.


import Cocoa

private func saveAsPNGWithName(fileName: String, bitMap: NSBitmapImageRep) -> Bool
{
    let props: [NSObject:AnyObject] = [:]
    let imageData = bitMap.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: props)

    return imageData!.writeToFile(fileName, atomically: false)
}

private func drawGrassIntoBitmap(bitmap: NSBitmapImageRep)
{
    var ctx = NSGraphicsContext(bitmapImageRep: bitmap)

    NSGraphicsContext.setCurrentContext(ctx)

    NSColor(red: 124 / 255, green: 252 / 255, blue: 0, alpha: 1.0).set()

    let path = NSBezierPath()

    path.moveToPoint(NSPoint(x: 0, y: 0))

    for i in stride(from: 0, through: SIZE.width, by: 40)
    {
        path.lineToPoint(NSPoint(x: CGFloat(i + 20), y: CGFloat(arc4random_uniform(400))))
        path.lineToPoint(NSPoint(x: i + 40, y: 0))
    }

    path.stroke()
    path.fill()
}

let SIZE = CGSize(width: 800, height: 400)

if Process.arguments.count != 2
{
    println("usage: grass <file>")
    exit(1)
}

let grass = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(SIZE.width), pixelsHigh: Int(SIZE.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSDeviceRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)

drawGrassIntoBitmap(grass!)
saveAsPNGWithName(Process.arguments[1], grass!)

Naming is hard – or is it?

Phil Nash from level of indirection

Following Peter Hilton's excellent ACCU talk, at last week's conference in Bristol, "How to name things - the hardest problem in programming", a few of us were discussing some of the points raised - and some not raised.

He had discussed identifier length without any mention of Uncle Bob's guideline, whereby the length of a variable name should be proportional to its scope (i.e. large or global scopes need longer, descriptive, names whereas in smaller, local, scopes shorter, more concise - even single letter - names are appropriate). This seemed all the more of an omission given that he later referenced the book, Clean Code.

It wasn't that Peter disagreed with Uncle Bob (who doesn't, half the time?) that surprised me but that he didn't even mention it in passing. I thought it was fairly well known. Actually I double checked and it is not discussed fully in the book, which only says, "The length of a name should correspond to the size of its scope". This is expanded considerably in Clean Coders (video) episode 2. Also, of course, this is not really "Uncle Bob's rule". Kevlin Henney recalls that he first heard of it in the 90s and it may well have been kicking around before that. Bob calls it "The Scope Rule".
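To recap the guideline itself, here's a small sketch - my own wording and invented names, not an example from Clean Code: a name at namespace scope, read far from where it is defined, gets spelt out in full, while a name that lives for a line or two can be a single letter.

	#include <iostream>
	#include <vector>

	// Wide (namespace) scope, used far from its definition: spell it out.
	const double defaultRiskFreeRateAsFraction = 0.0125;

	double averageExcessReturn(const std::vector<double>& returns)
	{
		// Tiny local scope: terse names are unambiguous here.
		double sum = 0.0;
		for (double r : returns)      // 'r' lives for a single line
			sum += r - defaultRiskFreeRateAsFraction;
		return returns.empty() ? 0.0 : sum / returns.size();
	}

	int main()
	{
		std::cout << averageExcessReturn({0.02, 0.03, 0.01}) << "\n";
	}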

Kevlin was one of those discussing this afterwards. After initially toying with The Scope Rule in the 90s he came to consider it not particularly useful. This, too, surprised me as I had found it worked quite well for me. Or so I thought. Further discussion with Kevlin led to the conclusion that I had read more of my own interpretation into The Scope Rule than I had realised! So I started musing over exactly what my interpretation was.

A transparent reference

As it happened, a concept key to clarifying matters came from another great talk at the same conference just the day before: Didier Verna's "Referential transparency is overrated". In this talk Didier discussed various ways that useful idioms in Lisp required violating referential transparency. At one point he explained how "hidden" variables may be introduced by one macro that were then used by another. This worked because the thing being referred to was named very generically - so both macros agreed on the name. He drew on the term "Anaphora" from linguistics, which is where one part of an expression - usually a pronoun - stands in for a more specialised part - such as a person's name - introduced earlier in the context. For example, just now I used the word "he" to refer to Didier Verna. It was clear who I was talking about because his was the most recently specified name in the current context. In fact I used this anaphoric term a couple of times - and many, many times in this article. If I had had to fully qualify "Didier Verna" every time, writing would quickly become very cumbersome. Anaphora is used very frequently in natural language - usually to good effect.

Scope Creep

I believe this is key to understanding why and when shorter identifiers can be used too. When I had been talking with Kevlin it had become apparent that we had different interpretations of the word, "scope". I realised that I had subconsciously expanded the specific technical meaning to include a more general idea of "context" - including the anaphoric context.

To make this clear I might write some (C++) code like this:

	std::string s = getNextString();
	if( !s.empty() )
		std::cout << "received string: " << s << std::endl;

Many corporate, or personal, coding standards would balk at such practice! Single character identifiers? Way to obfuscate the code!

But how has it obfuscated anything? Look at it as an anaphoric entity. In this case the variable name 's' is anaphoric. We know it is "the next string" because we saw it being introduced by the function call, "getNextString()". We then use it twice on the next couple of lines. There are no other strings being introduced in the same vicinity to confuse it with, and the context in which it is used is kept small. There is no ambiguity and the full identity is revealed in the immediate vicinity.

Sustainability

But what if we add more code, or move parts of this elsewhere? Certainly code evolves over time in ways that can make things less clear if we don't change them. That's true regardless. Naming of the entities at play should always form part of your consideration when refactoring or otherwise modifying existing code. Does it make this code less "sustainable" (to reference another property that Kevlin likes to talk about)? I don't think so. In the worst case, if you don't immediately notice that a short name has become unclear because its usage has drifted out of anaphoric range, you'll notice the next time you look at it and, momentarily, think "why is there a free variable called 's' here? What on earth is that?" You'll take a moment to find its original declaration, work out what it is, then decide to encode that in the name by renaming the variable at that point. Variable renaming is one of the safest and most ubiquitous refactorings around, so I have no qualms about deferring such identity expansion to such a time as it is needed.

Why?

But what about the other side of the argument? Is there any advantage to using a short, even single character, variable name in the first place?

This is often cast as a matter of optimising for typing speed - in a world where we typically read these names many, many times more than we write them.

While introducing even small speed bumps to writing code might discourage spending more time than necessary writing code (which in turn may discourage certain refactorings), it's not really about typing performance at all - it's about readability! Consider again the linguistic definition of anaphora: substituting an unambiguous, subsequent reference to an entity with a shorter form (e.g. a pronoun) that means the same thing. We do this all the time in natural speech and the written word. Why? Because it would sound unnatural and cumbersome to fully qualify every entity we talk about all the time!

The same applies in programming. Where it is perfectly clear from the immediate context what an identifier refers to, then using greater verbosity actually increases the cognitive friction! The more unnecessary and redundant noise and ceremony we can strip away from our code, the easier it will be to read, in a shorter period of time. The fact that anaphora is so common in natural language should give us a clue as to our ability to code with its use in a natural and efficient way.

Now I've only mentally organised my thoughts around this as a result of ruminating on those two talks - and some of the offshoot discussions - but I realise this is essentially how I had interpreted The Scope Rule. Now that I've worked it through, when I go back and compare it with what Mr Martin actually said, his version sounds like a poor proxy for the anaphoric interpretation.

So naming - good naming - is still hard. We've only just discussed one narrow aspect here. But perhaps this has made some of it that little bit easier.

ACCU Conference 2015

Products, the Universe and Everything from Products, the Universe and Everything

We spent last week at the ACCU Conference in Bristol having our brains filled with gloopy tech goodness. It was (as ever) a real blast.

chandler_carruth presenting 'C++: Easier, Faster, Safer' at ACCU 2015

We had our demo rig with us as usual, and a steady stream of folks came along to chat to us and acquire caffeine from the espresso machine on the table next door. We came back absolutely exhausted, so no detailed photoblog this time I'm afraid!

However, for me the highlight was Chandler Carruth's closing keynote C++: Easier, Faster, Safer, in which he talked about how Google were using Clang and LLVM to (among other things) perform large scale automated refactoring (cue lots of furious scribbling...)

The synopsis of the keynote says it far better than I could:

Over the past five years, the prospect of developing large software projects in C++ has changed dramatically. We have had not one but two new language standards. An amazing array of new features are available today that make the language more elegant, expressive, and easy to use. But that isn't the only change in the last five years. LLVM and Clang have helped kick start a new ecosystem of tools that make developing C++ easier, faster, and safer than ever before.

This talk will cover practical ways you can use the tools we have built in the LLVM and Clang projects. It will show you what problems they solve and why those problems matter. It will also help teach you the most effective ways we have found to use all of these tools in conjunction with modern C++ coding patterns and practices. In short, it will show you how to make *your* C++ development experience easier, faster, and safer.

Next year's conference is provisionally scheduled for 19th-23rd April. See you there!
