Scientific management of software production

Derek Jones from The Shape of Code

When Frederick Taylor investigated the performance of workers in various industries, at the start of the 1900s, he found that workers organise their work to suit themselves; workers were capable of producing significantly more than they routinely produced. This was hardly news. What made Taylor’s work different was that, having discovered the huge difference between actual worker output and what he calculated could be achieved in practice, he was able to change work practices to achieve close to what he had calculated to be possible. Changing work practices took several years, and the workers did everything they could to resist it (Taylor’s The Principles of Scientific Management is an honest and revealing account of his struggles).

Significantly increasing worker output pushed company profits through the roof, and managers everywhere wanted a piece of the action; scientific management took off. Note: scientific management is not a science of work, it is a science of the management of other people’s work.

The scientific management approach has been successfully applied to production where most of the work can be reduced to purely manual activities (i.e., requiring little thinking by those who perform them). The essence of the approach is to break tasks down into their smallest component parts, to simplify these components so they can be performed by less skilled workers, and to rearrange tasks in a way that gives management control over the production process. Deskilling tasks increases the size of the pool of potential workers, decreasing labor costs and increasing the interchangeability of workers.

Given the almost universal use of this management technique, it is to be expected that managers will attempt to apply it to the production of software. The software factory was tried, but did not take off. The use of chief programmer teams had its origins in the scarcity of skilled staff; the idea is that somebody who knows what they are doing divides up the work into chunks that can be implemented by less skilled staff. This approach is essentially the early stages of scientific management, but it did not gain traction (see “Programmers and Managers: The Routinization of Computer Programming in the United States” by Kraft).

The production of software is different in that once the first copy has been created, the cost of reproduction is virtually zero. The human effort invested in creating software systems is primarily cognitive. The division between management and workers is along the lines of what they think about, not between thinking and physical effort.

Software systems can be broken down into simpler components (assuming all the requirements are known), but can the implementation of these components be simplified such that they can be implemented by less skilled developers? The process of simplification is practical when designing a system for repetitive reproduction (e.g., making the same widget again and again), but the first implementation of anything is unlikely to be simple (and only one implementation is needed for software).

If it is not possible to break down the implementation such that most of the work is easy to do, can we at least hire the most productive developers?

How productive are different developers? Programmer productivity has been a hot topic since people started writing software, but almost no effective research has been done.

I have no idea how to measure programmer productivity, but I do have some ideas about how to measure their performance (a high performance programmer can have zero productivity by writing programs, faster than anybody else, that don’t do anything useful, from the client’s perspective).

When the same task is repeatedly performed by different people it is possible to obtain some measure of average/minimum/maximum individual performance.

Task performance improves with practice, and an individual’s initial task performance will depend on their prior experience. Measuring performance based on a single implementation of a task provides some indication of minimum performance. To obtain information on an individual’s maximum performance they need to be measured over multiple performances of the same task (and of course working in a team affects performance).

Should high performance programmers be paid more than low performance programmers (ignoring the issue of productivity)? I am in favour of doing this.

What about productivity payments, e.g., piece work?

This question is a minefield of issues. Manual workers have been repeatedly found to set informal quotas amongst themselves, i.e., setting a maximum on the amount they will produce during a shift (see “Money and Motivation: An Analysis of Incentives in Industry” by William Whyte). Thankfully, I don’t think I will be in a position to have to address this issue anytime soon (i.e., I don’t see a reliable measure of programmer productivity being discovered in the foreseeable future).

Performance variation in 2,386 ‘identical’ processors

Derek Jones from The Shape of Code

Every microprocessor is different: random variations in the manufacturing process result in transistors, and the connections between them, being fabricated with more/less atoms. An atom here and there makes very little difference when components are built from millions, or even thousands, of atoms. The width of the connections between transistors in modern devices might only be a dozen or so atoms, and an atom here and there can have a noticeable impact.

How does an atom here and there affect performance? Don’t all processors of the same product, clocked at the same frequency, deliver the same performance?

Yes they do: an atom here or there does not cause a processor to execute more/less instructions at a given frequency. But an atom here and there changes the thermal characteristics of a processor, i.e., causes it to heat up faster/slower. High performance processors will reduce their operating frequency, or voltage, to prevent self-destruction (by overheating).

Processors operating within the same maximum power budget (say 65 Watts) may execute more/less instructions per second because they have slowed themselves down.
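
As a toy illustration of this mechanism (invented numbers, nothing to do with the measurements below): if each chip needs a slightly different amount of power per GHz, then under the same power cap each chip settles at a slightly different sustained frequency, and hence executes a slightly different number of instructions per second.

import kotlin.random.Random

// Toy model with invented numbers: per-chip variation in watts-per-GHz
// translates into per-chip variation in the frequency sustainable under
// a fixed 65 Watt power cap.
fun main() {
    val powerCapWatts = 65.0
    val nominalGHz = 2.6                       // hypothetical rated frequency
    val rng = Random(42)
    repeat(5) { chip ->
        val wattsPerGHz = 25.0 * (1.0 + 0.05 * rng.nextDouble())
        val sustainedGHz = minOf(nominalGHz, powerCapWatts / wattsPerGHz)
        println("chip $chip sustains %.2f GHz under the cap".format(sustainedGHz))
    }
}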

Some years ago I spotted a great example of ‘identical’ processor performance variation, and the author of the example, Barry Rountree, kindly sent me the data. In the weeks before Christmas I finally got around to including the data in my evidence-based software engineering book. Unfortunately, I could not figure out what was what in the data (relearning an important lesson: make sure to understand the data as soon as it arrives); thankfully, Barry came to the rescue and spent some time doing software archeology to figure out what the data contained.

The original plots showed frequency/time data of 2,386 Intel Sandy Bridge XEON processors (in a high performance computer at the Lawrence Livermore National Laboratory) executing the EP benchmark (the data also includes measurements from the MG benchmark, part of the NAS Parallel benchmark) at various maximum power limits (see plot at end of post, which is normalised based on performance at 115 Watts). The plot below shows frequency/time for a maximum power of 65 Watts, along with violin plots showing the spread of processors running at a given frequency and taking a given number of seconds (my code, code+data on Barry’s github repo):

Frequency vs Time at 65 Watts

The expected frequency/time behavior is for processors to lie along a straight line running from top left to bottom right, which is roughly what happens here. I imagine (waving my software arms about) that the variation in behavior comes from interactions with the other hardware devices each processor is connected to (e.g., memory, which presumably has its own temperature characteristics). Memory performance can have a big impact on benchmark performance. Measurements at some of the other maximum power limits, and for the other benchmark, have very different characteristics (see below).

More details and analysis in the paper: An empirical survey of performance and energy efficiency variation on Intel processors.

Intel’s Sandy Bridge is now around seven years old, and the number of atoms used to fabricate transistors and their connectors has shrunk and shrunk. An atom here and there is likely to produce even more variation in the performance of today’s processors.

A previous post discussed the impact of a variety of random variations on program performance.

Below is a png version of the original plot I saw:

Frequency vs Time at all power levels

Modular vs. monolithic programs: a big performance difference

Derek Jones from The Shape of Code

For a long time now I have been telling people that no experiment has found a situation where the treatment (e.g., use of a technique or tool) produces a performance difference that is larger than the performance difference between the subjects.

The usual results are that differences between people are the source of the largest performance difference; successive runs are the next largest (i.e., people get better with practice); and the smallest performance difference occurs between using/not using the technique or tool.

This is rather disheartening news.

While rummaging through a pile of books I had not looked at in many years, I (re)discovered the paper “An empirical study of the effects of modularity on program modifiability” by Korson and Vaishnavi, in “Empirical Studies of Programmers” (the first one in the series). It’s based on Korson’s 1988 PhD thesis, with the same title.

There were four experiments, involving seven people from industry and nine students, each requiring subjects to modify a 900(ish)-line program in some way. There were two versions of each program: one was written in a modular form, while the other was monolithic. Subjects were permuted between various combinations of program version/problem, but all problems were solved in the same order.

The performance data was published in the paper, so I fitted various regression models to it (code+data). There is enough information in the data to separate out the effects of modular/monolithic, kind of problem, and subject differences. Because all subjects solved problems in the same order, it is not possible to extract the impact of learning on performance.

The modular/monolithic performance difference was around twice as large as the difference between subjects (removing two very poorly performing subjects reduces the difference to 1.5). I’m going to have to change my slides.

Would the performance difference have been so large if all the subjects had been experienced developers? There is not a lot of well written modular code out there, and so experienced developers get lots of practice with spaghetti code. But, even if the performance difference is of the same order as the difference between developers, that is still a very worthwhile difference.

Now there are lots of ways to write a program in modular form, and we don’t know what kind of job Korson did in creating, or locating, his modular programs.

There are also lots of ways of writing a monolithic program, some of them might be easy to modify, others a tangled mess. Were these programs intentionally written as spaghetti code, or was some effort put into making them easy to modify?

The good news from the Korson study is that there appears to be a technique that delivers larger performance improvements than the difference between people (replication needed). We can quibble over how modular a modular program needs to be, and how spaghetti-like a monolithic program has to be.

Performance of Java 2D drawing operations (part 3: image opacity)

Andy Balaam from Andy Balaam's Blog

Series: operations, images, opacity

Not because I was enjoying it, but because I felt compelled to, I have continued my quest to understand the performance of various Java 2D drawing operations. I’m hoping to make my game Rabbit Escape faster, especially on the Raspberry Pi, so you may see another post sometime actually trying this stuff out on a Pi.

But for now, here are the results of my investigation into how different patterns of opacity in images affect rendering performance.

You can find the code here: gitlab.com/andybalaam/java-2d-performance.

Results

  • Images with partially-opaque pixels are no slower than those with fully-opaque pixels
  • Large transparent areas in images are drawn quite quickly, but transparent pixels mixed with non-transparent are slow

Advice

  • Still avoid any transparency whenever possible
  • It’s relatively OK to use large transparent areas on images (e.g. a fixed-size animation where a character moves through the image)
  • Don’t bother restricting pixels to be either fully transparent or fully opaque – partially-opaque is fine

Opacity patterns in images

Non-transparent images drew at 76 FPS, and transparent ones dropped to 45 FPS.

I went further into investigating transparency by creating images that were:

  • All pixels 50% opacity (34 FPS)
  • Half pixels 0% opacity, half 100%, mixed up (34 FPS)
  • Double the size of the original image, but the extra area is fully transparent, and the original area is non-transparent (41 FPS)

I concluded that partial opacity costs no more than full opacity, but that large areas of transparency are relatively fast compared with images containing complex patterns of transparent and opaque pixels.
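
For reference, test images along these lines can be built directly from ARGB pixel values; the sketch below is only illustrative (the sizes, colour and checkerboard pattern are my choices, not necessarily those used in the experiments).

import java.awt.image.BufferedImage

// Build ARGB test images with different opacity patterns.
fun solidImage(w: Int, h: Int, alpha: Int): BufferedImage {
    val img = BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB)
    val rgb = 0x3366CC                       // arbitrary colour
    for (y in 0 until h) {
        for (x in 0 until w) {
            img.setRGB(x, y, (alpha shl 24) or rgb)
        }
    }
    return img
}

// Fully opaque, uniformly 50% opaque, and a checkerboard of 0%/100% pixels.
val opaque = solidImage(20, 20, 0xFF)
val halfOpaque = solidImage(20, 20, 0x80)
val mixed = BufferedImage(20, 20, BufferedImage.TYPE_INT_ARGB).also { img ->
    for (y in 0 until 20) {
        for (x in 0 until 20) {
            val alpha = if ((x + y) % 2 == 0) 0xFF else 0x00
            img.setRGB(x, y, (alpha shl 24) or 0x3366CC)
        }
    }
}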

Numbers

Transparency and opacity

Test FPS
large nothing 90
large images20 largeimages 76
large images20 largeimages transparentimages 45
large images20 largeimages transparent50pcimages 34
large images20 largeimages transparent0pc100pcimages 34
large images20 largeimages transparentareaimages 41

Feedback please

Please do get back to me with tips about how to improve the performance of my experimental code.

Feel free to log issues, make merge requests or add comments to the blog post.

Performance of Java 2D drawing operations (part 2: image issues)

Andy Balaam from Andy Balaam's Blog

Series: operations, images

In my previous post I examined the performance of various drawing operations in Java 2D rendering. Here I look at some specifics around rendering images, with an eye to finding optimisations I can apply to my game Rabbit Escape.

You can find the code here: gitlab.com/andybalaam/java-2d-performance.

Results

  • Drawing images with transparent sections is very slow
  • Drawing one large image is slower than drawing many small images covering the same area(!)
  • Drawing images outside the screen is slower than not drawing them at all (but faster than drawing them onto a visible area)

Advice

  • Avoid transparent images where possible
  • Don’t bother pre-rendering your background tiles onto a single image
  • Don’t draw images that are off-screen

Images with transparency

All the images I used were PNG files with a transparency layer, but in most of my experiments there were no transparent pixels. When I used images with transparent pixels the frame rate was much slower, dropping from 78 to 46 FPS. So using images with transparent pixels causes a significant performance hit.

I’d be grateful if someone who knows more about it can recommend how to improve my program to reduce this impact – I suspect there may be tricks I can do around setComposite or setRenderingHint or enabling/encouraging hardware acceleration.
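
One commonly suggested trick (untested here) is to copy loaded PNGs into images whose format matches the screen, which gives the rendering pipeline a better chance of caching/accelerating them; a sketch:

import java.awt.GraphicsEnvironment
import java.awt.Transparency
import java.awt.image.BufferedImage

// Copy a loaded image into a screen-compatible format. Whether this helps
// with the transparency slowdown measured above is untested.
fun toCompatibleImage(source: BufferedImage): BufferedImage {
    val gc = GraphicsEnvironment
        .getLocalGraphicsEnvironment()
        .defaultScreenDevice
        .defaultConfiguration
    if (source.colorModel == gc.colorModel) {
        return source                        // already in the right format
    }
    val copy = gc.createCompatibleImage(
        source.width, source.height, Transparency.TRANSLUCENT
    )
    val g = copy.createGraphics()
    try {
        g.drawImage(source, 0, 0, null)
    } finally {
        g.dispose()
    }
    return copy
}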

Composite images

I assumed that drawing a single image would be much faster than covering the same area of the screen by drawing lots of small images. In fact, the result was the opposite: drawing lots of small images was much faster than drawing a single image covering the same area.

The code for a single image is:

g2d.drawImage(
    singleLargeImage,
    10,
    10,
    null
)

and for the small images it is:

for (y in 0 until 40)
{
    for (x in 0 until 60)
    {
        g2d.drawImage(
            compositeImages[(y*20 + x) % compositeImages.size],
            10 + (20 * x),
            10 + (20 * y),
            null
        )
    }
}

The single large image was rendered at 74 FPS, whereas covering the same area using repeated copies of 100 images was rendered at 80 FPS. I ran this test several times because I found the result surprising, and it was consistent every time.

I have to assume some caching (possibly via accelerated graphics) of the small images is the explanation.

Drawing images off the side of the screen

Drawing images off the side of the screen was faster than drawing them in a visible area, but slower than not drawing them at all. I tested this by adding 10,000 to the x and y positions of the images being drawn (I also tested subtracting 10,000, with similar results). Not drawing any images ran at 93 FPS, drawing images on-screen ran at 80 FPS, and drawing them off-screen ran at 83 FPS, so drawing images off the side of the screen still takes significant time compared with not drawing them at all.

Advice: check whether images are on-screen, and avoid drawing them if not.
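
A minimal sketch of the kind of check I mean (screenWidth and screenHeight stand in for whatever the game already knows about the canvas size):

import java.awt.Graphics2D
import java.awt.image.BufferedImage

// Skip the drawImage call when the image cannot intersect the visible area.
fun drawIfVisible(
    g2d: Graphics2D,
    image: BufferedImage,
    x: Int,
    y: Int,
    screenWidth: Int,
    screenHeight: Int
) {
    val onScreen =
        x + image.width > 0 && x < screenWidth &&
        y + image.height > 0 && y < screenHeight
    if (onScreen) {
        g2d.drawImage(image, x, y, null)
    }
}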

Numbers

Transparency

Test FPS
large nothing 95
large images20 largeimages 78
large images20 largeimages transparentimages 46

Composite images

(Lots of small images covering an area, or a single larger image.)

Test FPS
large nothing 87
large largesingleimage 74
large compositeimage 80

Offscreen images

Test FPS
large nothing 93
large images20 largeimages 80
large images20 largeimages offscreenimages 83

Feedback please

Please do get back to me with tips about how to improve the performance of my experimental code.

Feel free to log issues, make merge requests or add comments to the blog post.

Performance of Java 2D drawing operations

Andy Balaam from Andy Balaam's Blog

I want to remodel the desktop UI of my game Rabbit Escape to be more convenient and nicer looking, so I took a new look at game-loop-style graphics rendering onto a canvas in a Java 2D (Swing) UI.

Specifically, how fast can it be, and what pitfalls should I avoid when I’m doing it?

Results

  • Larger windows are (much) slower
  • Resizing images on-the-fly is very slow, even if they are the same size every time
  • Drawing small images is fast, but drawing large images is slow
  • Drawing rectangles is fast
  • Drawing text is fast
  • Drawing Swing widgets in front of a canvas is fast
  • Creating fonts on-the-fly is a tiny bit slow

Code

You can find the full code (written in Kotlin) at gitlab.com/andybalaam/java-2d-performance.

Basically, we make a JFrame and a Canvas and tell them not to listen to repaints (i.e. we control their drawing).

val app = JFrame()
app.ignoreRepaint = true
val canvas = Canvas()
canvas.ignoreRepaint = true

Then we add any buttons to the JFrame, and the canvas last (so it displays behind):

app.add(button)
app.add(canvas)

Now we make the canvas double-buffered and get hold of a buffer image for it:

app.isVisible = true
canvas.createBufferStrategy(2)
val bufferStrategy = canvas.bufferStrategy
val bufferedImage = GraphicsEnvironment
    .getLocalGraphicsEnvironment()
    .defaultScreenDevice
    .defaultConfiguration
    .createCompatibleImage(config.width, config.height)

Then inside a tight loop we draw onto the buffer image:

val g2d = bufferedImage.createGraphics()
try
{
    g2d.color = backgroundColor
    g2d.fillRect(0, 0, config.width, config.height)

    ... the different drawing operations go here ...

and then swap the buffers:

    val graphics = bufferStrategy.drawGraphics
    try {
        graphics.drawImage(bufferedImage, 0, 0, null)
        if (!bufferStrategy.contentsLost()) {
            bufferStrategy.show()
        }
    } finally {
        graphics.dispose()
    }
} finally {
    g2d.dispose()
}

Results

Baseline: some rectangles

I decided to compare everything against drawing 20 rectangles at random points on the screen, since that seems like a minimal requirement for a game.

My test machine is an Intel Core 2 Duo E6550 2.33GHz with 6GB RAM and a GeForce GT 740 graphics card (I have no idea whether it is being used here – I assume not). I am running Ubuntu 18.04.1 Linux, OpenJDK Java 1.8.0_191, and Kotlin 1.3.20-release-116. (I expect the results would be identical if I were using Java rather than Kotlin.)

I ran all the tests in two window sizes: 1600×900 and 640×480. 640×480 was embarrassingly fast for all tests, but 1600×900 struggled with some of the tasks.

Drawing rectangles looks like this:

g2d.color = Color(
    rand.nextInt(256),
    rand.nextInt(256),
    rand.nextInt(256)
)
g2d.fillRect(
    rand.nextInt(config.width / 2),
    rand.nextInt(config.height / 2),
    rand.nextInt(config.width / 2),
    rand.nextInt(config.height / 2)
)

In the small window, the baseline (20 rectangles) ran at 553 FPS. In the large window it ran at 87 FPS.

I didn’t do any statistics on these numbers because I am too lazy. Feel free to do it properly and let me know the results – I will happily update the article.

Fewer rectangles

When I reduced the number of rectangles to do less drawing work, I saw small improvements in performance. In the small window, drawing 2 rectangles instead of 20 increased the frame rate from 553 to 639, but there is a lot of noise in those results, and other runs were much closer. In the large window, the same reduction improved the frame rate from 87 to 92. This is not a huge speed-up, showing that drawing rectangles is pretty fast.

Adding fixed-size images

Drawing pre-scaled images looks like this:

g2d.drawImage(
    image,
    rand.nextInt(config.width),
    rand.nextInt(config.height),
    null
)

When I added 20 small images (40×40 pixels) to be drawn in each frame, the performance was almost unchanged. In the small window, the run drawing 20 images per frame (as well as the rectangles) actually ran faster than the one without (561 FPS versus 553), suggesting the difference is negligible and I should do some statistics. In the large window, the 20 images version ran at exactly the same speed (87 FPS).

So, it looks like drawing small images costs almost nothing.

When I moved to large images (400×400 pixels), the small window slowed down from 553 to 446 FPS, and the large window slowed from 87 to 73 FPS, so larger images clearly have an impact, and we will need to limit the number and size of images to keep the frame rate acceptable.

Scaling images on the fly

You can scale an image on the fly as you draw onto a Canvas. (Spoiler: don’t do this!)

My code looks like:

val s = config.imageSize
val x1 = rand.nextInt(config.width)
val y1 = rand.nextInt(config.height)
val x2 = x1 + s
val y2 = y1 + s
g2d.drawImage(
    unscaledImage,
    x1, y1, x2, y2,
    0, 0, unscaledImageWidth, unscaledImageHeight,
    null
)

Note the 10-argument form of drawImage is being used. You can be sure you have avoided this situation if you use the 4-argument form from the previous section.

Note: the resulting image is the same size every time, and the Java documentation implies that scaled images may be cached by the system, but I saw a huge slow-down when using the 10-argument form of drawImage above.

On-the-fly scaled images slowed the small window from 446 to 67 FPS(!), and the large window from 73 to 31 FPS, meaning the exact same rendering took over twice as long.

Advice: check you are not using one of the drawImage overloads that scales images! Pre-scale them yourself (e.g. with getScaledInstance as I did here).
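
One way to do the pre-scaling, as a sketch (this pays the scaling cost once, up front; the repository code uses getScaledInstance to the same effect):

import java.awt.RenderingHints
import java.awt.image.BufferedImage

fun preScale(source: BufferedImage, width: Int, height: Int): BufferedImage {
    val result = BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB)
    val g = result.createGraphics()
    try {
        g.setRenderingHint(
            RenderingHints.KEY_INTERPOLATION,
            RenderingHints.VALUE_INTERPOLATION_BILINEAR
        )
        // the expensive scaled draw happens exactly once, outside the game loop
        g.drawImage(source, 0, 0, width, height, null)
    } finally {
        g.dispose()
    }
    return result
}

// Inside the game loop, only the cheap 4-argument form is needed:
// g2d.drawImage(preScaledImage, x, y, null)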

Displaying text

Drawing text on the canvas like this:

g2d.font = Font("Courier New", Font.PLAIN, 12)
g2d.color = Color.GREEN
g2d.drawString("FPS: $fpsLastSecond", 20, 20 + i * 14)

had a similar impact to drawing small images – i.e. it only affected the performance very slightly and is generally quite fast. The small window went from 553 to 581 FPS, and the large window from 87 to 88, so any cost is lost in the measurement noise.

Creating the font every time (as shown above) slowed the process a little more, so it is worth moving the font creation out of the game loop and only doing it once. The slowdown just for creating the font was 581 to 572 FPS in the small window, and 88 to 86 FPS in the large.
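
For clarity, “moving the font creation out of the game loop” just means something like this:

// Create the Font once, outside the game loop, and reuse it every frame.
val hudFont = java.awt.Font("Courier New", java.awt.Font.PLAIN, 12)

// ... inside the loop ...
// g2d.font = hudFont
// g2d.color = Color.GREEN
// g2d.drawString("FPS: $fpsLastSecond", 20, 20)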

Swing widgets

By adding Button widgets to the JFrame before the Canvas, I was able to display them in front. Their rendering and focus worked as expected, and they had no impact at all on performance.

The same was true when I tried adding these widgets in front of images rendered on the canvas (instead of rectangles).

Turning everything up to 11

When I added everything I had tested, all at the same time – rectangles, text with a new font every time, large unscaled images, and a large window – the frame rate dropped to 30 FPS. This is a little slow for a game already, and if we had more images to draw it could get even worse. However, when I pre-scaled the images the frame rate went up to 72 FPS, showing that Java is capable of running a game at an acceptable frame rate on my machine, so long as we are careful how we use it.

Numbers

Small window (640×480)

Test FPS
nothing 661
rectangles2 639
rectangles20 553
rectangles20 images2 538
rectangles20 images20 561
rectangles20 images20 largeimages 446
rectangles20 images20 unscaledimages 343
rectangles20 images20 largeimages unscaledimages 67
rectangles20 text2 582
rectangles20 text20 581
rectangles20 text20 newfont 572
rectangles20 buttons2 598
rectangles20 buttons20 612

Large window (1200×900)

Test FPS
large nothing 93
large rectangles2 92
large rectangles20 87
large rectangles20 images2 87
large rectangles20 images20 87
large rectangles20 images20 largeimages 73
large rectangles20 images20 unscaledimages 82
large rectangles20 images20 largeimages unscaledimages 31
large rectangles20 text2 89
large rectangles20 text20 88
large rectangles20 text20 newfont 86
large rectangles20 buttons2 88
large rectangles20 buttons20 87
large images20 buttons20 largeimages 74
large rectangles20 images20 text20 buttons20 largeimages newfont 72
large rectangles20 images20 text20 buttons20 largeimages unscaledimages newfont 30

Feedback please

Please do get back to me with tips about how to improve the performance of my experimental code.

Feel free to log issues, make merge requests or add comments to the blog post.

Impact of group size and practice on manual performance

Derek Jones from The Shape of Code

How performance varies with group size is an interesting question that is still an unresearched area of software engineering. The impact of learning is also an interesting question and there has been some software engineering research in this area.

I recently read a very interesting study involving both group size and learning, and Jaakko Peltokorpi kindly sent me a copy of the data.

That is the good news; the not so good news is that the experiment was not about software engineering, but the manual assembly of a contraption of the experimenters’ devising. Still, this experiment is an example of the impact of group size and learning (through repeating the task).

Subjects worked in groups of one to four people and repeated the task four times. The time taken to assemble a bespoke, floor-standing rack with some odd-looking connections between components was measured (the image in the paper shows something that might function as a floor-standing bookcase, if shelves were added, apart from some component connections getting in the way).

The following equation is a very good fit to the data (code+data). There is theory explaining why log(repetitions) applies, but the division by group size was found by suck-it-and-see (in another post I found that time spent planning increased with team size):

time = 0.16 + 0.53/group_size - log(repetitions)*(0.1 + 0.22/group_size)

There is a strong repetition/group-size interaction: as the group size increases, repetition has less of an impact on improving performance.
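
As a quick sanity check of what the fitted model implies (a sketch: I am assuming the natural log, and the hours units used in the plots below):

import kotlin.math.ln

// Predicted assembly time (hours) from the fitted model, assuming natural log.
fun predictedTime(groupSize: Int, repetitions: Int): Double =
    0.16 + 0.53 / groupSize - ln(repetitions.toDouble()) * (0.1 + 0.22 / groupSize)

fun main() {
    for (groupSize in 1..4) {
        val times = (1..4).map { rep -> "%.2f".format(predictedTime(groupSize, rep)) }
        println("group of $groupSize: $times hours for repetitions 1..4")
    }
}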

The following plot shows one way of looking at the data (larger groups take less time, but the difference declines with practice):

Time taken (hours) for various group sizes, by repetition.

and here is another (a group of two is not twice as fast as a group of one; with practice smaller groups are converging on the performance of larger groups):

Time taken (hours) for various repetitions, by group size.

Would the same kind of equation fit the results from solving a software engineering task? Hopefully somebody will run an experiment to find out :-)

Publishing information on project progress: will it impact delivery?

Derek Jones from The Shape of Code

Numbers for delivery date and cost estimates, for a software project, depend on who you ask (the same is probably true for other kinds of projects). The people actually doing the work are likely to have the most accurate information, but their estimates can still be wildly optimistic. The managers of the people doing the work have to plan (i.e., make worst/best case estimates) and deal with people outside the team (i.e., sell the project to those paying for it); planning requires knowledge of where things are and where they need to be, while selling requires being flexible with numbers.

A few weeks ago I was at a hackathon organized by the people behind the Project Data and Analytics meetup. The organizers (Martin Paver & co.) had obtained some very interesting project related data sets. I worked on the Australian ICT dashboard data.

The Australian ICT dashboard data was courtesy of the Queensland state government, which has a publicly available dashboard listing digital project expenditure; the Victorian state government also has a dashboard listing ICT expenditure. James Smith has been collecting this data on a monthly basis.

What information might meaningfully be extracted from monthly estimates of project delivery dates and costs?

If you were running one of these projects, and had to provide monthly figures, what strategy would you use to select the numbers? Obviously, keep quiet about internal changes for as long as possible (today’s reduction can be used to offset a later increase, or vice versa). If the client requests changes which impact date/cost, then obviously update the numbers immediately; the answer to the question about why the numbers changed is that “we are responding to client requests” (i.e., we would otherwise still be on track to meet the original end-points).

What is the intended purpose of publishing this information? Is it simply a case of the public getting fed up with overruns, with the publication of monthly numbers seen as a solution?

What impact could monthly publication have? Will clients think twice before requesting an enhancement, fearing public push back? Will companies doing the work make more reliable estimates, or work harder?

Project delivery dates/costs change because new functionality/work-to-do is discovered, because the appropriate staff could not be hired, and because of other assorted unknown knowns and unknowns.

Who is looking at this data (apart from half a dozen people at a hackathon on the other side of the world)?

Data on specific projects can only be interpreted in the context of that project. There is some interesting research to be done on the impact of public availability on client and vendor reporting behavior.

Will publication have an impact on performance? One way to get some idea is to run an A/B experiment. Some projects have their data made public, others don’t. Wait a few years, and compare project performance for the two publication regimes.

Time taken to compile a source file

Derek Jones from The Shape of Code

How long will it take to compile a source file?

When computers were a lot slower than they are today, this question was of general interest. Job scheduling is more effective when reliable runtime estimates are available, and developers want to know if there is enough time to get a coffee before the compile finishes.

An embarrassing fact about compile time performance used to be that a large percentage of compile time was spent doing lexical analysis [“The cost of lexical analysis”, I cannot find an online copy]. Why was this embarrassing? Compiler writers like to boast about all the fancy optimizations their compiler does, but doing fancy stuff consumes lots of resources, so why were compilers spending so much of their time doing simple things like lexical analysis? The reality was that fancy compiler optimizations were not commercially viable until developer computers contained tens of megabytes of memory, i.e., very few pre-1990 compilers did any real optimization (people are still fussing over lexer performance).

An analysis of the data in Captain Dennis Miller’s Masters thesis (late Rome period) finds that compile time is proportional to the square root of the number of tokens in the source (code+data); more complicated models are a slightly better fit. Where did square root come from? I expected a linear relationship, but would be willing to go with log. The measurements are from Ada compilers in the mid 1980s. I know several people who worked on Ada compilers during that time, and they were implementing the latest fancy optimizations (Ada was going to be the next big thing and the venture capital was flowing; big companies with big computers were going to be paying lots of money to use Ada, but then microcomputers came along). I think that square root is driven by OS resource limitations: the compilers are using lots of memory, and a noticeable amount of time is spent swapping.
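
For concreteness, the fitted model is just a linear regression on the square root of the token count; fitting it looks something like the sketch below (the arrays are placeholders, not the thesis measurements):

import kotlin.math.sqrt

// A minimal sketch of fitting compileTime = a + b*sqrt(tokens) by ordinary
// least squares; tokens/times stand in for the thesis data.
fun fitSqrtModel(tokens: DoubleArray, times: DoubleArray): Pair<Double, Double> {
    val x = tokens.map { sqrt(it) }          // transform the predictor
    val xBar = x.average()
    val yBar = times.average()
    val b = x.indices.sumOf { (x[it] - xBar) * (times[it] - yBar) } /
            x.sumOf { (it - xBar) * (it - xBar) }
    val a = yBar - b * xBar
    return a to b                            // intercept and slope
}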

So computers got a lot faster and people lost interest in estimates of how long it would take to compile individual files. I have not seen any interest in predicting how long it would take to compile whole projects (just complaints about how long it takes). There has been some work on progress indicators, updated as compilation progresses, which is a step in the right direction. Perhaps somebody has recorded compile time information and thrown machine learning at it; I usually ignore machine learning papers applied to software engineering and perhaps I have missed something. Pointers to project compile time prediction work welcome.

Then along came just-in-time compilation. Now people want to estimate how long it will take to generate machine code from some intermediate form that is being interpreted.

The plot below (thanks to Rafael Auler for kindly supplying the data from his paper) shows the time taken to generate code from functions containing a given number of LLVM instructions (an intermediate code), at optimization level O3. The red line is a regression fit to one of the ‘arms’ and shows constant time for fewer than 100-ish instructions, followed by a linear relationship. I have no idea why the time is roughly constant for a large number of functions.

Time taken to convert functions containing a given number of LLVM instructions to machine code

There is a lot of variation for functions containing the same number of instructions. This is to be expected when lots of different optimizations are being tried; sometimes a function will contain lots of the kind of code that a particular optimization spends a lot of time processing, and sometimes the code will not contain anything interesting (i.e., no optimizations are found).