Visual Lint 6.5.5.300 has been released

Products, the Universe and Everything from Products, the Universe and Everything

This is a recommended maintenance update for Visual Lint 6.0 and 6.5. The following changes are included:
  • Fixed an MSBuild parsing bug which was preventing Visual Studio system include folders from being read in some circumstances.
  • Fixed a bug which could prevent the VisualLintGui code editor from determining the location of PC-lint Plus indirect files in order to open them from a context menu.
  • Updated the values of _MSC_VER and _MSC_FULL_VER in the PC-lint Plus indirect file co-rb-vs2017.lnt for compatibility with Visual Studio 2017 v15.8.3. This change is needed to fix a fatal error in yvals_core.h if _MSC_VER is less than 1915.
  • Added a PC-lint Plus compatible version of lib-stl.lnt to the installer as it is not currently supplied with PC-lint Plus.
  • Added additional indirect files needed for analysing Visual Studio 2012, 2013 and 2015 codebases with PC-lint Plus 1.2 to the installer.
  • If a project intermediate files folder does not currently exist, it will not be referenced with a -i (include folder) directive on generated PC-lint or PC-lint Plus command lines. This avoids extraneous 686 warnings "(Warning -- option '-i' is suspicious: absolute path is not accessible)".

    Note that if build artifacts (e.g. .tlh or .tli files) are required for analysis purposes, analysing without the intermediate folder will most likely result in analysis errors. In this case, performing a build and re-analysing the affected files/projects should resolve them.
Download Visual Lint 6.5.5.300

More Continuous #NoProjects questions

Allan Kelly from Allan Kelly Associates


Three short questions and answers to finish off my series of leftover questions about #NoProjects, #NoEstimates and the Continuous model.

Q4: How do we prioritize and organize requests on a product when they come from opposing business owners? For example, how can requests from legal (who want to reduce risk, even if that annoys more customers) and sales (who want to add features and simplify customers’ lives) be arbitrated in a backlog?

You can think of this as “which is worth more, apples or milk?” It is difficult to compare two things which are genuinely different. Yes, they are both work requests – or both groceries – and each can make a case, but at the end of the day you can’t make everything priority number 1.

In real life we solve this problem with money.

Walk into your local supermarket. Apples, oranges and milk are all priced in the same currency: sterling for me, Francs for the person who asked this question, maybe Euros or Dollars for you. So if we can assign value points to each request we are halfway to solving the problem.

Now sales will argue that without their request there is no real money so whatever they ask for is worth more. And legal will argue that nobody wants to go to jail so their request must be worth more. You can set your analyst to work to calculate a value but a) this will take time and b) even when they have an answer people will dispute it.

Therefore, I would estimate a value – planning poker style. With an estimated value there is no pretence of being “right” or “correct”. Each party gives a position and a discussion follows. With luck the different sides converge; if they don’t, I average. Once all requests are valued you have a first cut at prioritisation.
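
A made-up illustration: for a single request from legal, the estimators bid 8, 5 and 3 value points. The 8 and the 3 explain their reasoning, a second round comes in at 6, 5 and 5, and if it stalls there you record the average, about 5.3. Repeat for the sales request and the two can now be compared on one scale.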

Q5: How to evaluate the number of people you need to maintain software?

I don’t. This is a strategic decision.

Sure, someone somewhere needs to decide how much capacity – often expressed as people – will be allocated to a particular activity, but rather than base this on need I see this as another priority decision. If a piece of software is important to an organization then it deserves more maintenance; if it is not important, it deserves less.

You could look at the size of the backlog, or the rate of new requests and contrast this with the rate at which work gets done. This would allow you to come up with an estimate of how many people are needed to support a product. But where is the consideration of value?
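
A quick illustration (numbers invented): if new requests arrive at roughly 12 a month and one person clears about 5 a month, the backlog arithmetic says 12 / 5 ≈ 2.4 people are needed just to keep pace. The sum is easy; what it cannot tell you is whether keeping pace is worth 2.4 people.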

Instead you say something like: “This product is a key part of our business but the days of big changes are gone. Therefore one person will be assigned to look after the software.”

If in three months more people in the business are demanding more changes to the software and you can see opportunities to extract more value – however you define value – then that decision might be revised. Maybe a second person is assigned.

Or maybe you decide that maintaining this product isn’t delivering more value so why bother? Reduce work to only that needed to keep it going.

Q6: How do you evaluate the fact that your application becomes twice as fast (or slower) when you add a new feature in a short period of time?

Answering this question requires that the team has a clearly defined idea of what value is. Does the organization value execution speed? Does the organization value up-time? Does the organization value capacity?

Hopefully some of this will have come out of the value estimation exercise in Q4; if not, the analysis is just going to take a bit longer. The thing to remember is: what does the change do for the business/customers/clients? Being faster is no use in itself, but doing X faster can be valuable.

The real problem here is time. Some changes lead to improvements which can be instantly measured. But there are plenty of changes where the improvements take time to show benefit. Here you might need to rely on qualitative feedback in the short run (“Sam says it is easier to use because it is faster”). Still, I would keep trying to evaluate what happens and see if you can make some quantitative assessment later.

Notice that Q4 and Q6 are closely related. If you have a clear understanding of why you are doing something (Q4) then it becomes easier to tell if you have delivered the expected value (Q6). And in trying to understand what value you have delivered, you refine your thinking about the value you might deliver with future work.

Another feedback cycle.


These questions conclude the series of questions carried over from the #NoEstimates/#NoProjects workshop in Zurich – see also How should we organize our teams?, Dealing with unplanned but urgent work and How do we organise with a parallel team? – if you would like me to answer your question in this blog then please just e-mail me.


The #NoProjects books Project Myopia and Continuous Digital discuss these and similar issues in depth and are both available to buy in electronic or physical form from Amazon.


The post More Continuous #NoProjects questions appeared first on Allan Kelly Associates.

LintProject Pro End of Life Notice

Products, the Universe and Everything from Products, the Universe and Everything

LintProject Pro is a command line only product which can perform a basic per-file analysis of a C/C++ codebase using PC-lint or CppCheck. In many ways it was the proof of concept for Visual Lint, and although it has served us well, it's getting a bit long in the tooth now.

For example, unlike Visual Lint Build Server Edition (which has inherited its capabilities), LintProject Pro only makes use of a single CPU core when running analysis and doesn't support current analysis tools such as PC-lint Plus.

The interfaces to the two products are, however, very similar, as the command line interface of Visual Lint Build Server Edition is based on that of LintProject Pro. In fact, Visual Lint Build Server Edition can do everything LintProject Pro can - along with much, much more.

As such we think it is now finally time to put LintProject Pro out to pasture, and to make that easier we are offering a migration path from LintProject Pro to Visual Lint Build Server Edition. This involves trading in each existing LintProject Pro licence purchased before 23rd October 2018 for a 25% discount on a corresponding Visual Lint Build Server Edition licence. Accordingly, LintProject Pro has now been removed from our online store.

To take advantage of the upgrade, just write to us quoting which LintProject Pro licence (or licences) you wish to trade-in.

We've tried to keep this process clear and simple. The value of the discount offered exceeds that of the LintProject Pro licence, so this is a lower-cost route to an equivalent PC-lint Plus compatible product than (for example) refunding existing LintProject Pro licences and purchasing Visual Lint Build Server Edition licences at full price.

If you have any questions, just ask.


The Art of Prolog – reading another classic programming text

Timo Geusch from The Lone C++ Coder's Blog

I did have to learn some Prolog when I was studying CS and back then it was one of those “why do we have to learn this when everybody is programming in C or Turbo Pascal” (yes, I’m old). For some strange reason things clicked for me quicker with Prolog than Lisp, which I now […]

The post The Art of Prolog – reading another classic programming text appeared first on The Lone C++ Coder's Blog.

On The Rich Get Richer – student

student from thus spake a.k.

The Baron's latest wager set Sir R----- the task of surpassing his score before he reached eight points as they each cast an eight sided die, each adding one point to their score should the roll of their die be less than or equal to it. The cost to play for Sir R----- was one coin and he should have had a prize of five coins had he succeeded.

A key observation when figuring the fairness of this wager is that if both Sir R----- and the Baron cast greater than their present scores then the state of play remains unchanged. We may therefore ignore such outcomes, provided that we adjust the probabilities of the outcomes that we have not ignored to reflect the fact that we have done so.
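
A sketch of that adjustment in modern notation (my notation, not the text's own workings): write $p$ and $q$ for the probabilities that Sir R----- and the Baron respectively cast no greater than their present scores. The chance that neither score changes is $(1-p)(1-q)$, so each outcome that does change the state has its probability rescaled to

$$\Pr(\text{outcome} \mid \text{state changed}) = \frac{\Pr(\text{outcome})}{1-(1-p)(1-q)}.$$

For example, the chance that only Sir R----- gains a point becomes $p(1-q)$ divided by that same denominator.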

Elm JSON decoder examples

Andy Balaam from Andy Balaam's Blog

I find JSON decoding in Elm confusing, so here are some thoughts and examples.

Setup

$ elm --version
0.19.0
$ mkdir myproj; cd myproj
$ elm init
...
$ elm install elm/json
...

To run the “Demo” parts of the examples below, type them into the interactive Elm interpreter. Start it like this:

$ elm repl

and import the library you need:

import Json.Decode as D

Scroll to “Concepts” at the bottom for lots of waffling about what is really going on, but if you’re looking to copy and paste concrete examples, here we are:

Examples

JSON object to Record

type alias MyRecord =
    { i : Int
    , s : String
    }

recordDecoder : D.Decoder MyRecord
recordDecoder =
    D.map2
        MyRecord
        (D.field "i" D.int)
        (D.field "s" D.string)

Demo:

> type alias MyRec = {i: Int, s: String}
> myRecDec = D.map2 MyRec (D.field "i" D.int) (D.field "s" D.string)
<internals> : D.Decoder MyRec
> D.decodeString myRecDec "{\"i\": 3, \"s\": \"bar\"}"
Ok { i = 3, s = "bar" }
    : Result D.Error MyRec

JSON array of ints to List

intArrayDecoder : D.Decoder (List Int)
intArrayDecoder =
    D.list D.int

Demo:

> myArrDec = D.list D.int
<internals> : D.Decoder (List Int)
> D.decodeString myArrDec "[3, 4]"
Ok [3,4] : Result D.Error (List Int)

JSON array of strings to List

stringArrayDecoder : D.Decoder (List String)
stringArrayDecoder =
    D.list D.string

Demo:

> myArrDec2 = D.list D.string
<internals> : D.Decoder (List String)
> D.decodeString myArrDec2 "[\"a\", \"b\"]"
Ok ["a","b"] : Result D.Error (List String)

JSON object to Dict

import Dict exposing (Dict)

intDictDecoder : D.Decoder (Dict String Int)
intDictDecoder =
    D.dict D.int

Demo:

> myDictDecoder = D.dict D.int
<internals> : D.Decoder (Dict.Dict String Int)
> D.decodeString myDictDecoder "{\"a\": \"b\"}"
Err (Field "a" (Failure ("Expecting an INT") <internals>))
    : Result D.Error (Dict.Dict String Int)
> D.decodeString myDictDecoder "{\"a\": 3}"
Ok (Dict.fromList [("a",3)])
    : Result D.Error (Dict.Dict String Int)

To build a Dict of String to String, replace D.int above with
D.string.
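
For example, a demo of the String-to-String version (myStrDictDecoder is just an invented name):

> myStrDictDecoder = D.dict D.string
<internals> : D.Decoder (Dict.Dict String String)
> D.decodeString myStrDictDecoder "{\"a\": \"b\"}"
Ok (Dict.fromList [("a","b")])
    : Result D.Error (Dict.Dict String String)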

JSON array of objects to List of Records

type alias MyRecord =
    { i : Int
    , s : String
    }

recordDecoder : D.Decoder MyRecord
recordDecoder =
    D.map2
        MyRecord
        (D.field "i" D.int)
        (D.field "s" D.string)


listOfRecordsDecoder : D.Decoder (List MyRecord)
listOfRecordsDecoder =
    D.list recordDecoder

Demo:

> import Json.Decode as D
> type alias MyRec = {i: Int, s: String}
> myRecDec = D.map2 MyRec (D.field "i" D.int) (D.field "s" D.string)
<internals> : D.Decoder MyRec
> listOfMyRecDec = D.list myRecDec
<internals> : D.Decoder (List MyRec)
> D.decodeString listOfMyRecDec "[{\"i\": 4, \"s\": \"one\"}, {\"i\": 5, \"s\":\"two\"}]"
Ok [{ i = 4, s = "one" },{ i = 5, s = "two" }]
    : Result D.Error (List MyRec)

Concepts

What is a Decoder?

A Decoder is something that describes how to take in JSON and spit out something. The “something” part is written after Decoder, so e.g. Decoder Int describes how to take in JSON and spit out an Int.

The Json.Decode module contains a function that is a Decoder Int. It’s called int:

> D.int
<internals> : D.Decoder Int

In some not-at-all-true way, a Decoder is sort of like a function:

-- This is a lie, but just pretend with me for a sec
Decoder a : SomeJSON -> a
-- That was a lie

To actually run your Decoder, provide it to a function like decodeString:

> D.decodeString D.int "45"
Ok 45 : Result D.Error Int

So the actually-true way of getting an actual function is to combine decodeString and a decoder like int:

> D.decodeString D.int
<function> : String -> Result D.Error Int

When you apply decodeString to int you get a function that takes in a String and returns either an Int or an error. The error could be because the string you passed was not valid JSON:

> D.decodeString D.int "foo bar"
Err (Failure ("This is not valid JSON! Unexpected token o in JSON at position 1") )
    : Result D.Error Int

or because the parsed JSON does not match what the Decoder you supplied expects:

> D.decodeString D.int "\"45\""
Err (Failure ("Expecting an INT") )
    : Result D.Error Int

(We supplied a String containing a JSON string, but the int Decoder expects to find a JSON int.)

Side note: ints and floats are treated as different, even though the JSON spec treats both as just “Numbers”:

> D.decodeString D.int "45.2"
Err (Failure ("Expecting an INT") )
    : Result D.Error Int
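
If the decimal point is what you want, the float decoder accepts it (and happily decodes whole JSON numbers too):

> D.decodeString D.float "45.2"
Ok 45.2 : Result D.Error Float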

What is a Value?

Elm has a type that represents JSON that has been parsed (actually, parsed and stored in a JavaScript object) but not interpreted into a useful Elm type. You can make one using the functions inside Json.Encode:

> import Json.Encode as E
> foo = E.string "foo"
<internals> : E.Value

You can even turn one of these into a String containing JSON using encode:

> E.encode 0 foo
"\"foo\"" : String

or interpret the Value as useful Elm types using decodeValue:

> D.decodeValue D.string foo
Ok "foo" : Result D.Error String

(When JSON values come from JavaScript, e.g. via flags, they actually come as Values, but you don’t usually need to worry about that.)

However, what you can’t do is pull Values apart in any way, other than the standard ways Elm gives you. So any custom Decoder that you write has to be built out of existing Decoders.

How do I write my own Decoder?

If you want to make a Decoder that does custom things, build it from the existing magic Decoders, give it a type that describes the type it outputs, and insert your code using andThen or one of the mapN functions.

For example, to decode only ints that are below 100:

> under100 i = if i < 100 then D.succeed i else (D.fail "Not under 100")
<function> : number -> D.Decoder number
> intUnder100 = D.int |> D.andThen under100
<internals> : D.Decoder Int
> D.decodeString intUnder100 "50"
Ok 50 : Result D.Error Int
> D.decodeString intUnder100 "500"
Err (Failure ("Not under 100") <internals>)
    : Result D.Error Int

Here, we use the andThen function to transform the Int value coming from calling the int function into a Decoder Int that expresses success or failure in terms of decoding. When we do actual decoding using the decodeString function, this is transformed into the more familiar Result values like Ok or Err.

If you want to understand the above, pay close attention to the types of under100 and intUnder100.

If you want to write a Decoder that returns some complex type, you should build it using the mapN functions.

For example, to decode strings into arrays of words split by spaces:

> splitIntoWords = String.split " "
<function> : String -> List String
> words = D.map splitIntoWords D.string
<internals> : D.Decoder (List String)
> D.decodeString words "\"foo bar baz\""
Ok ["foo","bar","baz"]
    : Result D.Error (List String)

Above we used map to transform a Decoder String (the provided string function) into a Decoder (List String) by mapping it over a function (splitIntoWords) that transforms a String into a List String.

Again, to understand this, look carefully at the types of splitIntoWords
and words.

How do I build up complex Decoders?

Complex decoders are built by combining simple ones. Many functions that make decoders take another decoder as an argument. A good example is “JSON array of objects to List of Records” above – there we make a Decoder MyRecord and use it to decode a whole list of records by passing it as an argument to list, so that it returns a Decoder (List MyRecord) which can take in a JSON array of JSON objects, and return a List of MyRecords.
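
The composition also nests the other way: a field decoder can take a list decoder as its argument. A small sketch (the field name items is invented):

> nestedDec = D.field "items" (D.list D.int)
<internals> : D.Decoder (List Int)
> D.decodeString nestedDec "{\"items\": [1, 2, 3]}"
Ok [1,2,3] : Result D.Error (List Int)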

Why is this so confusing?

Because Decoders are not functions, but they feel like functions. In fact they are opaque descriptions of how to interpret JSON that the Elm runtime uses to make Elm objects for you out of Values, which are opaque objects that underneath represent a piece of parsed JSON.

Students vs. professionals in software engineering experiments

Derek Jones from The Shape of Code

Experiments are an essential component of any engineering discipline. When experiments involve people as subjects, it is crucial that the subjects are representative of the population of interest.

Academic researchers have easy access to students, but find it difficult to recruit professional developers, as subjects.

If the intent is to generalize the results of an experiment to the population of students, then using students as subjects sounds reasonable.

If the intent is to generalize the results of an experiment to the population of professional software developers, then using students as subjects is questionable.

What is it about students that makes them likely to be very poor subjects to use in experiments designed to learn about the behavior and performance of professional software developers?

The difference between students and professionals is practice and experience. Professionals have spent many thousands of hours writing code and attending meetings to discuss the development of software; they have many more experiences of the activities that occur during software development.

The hours of practice reading and writing code give professional developers a fluency that enables them to concentrate on the problem being solved, not on technical coding details. Yes, there are students who have this level of fluency, but most have not spent the many hours of practice needed to achieve it.

Experience gives professional developers insight into what is unlikely to work and what may work. Without experience students have no way of evaluating the first idea that pops into their head, or a situation presented to them in an experiment.

People working in industry are well aware of the difference between students and professional developers. Every year a fresh batch of graduates starts work in industry. The difference between a new graduate and one with a few years' experience is apparent for all to see. And no, Masters and PhD students are often not much better, and in some cases worse (their prolonged sojourn in academia means they have had more opportunity to pick up impractical habits).

It’s no wonder that people in industry laugh when they hear about the results from experiments based on student subjects.

Just because somebody has “software development” in their job title does not automatically make them an appropriate subject for an experiment targeting professional developers. There are plenty of managers with people skills and minimal technical skills (sub-student level in some cases).

In the software related experiments I have run, subjects were asked how many lines of code they had read/written. The low values started at 25,000 lines. The intent was for the results of the experiments to be generalized to the population of people who regularly wrote code.

Psychology journals are filled with experimental papers that used students as subjects. The intent is to generalize the results to the general population. It has been argued that students are not representative of the general population in that they have spent more time reading, writing and reasoning than most people. These subjects have been labeled WEIRD (Western, Educated, Industrialized, Rich and Democratic).

I spend a lot of time reading software engineering papers. If a paper involves human subjects, the first thing I do is find out whether the subjects were students (usual) or professional developers (not common). Authors sometimes put effort into dressing up their student subjects as having professional experience (perhaps some of them have spent a year or two in industry, but talking to the authors often reveals that the professional experience was tutoring other students); others say almost nothing about the identity of the subjects. Papers describing experiments using professional developers trumpet this fact in the abstract and throughout the paper.

I usually delete any paper using student subjects; some of the better ones are kept in a subdirectory called students.

Software engineering researchers are currently going through another bout of hand wringing over the use of student subjects. One paper makes the point that a student-based experiment is a good way of validating an experiment that will later involve professional developers. This is a good point, but it ignores the problem that researchers rarely move on to using professional subjects; many researchers only ever intend to run student-based experiments. Also, they publish the results from the student-based experiments, which are at best misleading (but academics get credit for publishing papers, not for the content of the papers).

Researchers are complaining that reviewers are rejecting their papers on student-based experiments. I'm pleased to hear that reviewers are rejecting these papers.

The best or most compiler writers born in February?

Derek Jones from The Shape of Code

Some years ago, now, I ran a poll asking about readers' month of birth and whether they had worked on a compiler. One hypothesis was that the best compiler writers are born in February; an alternative hypothesis was that most compiler writers are born in February.

I have finally gotten around to analyzing the data, and below is the Rose diagram for the 82 compiler writers among the 132 responses (the green arrow shows the direction and magnitude of the mean; code+data):

Rose diagram of birth month of compiler writers

At 15% of responses, February is the most common month for compiler writer birthdays. The percentage increases to 16%, if weighted by the number of births in each month.

So there you have it, the hypothesis that most compiler writers are born in February is rejected, leaving the hypothesis that the best compiler writers are born in February. How could this not be true :-)

What about the birth month of readers who are not compiler writers? While the mean direction and length are more-or-less the same for the two populations, the Rose diagram shows that the shapes of the distributions are different:

Rose diagram of birth month of non-compiler writers