The 2019 Huawei cyber security evaluation report

Derek Jones from The Shape of Code

The UK’s Huawei cyber security evaluation centre oversight board has released its 2019 annual report.

The header and footer of every page contain the text “SECRET”, which I assume is its UK government security classification. It lends an air of mystique to what is otherwise a meandering management report.

Needless to say, the report contains the usual puffery, e.g., “HCSEC continues to have world-class security researchers…”. World-class at what? I hear they have some really good mathematicians, but have serious problems attracting good software engineers (such people can be paid a lot more, and get to do more interesting work, in industry; the industry demand for mathematicians, outside of finance, is weak).

The most interesting sentence appears on page 11: “The general requirement is that all staff must have Developed Vetting (DV) security clearance, …”. Developed Vetting is the most detailed and comprehensive form of security clearance in UK government (to quote Wikipedia).

Why do the centre’s staff have to have this level of security clearance?

The Huawei source code is not that secret (it can probably be found online, lurking in the dark corners of various security bulletin boards).

Is the real purpose of this cyber security evaluation centre to find vulnerabilities in the source code of Huawei products that GCHQ can then use to spy on people?

Or perhaps this centre is used for training purposes, with staff moving on to work within GCHQ after they have learned their trade on Huawei products?

The high level of security clearance applied to the centre’s work is the perfect smoke-screen.

The report claims to have found “Several hundred vulnerabilities and issues…”; a meaningless statement, e.g., this could mean one minor vulnerability and several hundred spelling mistakes. There is no comparison of the number of vulnerabilities found per effort invested, no comparison with previous years, no classification of the seriousness of the problems found, no mention of Huawei’s response (i.e., did Huawei agree that there was a problem?).

How many of the vulnerabilities found by the centre were also reported by other people, e.g., listed in the National Vulnerability Database? This information would give some indication of how good a job the centre was doing. Did this evaluation centre find the Huawei vulnerability recently disclosed by Microsoft? If not, why not? And if they did, why isn’t it in the 2019 report?

What about comparing the number of vulnerabilities found in Huawei products against the number found in products from US vendors, e.g., Cisco? Obviously back-doors placed in US products, at the behest of the NSA, need not be counted.

There is some technical material, starting on page 15. The configuration and component lifecycle management issues raised sound like good points, from a cyber security perspective. From a commercial perspective, Huawei wants to respond quickly to customer demand and a dynamic market; corners on good practice are likely to be cut every now and again. I don’t understand why the use of an unnamed real-time operating system was flagged: did some techie gripe slip through management review? What is a C preprocessor macro definition doing on page 29? This smacks of an attempt to gain some hacker street-cred.

Reading between the lines, I get the feeling that Huawei has been ignoring the centre’s recommendations for changes to their software development practices. If I were on the receiving end, I would probably ignore them too. People employed to do security evaluation are hired for their ability to find problems, not for their ability to make things that work; also, I imagine many are recent graduates, with little or no practical experience, who are just repeating what they remember from their course work.

Huawei should leverage its funding of a GCHQ spy training centre to get some positive publicity from the UK government. Huawei wants people to feel confident that they are not being spied on when they use Huawei products. If the government refuses to play ball, Huawei should shift its funding to a non-government, open evaluation centre. Employees would not need any security clearance and would be free to give their opinions about the presence of vulnerabilities and ‘spying code’ in the source code of Huawei products.

Visual Lint 7.0 has been released

Products, the Universe and Everything from Products, the Universe and Everything

The first public build of Visual Lint 7.0 has just been uploaded to our website.

As of today, Visual Lint 7.0 replaces Visual Lint 6.5 as the current supported Visual Lint version. Customers with active Visual Lint 6.x priority support, floating and site licence subscriptions should shortly receive updated licence keys for the new version, and upgrades for Visual Lint 5.x and 6.x per user licences will become available in our online store soon. Older editions can be upgraded manually - please contact us for details.

In addition, most customers who have purchased per-user Visual Lint licences since the start of January will shortly receive new Visual Lint 7.x compatible licence keys.

Full details of the changes in this version are as follows:

General:

  • Replaced Visual Lint Standard Edition with Visual Lint Personal Edition.

    Note: Visual Lint Personal Edition is licensed for use by individual and freelance developers rather than organisations. If your organisation has more than one member of staff, you must use Visual Lint Professional Edition or above.

    As such, organisations which have purchased Standard Edition licences in the past must upgrade them to Visual Lint 7.x Professional Edition or above if they wish to use Visual Lint 7.x.

    See Visual Lint Product Editions for details of the available product editions.

Host Environments:

  • Added support for Microsoft Visual Studio 2019 to the Visual Studio plug-in. VisualLintGui and VisualLintConsole have also been updated to support Visual Studio 2019 solution and project files.

    Note: Support for the v142 toolset is not yet complete so you may (for example) find that you need to add details of the Visual Studio 2019 system include folders to your PC-lint Plus std.lnt file in order that your analysis tool can locate system include files.

  • Removed support for Windows XP, Windows Vista and Windows Server 2003 (although the software should still function on these platforms, we no longer test on them).

Analysis Tools:

  • PC-lint Plus is now recognised as a distinct analysis tool from PC-lint 8.0/9.0. As a result it is now straightforward to switch between the two products.

  • Added PC-lint Plus specific compiler indirect file co-rb-vs2019.lnt for Visual Studio 2019.

  • The environment variable _RB_PLATFORM on the generated PC-lint Plus command line now includes the string name of the platform rather than an enumerated value.

  • Added the environment variable _RB_TOOLCHAIN to the generated PC-lint Plus command line. This is used to determine which compiler indirect file to use in Atmel Studio projects where the AVR platform (which can be either 8 bit or 32 bit) is used. In such cases the toolchain (e.g. 'com.Atmel.AVRGCC32') is unambiguous and can be used instead.

  • Added a PC-lint Plus indirect file (co-rb-as7.lnt) for Atmel Studio 7.x to the installer. This conditionally invokes compiler indirect files for either ARM (32 bit), AVR (8 bit) or AVR (32 bit) compilers based upon the active platform and toolchain.

  • Added additional PC-lint Plus suppression directives to the indirect file lib-rb-win32.lnt supplied within the installer.

Installation:

  • The installer will now ask you to close third party development environments (devenv.exe, atmelstudio.exe, eclipse.exe etc.) before installation can continue only if the corresponding plugin is selected for installation.

  • Removed the PC-lint 8.0 message database.

Configuration:

  • If it has not yet been configured, Visual Lint no longer prompts to run the Configuration Wizard when it is started. Instead, the user is prompted only when they attempt to start manual or background analysis.

Analysis:

  • When a Visual Studio solution is loaded, the version reported is now that corresponding to the highest platform toolset version in the solution. For example, if a Visual Studio 2019 solution contains only Visual Studio 2015 and 2017 projects, it will be reported as being for Visual Studio 2017 for analysis purposes.

User Interface:

  • The elapsed time in the Manual Analysis Dialog is now displayed in minutes and seconds rather than just seconds.

  • Increased the size of the "Active Analysis Tool" and "Options" dialogs.

  • The Message Lookup View can now switch dynamically between message databases for PC-lint and PC-lint Plus.

  • Revised Configuration Wizard page text to differentiate between PC-lint and PC-lint Plus where appropriate.

  • "Project" nodes in the VisualLintGui Projects Display now include details of the active configuration and project type in the same way as the root "Solution" node.

  • The Analysis Status, Statistics, History and Results Displays now indicate the name of the active analysis tool.

Bug Fixes:

  • Fixed a potential crash in the Analysis Results Display.

  • Fixed a bug which prevented PC-lint Plus analysis from being run on an IncrediBuild grid.

  • Fixed a couple of bugs in the Configuration Wizard which affected PC-lint Plus.

  • Fixed a bug in some of the displays which could cause one of the columns to be oversized.

  • PC-lint Plus user defined messages (8000-8999) are now correctly categorised as informational.

  • Fixed a bug which could cause PC-lint Plus command lines containing the obsolete PC-lint +linebuf or +macrobuf directives to be generated if the analysis tool was set to PC-lint but the analysis tool installation folder was overridden to point to a PC-lint Plus installation folder.

Download Visual Lint 7.0.0.307

PowerShell’s Call Operator (&) Arguments with Embedded Spaces and Quotes

Chris Oldwood from The OldWood Thing

I was recently upgrading a PowerShell script that used the v2 nunit-console runner to use the v3 one instead when I ran across a weird issue with PowerShell. I haven’t found a definitive bug report or release note yet to describe the change in behaviour, hence I’m documenting my observation here in the meantime.

When running the script on my desktop machine, which runs Windows 10 and PowerShell v5.x, it worked first time, but when pushing the script to our build server, which was running Windows Server 2012 and PowerShell v4.x, it failed with a weird error that suggested the command line being passed to nunit-console was borked.

Passing Arguments with Spaces

The v3 nunit-console command line takes a “/where” argument which allows you to provide a filter to describe which test cases to run. This is a form of expression and the script’s default filter was essentially this:

cat == Integration && cat != LongRunning

Formatting this as a command line argument it then becomes:

/where:"cat == Integration && cat != LongRunning"

Note that the value for the /where argument contains spaces and therefore needs to be enclosed in double quotes. An alternative of course is to enclose the whole argument in double quotes instead:

"/where:cat == Integration && cat != LongRunning"

or you can try splitting the argument name and value up into two separate arguments:

/where "cat == Integration && cat != LongRunning"

I’ve generally found these command-line argument games unnecessary unless the tool I’m invoking is using some broken or naïve command line parsing library [1]. (In this particular scenario I could have removed the spaces too but if it was a path, like “C:\Program Files\Xxx”, I would not have had that luxury.)

PowerShell Differences

What I discovered was that on PowerShell v4, when an argument has embedded spaces, it appears to ignore the embedded quotes and therefore sticks an extra pair of quotes around the entire argument, which you can see here:

> $where='/where:"cat == Integration"'; & cmd /c echo $where
"/where:"cat == Integration""

…whereas on PowerShell v5 it “notices” that the value with spaces is already correctly quoted and therefore elides the outer pair of double quotes:

> $where='/where:"cat == Integration"'; & cmd /c echo $where
/where:"cat == Integration"

On PowerShell v4, only by removing the spaces (which, as I mentioned above, may not always be possible) can you stop it adding the outer pair of quotes:

> $where='/where:"cat==Integration"'; & cmd /c echo $where
/where:"cat==Integration"

…of course now you don’t need the quotes anymore :o). However, if for some reason you are formatting the string, such as with the -f operator, this approach might still be useful (e.g. you control the value but not the format string).
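
For example (a minimal sketch; the $category variable and the format string are invented for illustration), building a space-free expression with -f keeps PowerShell v4 from adding the outer pair of quotes:

> $category='Integration'; $where='/where:"cat=={0}"' -f $category; & cmd /c echo $where
/where:"cat==Integration"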

I should point out that this doesn’t just affect PowerShell v4; I also tried it on my Vista machine with PowerShell v2 and that exhibited the same behaviour, so my guess is this was “fixed” in v5.

[1] I once worked with an in-house C++ based application framework that completely ignored the standard parser that fed main() and instead re-parsed the arguments, very badly, from the raw string obtained from GetCommandLine().

Using Black-Scholes in software engineering gives a rough lower bound

Derek Jones from The Shape of Code

In the financial world, a call option is a contract that gives the buyer the option (but not the obligation) to purchase an asset, at an agreed price, on an agreed date (from the other party to the contract).

If I think that the price of jelly beans is going to increase, and you disagree, then I might pay you a small amount of money for the right to buy a jar of jelly beans from you, in a month’s time, at today’s price. A month from now, if the price of jelly beans has gone down, I buy a jar from whoever at the lower price, but if the price has gone up, you have to sell me a jar at the previously agreed price.

I’m in the money if the price of jelly beans goes up; you are in the money if the price goes down (I paid you a premium for the right to purchase at what is known as the strike price).

Do you see any parallels with software development here?

Let’s say I have to rush to complete the implementation of some functionality by the end of the week. I might decide to forgo complete testing, or following company coding practices, just to get the code out. At a later date I can decide to pay the time needed to correct my short-cuts; it is possible that the functionality is not used, so the rework is not needed.

This sounds like a call option (you might have thought of technical debt, which is, technically, the incorrect common usage term). I am both the buyer and seller of the contract. As the seller of the call option I receive the premium of saved time, and as the buyer I pay a premium via the potential for things going wrong. Sometime later the seller might pay the price of sorting out the code.

A put option involves the right to sell (rather than buy).

In the financial world, speculators are interested in the optimal pricing of options, i.e., what should the premium, strike price and expiry date be for an asset having a given price volatility?

The Black-Scholes equation answers this question (and won its creators a Nobel prize).
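
For reference, the standard Black-Scholes price of a European call on a non-dividend-paying asset is:

C = S\,N(d_1) - K e^{-rT} N(d_2), \qquad d_1 = \frac{\ln(S/K) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T}

where S is the current asset price, K the strike price, T the time to expiry, r the risk-free interest rate, σ the price volatility, and N the standard normal cumulative distribution function.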

Over the years, various people have noticed similarities between financial options thinking, and various software development activities. In fact people have noticed these similarities in a wide range of engineering activities, not just computing.

The term real options is used for options thinking outside of the financial world. The difference in terminology is important, because financial and engineering assets can have very different characteristics, e.g., financial assets are traded, while many engineering assets are sunk costs (such as drilling a hole in the ground).

I have been regularly encountering uses of the Black-Scholes equation, in my trawl through papers on the economics of software engineering (in some cases a whole PhD thesis). In most cases, the authors have clearly failed to appreciate that certain preconditions need to be met, before the Black-Scholes equation can be applied.

I now treat use of the Black-Scholes equation, in a software engineering paper, as reasonable cause for instant deletion of the pdf.

If you meet somebody talking about the use of Black-Scholes in software engineering, what questions should you ask them to find out whether they are just spouting techno-babble?

  • American options are a better fit for software engineering problems; why are you using Black-Scholes? An American option allows the option to be exercised at any time up to the expiry date, while a European option can only be exercised on the expiry date. The Black-Scholes equation is a solution for European options (no closed-form solution for American options is known). A sensible answer is that use of Black-Scholes provides a rough estimate of the lower bound of the asset value. If they don’t know the difference between American/European options, well…
  • Partially written source code is not a tradable asset; why are you using Black-Scholes? An assumption made in the derivation of the Black-Scholes equation is that the underlying assets are freely tradable, i.e., people can buy/sell them at will. Creating source code is a sunk cost; who would want to buy code that is not working? A sensible answer may be that use of Black-Scholes provides a rough estimate of the lower bound of the asset value (you can debate this point). If they don’t know about the tradable asset requirement, well…
  • How did you estimate the risk-adjusted discount rate? Options involve balancing risks, and getting values out of the Black-Scholes equation requires plugging in values for risk. Possible answers might include the terms replicating portfolio and marketed asset disclaimer (MAD). If they don’t know about risk-adjusted discount rates, well…

If you want to learn more about real options: “Investment under uncertainty” by Dixit and Pindyck is a great read if you understand differential equations, while “Real options” by Copeland and Antikarov contains plenty of hand holding (and you don’t need to know about differential equations).

Get the element index when iterating with an indexed_view

Anthony Williams from Just Software Solutions Blog

One crucial difference between using an index-based for loop and a range-based for loop is that the former allows you to use the index for something other than just identifying the element, whereas the latter does not provide you with access to the index at all.

This difference means that some people are unable to use simple range-based for loops in some cases, because they need the index.

For example, you might be initializing a set of worker threads in a thread pool, and each thread needs to know its own index:

std::vector<std::thread> workers;

void setup_workers(unsigned num_threads){
    workers.resize(num_threads);
    for(unsigned i=0;i<num_threads;++i){
        workers[i]=std::thread(&my_worker_thread_func,i);
    }
}

Even though workers has a fixed size in the loop, we need the loop index to pass to the thread function, so we cannot use range-based for. This requires that we duplicate num_threads, adding the potential for error as we must ensure that it is correctly updated in both places if we ever change it.

jss::indexed_view to the rescue

jss::indexed_view provides a means of obtaining that index with a range-based for loop: it creates a new view range which wraps the original range, where each element holds the loop index, as well as a reference to the element of the original range.

With jss::indexed_view, we can avoid the duplication from the previous example and use the range-based for:

std::vector<std::thread> workers;

void setup_workers(unsigned num_threads){
    workers.resize(num_threads);
    for(auto entry: jss::indexed_view(workers)){
        entry.value=std::thread(&my_worker_thread_func,entry.index);
    }
}

As you can see from this example, the value field is writable: it is a reference to the underlying value if dereferencing the iterator on the source range yields a reference. This allows you to use it to modify the elements in the source range if they are non-const.

jss::indexed_view also works with iterator-based ranges, so if you have a pair of iterators, then you can still use range-based for loops. For example, the following code processes the elements up to the first zero in the supplied vector, or the whole vector if there is no zero.

void foo(std::vector<int> const& v){
    auto end=std::find(v.begin(),v.end(),0);
    for(auto entry: jss::indexed_view(v.begin(),end)){
        process(entry.index,entry.value);
    }
}

Finally, jss::indexed_view can also be used with algorithms that require iterator-based ranges, so our first example could also be written as:

std::vector<std::thread> workers;

void setup_workers(unsigned num_threads){
    workers.resize(num_threads);
    auto view=jss::indexed_view(workers);
    std::for_each(view.begin(),view.end(),[](auto entry){
        entry.value=std::thread(&my_worker_thread_func,entry.index);
    });
}

Final words

Having to use a non-range-based for loop to get the loop index introduces a potential source of error: it is easy to mistype the loop index either in the for-loop header, or when using it to get the indexed element, especially in nested loops.

By using jss::indexed_view to wrap the range, you can eliminate this particular source of error, as well as making it clear that you are iterating across the entire range, and that you need the index.

Get the source from github and use it in your project now.

CI/CD Server Inline Scripts

Chris Oldwood from The OldWood Thing

As you might have already gathered if you’d read my 2014 post “Building the Pipeline - Process Led or Product Led?” I’m very much in favour of developing a build and deployment process locally first, then automating that, rather than clicking buttons in a dedicated CI/CD tool and hoping I can debug it later. I usually end up at least partially scripting builds anyway [1] to save time waiting for the IDE to open [2] when I just need some binaries for a dependency, so it’s not wasted effort.

Inline Scripts

If other teams prefer to configure their build or deployment through a tool’s UI I don’t really have a problem with that if I know I can replay the same steps locally should I need to test something out as the complexity grows. What I do find disturbing though is when some of the tasks use inline scripts to do something non-trivial, like perform the entire deployment. What’s even more disturbing is when that task script is then duplicated across environments and maintained independently.

Versioning

There are various reasons why we use a version control tool, but first and foremost it provides a history, which means we can trace back any changes that have been made, and we have a natural backup should we need to roll back or restore the build server.

Admittedly most half-decent build and deployment tools come with some form of versioning built in, which gives you that safety net. However having that code versioned in a separate tool and repository from the main codebase means that you have to work harder to correlate what version of the system requires what version of the build process. CI/CD tools tend to present you with a fancy UI for looking at the history rather than giving you direct access to, say, its internal git repo. And even then what the tool usually gives you is “what” changed, but it does not also provide the commentary on “why” it was changed. Much of what I wrote in my “Commit Checklist” applies equally to build and deployment scripts as it does to production code.

Although Jenkins isn’t the most polished of tools compared to, say, TeamCity, it is pretty easy to configure one of the 3rd party plugins to yank the configuration files out and check them into the same repo as the source code along with a suitable comment. As a consequence, any time the repo is tagged due to a build being promoted, the Jenkins build configuration gets included for free.

Duplication

My biggest gripe is not with the versioning aspect though, which I believe is pretty important for any non-trivial process, but it’s when the script is manually duplicated across environments. Having no single point of truth, from a logic perspective, is simply asking for trouble. The script will start to drift as subtleties in the environmental differences become enshrined directly in the logic rather than becoming parameterised behaviours.

The tool’s text editor for inline script blocks is usually a simple edit box designed solely for trivial changes; anything more significant is expected to be handled by pasting into a real editor instead. But we all know different people like different editors and so this becomes another unintentional source of difference as tabs and spaces fight for domination.

Fundamentally there should be one common flow of logic that works for every environment. The differences between them should boil down to simple settings, like credentials, or cardinality of resources, e.g. the number of machines in the cluster. Occasionally there may be custom branches in logic, such as the need for a proxy server, but it should be treated as a minor deviation that could apply to any environment, but just happens to only be applicable to, say, one at the moment.

Testability

This naturally leads onto the inherent lack of testability outside of the tool and workflow. It’s even worse if the script makes use of some variable substitution system that the CI/CD tool provides, because that means you have to manually fix up the code before running it outside the tool, or keep running it in the tool and use printf() style debugging by looking at the task’s output.

All script engines I’m aware of accept arguments, so why not run the script as an external script and pass the arguments from the tool in the tried and tested way? This means the tool runs it pretty much the same way you do, except perhaps for some minor environmental differences, like the user account or current working directory, which are all common problems and easily overcome. Most modern scripting languages come with a debugger too, which seems silly to give up.

Of course this doesn’t mean that you have to make every single configuration setting a separate parameter to the script; that would be overly complicated too. Maybe you just provide one parameter which is a settings file for the environment with a bunch of key/value pairs. You can then tweak the settings as appropriate while you test and debug. While idempotence and the ideas behind Desired State Configuration (DSC) are highly desirable, there is no reason we can’t also borrow from the Design for Testability guidebook here too by adding features that make it easier to test.
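
As a minimal sketch (the script, parameter, and setting names here are invented purely for illustration), the deployment logic might live in a normal script that takes the environment’s settings file as its only parameter:

# Deploy.ps1 -- hypothetical example; all names are made up.
param(
    [Parameter(Mandatory=$true)]
    [string]$SettingsFile
)

# Read simple key=value pairs into a hashtable.
$settings = @{}
Get-Content $SettingsFile | Where-Object { $_ -match '=' } | ForEach-Object {
    $key, $value = $_ -split '=', 2
    $settings[$key.Trim()] = $value.Trim()
}

# The same orchestration logic runs for every environment; only the settings differ.
Write-Host "Deploying to $($settings['EnvironmentName']) ($($settings['MachineCount']) machines)"

The inline script in the CI/CD tool then shrinks to a single line such as .\Deploy.ps1 -SettingsFile .\settings\staging.txt, which you can replay and debug locally in exactly the same way.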

Don’t forget that scripting languages often come with unit test frameworks these days too which can allow you to mock out code which has nasty side-effects so you can check your handling and orchestration logic. For example PowerShell has Pester which really helps bring some extra discipline to script development; an area which has historically been tough due to the kinds of side-effects created by executing the code.
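
As a rough illustration (Pester 4-style syntax; the Update-AppSetting function and the file name are invented), a side-effecting helper can be mocked so the logic around it can be checked without touching the file system:

# Hypothetical function under test; in real life it would rewrite part of a
# configuration file on disk.
function Update-AppSetting {
    param([string]$Path, [string]$Key, [string]$Value)
    Set-Content -Path $Path -Value "$Key=$Value"
}

Describe "Update-AppSetting" {
    It "writes the key/value pair to the given file" {
        Mock Set-Content { }    # suppress the real side-effect
        Update-AppSetting -Path 'app.config' -Key 'Timeout' -Value '30'
        Assert-MockCalled Set-Content -Times 1 -ParameterFilter { $Path -eq 'app.config' }
    }
}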

Complexity

When an inline script has grown beyond the point where Hoare suggests “there are obviously no deficiencies”, which is probably anything more than a trivial calculation or invocation of another tool, then it should be decomposed into smaller functional units. Then each of these units can be tested and debugged in isolation and perhaps the inline script then merely contains a couple of lines of orchestration code, which would be trivial to replicate at a REPL / prompt.

For example anything around manipulating configuration files is a perfect candidate for factoring out into a function or child script. It might be less efficient to invoke the same function a few times rather than read and write the file once, but in the grand scheme of things I’d bet it’s marginal in comparison to the rest of the build or deployment process.

Many modern scripting languages have a mechanism for loading some sort of module or library of code. Setting up an internal package manager is a pretty heavyweight option in comparison to publishing a .zip file of scripts but if it helps keep the script complexity under control and provides a versioned repository that can be reliably queried at execution time, then why not go for that instead?

Scripts are Artefacts

It’s easy to see how these things happen. What starts off as a line or two of script code eventually turns into a behemoth before anyone realises it’s not been versioned and there are multiple copies. After all, the deployment requirements historically come up at the end of the journey, after the main investment in the feature has already happened. The pressure is then on to get it live, and build & deployment, like tests, is often just another second class citizen.

The Walking Skeleton came about in part to push back against this attitude and make the build pipeline and tests part and parcel of the whole delivery process; it should not be an afterthought. This means it deserves the same rigour we apply elsewhere in our process.

Personally I like to see everything go through the pipeline, by which I mean that source code, scripts, configuration, etc. all enter the pipeline as versioned inputs and are passed along until the deployed product pops out the other end. The way you build your artefacts is inherently tied to the source code and project configuration that produces it. Configuration, whether it be infrastructure or application settings, is also linked to the version of the tools, scripts, and code which consumes it. It’s more awkward to inject version numbers into scripts, like you do with binaries, but even pushing them through the pipeline in a .zip file with version number in the filename makes a big difference to tracking the “glue”.

Ultimately any piece of the puzzle that directly affects the ability to safely deliver continuous increments of a product needs to be held in high regard and treated with the respect it deserves.

 

[1] See “Cleaning the Workspace” for more about why I don’t trust my IDE to clean up after itself.

[2] I’m sure I could load Visual Studio, etc. in “safe mode” to avoid waiting for all the plug-ins and extensions to initialise but it still seems “wrong” to load an entire IDE just to invoke the same build tool I could invoke almost directly from the command line myself.

Story Generators

Allan Kelly from Allan Kelly Associates

Recently I’ve been looking again at Jobs to be Done and OKRs (Objectives and Key Results). I increasingly see them as story generators and a potential solution to the tyranny of the backlog I described last time.

When I first looked at Jobs to be Done (and OKRs actually) I wondered if they constituted a fourth, top level on top of Epics, Stories and Tasks. I’ve long argued against having more than three levels of things to do (or requirements, as we used to call them). There are big meaningful things to do (stories), really big things which we don’t as yet understand but look really valuable (epics), and the immediate small things to do right now (tasks).

Actually, I’d rather think most things can be dealt with by two levels, and one level is even better. So adding a fourth “even bigger” thing on top of Epics just felt wrong. Technologists (like myself) have a tendency to map everything into hierarchies; inverted trees with fractal like branches. But not everything is, or should be, a hierarchy, and mapping the world into a tree like structure can add complications.

Unlike stories (and epics and tasks), Jobs to be Done don’t really lend themselves to the transactional “Done”. While you could put a Job all the way to Done on your Kanban board and track it from “To do” to “Done”, in reality the customer job still exists. Sure you’ve improved it but you can improve it again – another example of Stable Intermediate Forms. This seems to be the great potential of Jobs to be Done – they keep on giving: as much as you improve your product to help with the job, you can still improve it some more.

So each time you analyse the Job to be Done you should be able to find more stories to deliver to improve it. Hence the Job to be Done is not a “story” to do; it is a Story Generator. Every time you look at the job to be done you find more stories; every time you examine the result of the latest improvement you find more stories. The job will never be done. Some might see that as a bad thing but that also means the job presents a stable focus for ongoing work.

The same might be true of OKRs, but in a slightly different way. Because the objective is reviewed periodically – every quarter or so – it lacks the continuity of Jobs to be Done but perhaps allows the team to switch targets; maybe it is stable enough.

The key results may well be stories in their own right, or they may be things which lead to stories. Either way one can expect some key results to be achieved and marked as done regularly. As they fall they are either replaced by new key results building towards the objective (which themselves lead to stories) or new key results are added for new objectives.

I’m sure there are other story generators out there but the key thing for me is not the mechanism but the existence of the generator. Once you have a story generator you do not need a big backlog of things to do. The generator will replenish the backlog whenever you need more stories – either because you have done them or their value has fallen.

Using a generator removes the need to have a big backlog which removes the tyranny of the backlog. The team are now free(r) to concentrate on delivering value towards their objective.

Finally, I wonder if anyone has used both OKRs and Jobs to be Done together? Right now they feel like alternative generators to me; having both seems a bit like overkill. Although I accept that maybe OKRs are more corporate and Jobs to be Done are more product focused. Anyone got any experience using them together?


Describing software engineering in terms of a traditional science

Derek Jones from The Shape of Code

If you were asked to describe the ‘building stuff’ side of software engineering, by comparing it with one of the traditional sciences, which science would you choose?

I think a lot of people would want to compare it with Physics. Yes, physics envy is not restricted to the softer sciences of humanities and liberal arts. Unlike physics, software engineering is not governed by a handful of simple ‘laws’; it’s a messy collection of stuff.

I used to think that biology had all the necessary important characteristics needed to explain software engineering: evolution (of code and products), species (e.g., of editors), lifespan, and creatures are built from a small set of components (i.e., DNA or language constructs).

Now I’m beginning to think that chemistry has aspects that are a better fit for some important characteristics of software engineering. Chemists can combine atoms of their choosing to create whatever molecule takes their fancy (subject to bonding constraints, a kind of syntax and semantics for chemistry), and the continuing existence of a molecule does not depend on anything outside of itself; biological creatures need to be able to extract some form of nutrient from the environment in which they live (which is also a requirement of commercial software products, but not non-commercial ones). Individuals can create molecules, but creating new creatures (apart from human babies) is still a ways off.

In chemistry and software engineering, it’s all about emergent behaviors (in biology, behavior is just too complicated to reliably say much about). In theory the properties of a molecule can be calculated from the known behavior of its constituent components (e.g., the electrons, protons and neutrons), but the equations are so complicated it’s impractical to do so (apart from the most simple of molecules; new properties of water, two atoms of hydrogen and one of oxygen, are still being discovered); the properties of programs could be deduced from the behavior of their statements, but in practice it’s impractical.

What about the creative aspects of software engineering you ask? Again, chemistry is a much better fit than biology.

What about the craft aspect of software engineering? Again chemistry, or rather, alchemy.

Is there any characteristic that physics shares with software engineering? One that stands out is the ego of some of those involved. Describing, or creating, the universe nourishes large egos.