Further Still On Natural Analogarithms – student

student from thus spake a.k.

For several months now my fellow students and I have been exploring l-space, being the set of infinite-dimensional vectors whose elements are the powers of the prime factors of the roots of rational numbers, which we chanced upon whilst attempting to define a rational-valued logarithmic function for such numbers.
We have seen how we might define functions of roots of rationals employing the magnitude of their associated l-space vectors, and that the iterative computation of such functions may yield cyclical sequences, although we conspicuously failed to figure a tidy mathematical rule governing their lengths.
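For concreteness, here is a sketch of the construction as described above, in my own notation (the series itself may use different symbols): a root of a rational with prime factorisation

x = \prod_i p_i^{a_i}, \quad a_i \in \mathbb{Q}

maps to the vector v(x) = (a_1, a_2, a_3, \dots), with magnitude

|v(x)| = \sqrt{\sum_i a_i^2}

For example, \sqrt{12} = 2^1 \cdot 3^{1/2} yields v(\sqrt{12}) = (1, \tfrac{1}{2}, 0, \dots), whose magnitude is \sqrt{1 + \tfrac{1}{4}} = \tfrac{\sqrt{5}}{2}.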
The magnitude is not the only operation of linear algebra that we might bring to bear upon such roots, however, and we have lately busied ourselves investigating another.

New design, for improved readability!

Anders Schau Knatten from C++ on a Friday

Hi, and welcome to a new and much cleaner “C++ on a Friday”! The old theme used fonts that were too small and had a bit too much “design” going on, making it distracting to read.

The new theme has larger fonts, good responsiveness, a nice colour scheme, and is overall much more pleasant to read. Enjoy the improved reading experience! :)


A 1948 viewpoint on developer vs. computer time

Derek Jones from The Shape of Code

For a long time now developer time has been a lot more expensive than computer time. The idea that developers should organize what they do, so as to maximize the efficiency of computer time rather than their own time, is considered to be an echo from a bygone age.

Until recently, I thought the transition from this bygone age, when computer time was considered more important than developer time, started in the late 1960s. Don’t ask me why I thought this, put it down to personal bias.

I was recently reading A Survey of Eniac Operations and Problems: 1946-1952, published in 1952, and what did I find:

“Early in 1948, R. F. Clippinger and some of his associates, in the course of coding the solution of …, were forced to adopt a different method of using the Eniac in order to fit their problem on the machine. …. The experience with this method (first discussed in reference 1), led J. von Neumann to suggest the use of a serial code for control of the Eniac. Such a code was devised and employed with the Eniac beginning in March 1948. Operation of the Eniac with this code was several times slower than either the original method of direct programming or the code for parallel operation. However, the resulting simplification of coding techniques and other advantages far outweighed this disadvantage.”

In other words, in 1948, the people using one of the few computers in the world, which clocked at 100 kHz, considered developer time to be more important than computer time.

How should we organize our teams?

Allan Kelly from Allan Kelly Associates

[Figure: Starting point – the teams and their dependencies]

Q1: How should we organize our teams?
My team owns several trading platforms and the core services around them. But we depend heavily on other products (e.g. financial feeds, client identification, services to send orders to stock markets, etc.). And of course each of the teams managing these services has other platforms as clients.

When Vasco Duarte and I ran the #NoEstimates/#NoProjects workshop (or #NoNoWorkshop as I think of it) in Switzerland last month the attendees asked some good questions. With Project Myopia done and published, and Continuous Digital almost done, it seems like a good time to repeat, and elaborate on, the answers publicly. This will take a few blog posts to work through.

(I now have several Continuous Digital workshops and briefings available; please let me know what you think. Vasco and I are looking at repeating the workshop in London later this year; please get in touch if you are interested.)

The picture above is the way I see the question; if you have another interpretation, or another scenario, please let me know.

The Continuous Digital model is for stable, long-standing, autonomous, value-seeking teams staffed with all the skills they need. Much of my thinking derives from Amoeba Management. Importantly, each team needs to see how it adds value. In this case the business-facing teams can see this – they enable the business to make money. But the back-office teams find it hard to see how they add value.

Now there are several possible answers to this question, most of which involve some sort of re-organization.

Option 1: Share the value

This solution does not involve reorganisation and comes straight from the pages of Amoeba Management: allocate some portion of the value earned by the business-facing teams to the teams they depend on. For example, the Trading Platforms team might generate $10m each year. It could not do this without the services of the other three teams. Therefore some portion of the Trading team’s earned value is passed to those teams.

Think about it: Trading Platforms effectively buys the services of three other teams. If those teams did not exist, Trading Platforms would need to do that work itself. Therefore those teams are contributing and deserve some credit.

This requires a serious conversation and probably needs more senior managers to intervene. Indeed, in Amoeba Management, Kazuo Inamori says that such decisions were among the most difficult ones facing Kyocera and often required more senior managers to make the final decision.

Nor is it always clear who buys from whom: does a Sales Amoeba earn the value and pass part of it to the Manufacturing Amoeba which builds the product? Or does the Manufacturing Amoeba hire the Sales Amoeba to get its product to customers, and therefore book the revenue and pass some to Sales?

In the case above one might find it better to consider the value of the whole trading team including both the traders and the programmers who make the platform. Or perhaps the traders rent the platform from the technologists.

According to Inamori, Kyocera sets standing allocations between teams. Alternatively one might create an internal market in which teams bought services from others on a piecemeal basis. On the one hand I like that model because it would allow for negotiation and trade-offs. On the other hand I imagine it creating a whole new layer of bureaucracy, politics and internal sales. On balance, I’d fix the allocations and review them periodically.
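To make the arithmetic concrete, here is a minimal sketch of a fixed standing allocation in Python; the team names and percentage shares are hypothetical, invented for illustration rather than taken from Amoeba Management or the question above.

# Hypothetical standing allocations: each supporting team receives a
# fixed share of the value earned by the business-facing team.
earned_value = 10_000_000  # the Trading Platforms team's annual value ($)

allocations = {  # shares are invented for illustration
    "Financial feeds": 0.10,
    "Client identification": 0.05,
    "Order routing": 0.10,
}

# Pay each supporting team its share of the earned value.
for team, share in allocations.items():
    print(f"{team} receives: ${share * earned_value:,.0f}")

# Whatever is left stays with the business-facing team.
retained = earned_value * (1 - sum(allocations.values()))
print(f"Trading Platforms retains: ${retained:,.0f}")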

Option 2: Vertical slice

If you look at the picture above you might replace the word “team” with “library” or “services” and you would have a module dependency chart. Conway’s Law is at work – the organization and system reflect each other. (Although without knowing the history here it is difficult to say whether this was Conway’s Law or Reverse Conway’s Law at work.)

The services can stay as they are but we just disband the back-office teams and pass their responsibilities to the (enlarged) business teams.

[Figure: Vertical slice]

The three teams will need to co-operate and co-ordinate with each other as they now have shared responsibilities. This itself can be a problem – two developers changing the same code anyone? But the world has moved on. Technology has improved.

In the days of SCCS, Visual Source-Unsafe, manual testing and monthly deployments it was a pain to have two teams working on the same code. But distributed source code control, automated testing and continuous delivery make this option far more viable than it once was.

On the plus side each team can work at their own pace on their own priorities and knowledge is spread around. On the downside teams can still trip up each other, they may duplicate work and specialist knowledge can get lost. (Note I am not saying “nobody has overall design authority” is a downside because while a single Linus can be an advantage it can also be a liability.)

One more problem here: this solution directly breaks Conway’s Law. In theory it could work, but quite possibly the homomorphic force behind Conway’s Law might reassert itself. This might create some problems further down the line, so it needs monitoring.

Option 3: Independence

Taking option 2 to the extreme you might even separate the teams completely. Again there are pluses and minuses.

[Figure: Independence]

On the one hand the teams are completely independent: they can move at their own pace with their own priorities, value is clearly attributed, there is now resilience in the system, and risk is reduced.

However, there is duplication. Not only does this mean more work, it means there may be inconsistencies: a client recognised by Trading might not be recognised by Yet Another.

Both options 2 and 3 demand larger teams, and this option might require more people overall. One can’t be sure, because teams might come up with innovative solutions or some new mechanism for sharing.

I’m sure some readers will discount this option very quickly but there are big benefits to complete independence – particularly when teams are separated geographically (e.g. Trading in London, Some Other in Frankfurt and Yet Another in Singapore) or when they are addressing different markets. One of the dangers of shared modules is that they become bloated by generic features nobody really wants but someone has to pay for.

This approach might also be advantageous when the company is in a growth and innovation mode. Let each team grow as fast as they can and innovate. In time a “winner” might emerge or common elements appear naturally.

Another variation on option 3 would be to have one team take the lead. Say Trading: this would be a larger team which developed the shared services as part of its business-facing work. But it would not “genericise” those services. The other, smaller teams would do what they needed, when they needed, to service their own value streams.


Those are three options. I could come up with more; none is perfect. The important things are:

  • Create a clear way for teams to see the effects of their work and share in the value.
  • Allow teams autonomy in decision making and reduce dependencies.
  • Keep it simple so everyone can see cause and effect.
  • And of course, keep the teams stable – don’t break them up.

If you have any questions about Continuous Digital and #NoProjects please mail them over and I’ll do my best to answer them in this blog.


Major players in evidence-based software engineering

Derek Jones from The Shape of Code

Who are the major players in evidence-based software engineering?

How might ‘majorness’ of players be calculated? For me, the amount of interesting software engineering data they have made publicly available is the crucial factor. Any data published in a book, paper or report is enough to be considered interesting. How interesting is data published on a web page? This is a tough question; let’s dodge it to start with, and consider the decades before 2000.

In the academic world, performance is based on the number of papers published, the impact factor of the venues where they were published, and the number of citations those papers receive. This skews the results in favor of those with lots of students (who tack their advisor’s name on the end of papers published) and those who are good at marketing.

Historians of computing have primarily focused on the evolution of hardware and are slowly moving to discuss software (perhaps because microcomputers have wiped out nearly every hardware vendor). So we will have to wait perhaps a decade or two for a tentative/definitive historian answer.

The 1950s

Computers and Automation is a criminally underused resource (a couple of PhDs’ worth of primary data here). A lot of the data is hardware related, but software gets a lot more than a passing mention.

The US military published lots of hardware data, but software did not get mentioned much.

The 1960s

Computers and Automation is still publishing.

The US military is still publishing data; again, mostly hardware related.

Datamation, a weekly news magazine, published a lot of substantial material on the software and hardware ecosystems as they evolved.

Kenneth Knight’s analysis of computer performance is an example of the kind of data analysis that many people undertook for hardware, which was rarely done for software.

The 1970s

The US military are still leading the way; we are in the time of Rome. Air Force officers studying for a Master’s degree publish more software engineering data than all academics combined over this and the next two decades.

“Data processing technology and economics” by Montgomery Phister is 720 A4 pages packed with graphs and tables of numbers. Despite it citing earlier sources, it has become the primary source for a lot of subsequent researchers; this is understandable in a pre-internet age. Now we have Bitsavers and the Internet Archive, and the cited primary sources can be downloaded.

NASA is surprisingly low volume.

The 1980s

Rome falls (i.e., the work gets outsourced to a university) and the false prophets (i.e., academics doing non-evidence-based work) multiply and prosper. There are hushed references to trouble makers performing unclean acts (experiments) in the wilderness.

A few people keep working in the wilderness, and the quantity of data being produced drops by at least an order of magnitude.

The 1990s

Enough time has passed for people to be able to refer to the wisdom of the ancients.

There are still people in the wilderness howling at the moon, and performing unclean acts (experiments).

The 2000s

Repositories of open source code and bug reports grow and prosper. Evidence-based software engineering research starts to become mainstream.

There are now groups of people doing software engineering research.

What about individuals as major players? A vaguely scientific way of rating an individual’s impact on evidence-based software engineering is to count the number of papers they have published that are cited by a book claiming to discuss all the important/interesting publicly available software engineering data (code+data).

The 1,521 papers cited by such a book had 3,716 authors, of whom 3,095 were distinct. The authors who appeared most often are listed below (count on the right; yes, at number 2 is a theoretician; I have cited myself nine times, but two of those are to web sites hosting data).

Magne Jorgensen 17
Anne Chao 11
Dag I. K. Sjoberg 10
Massimiliano Di Penta 10
Ahmed E. Hassan 8
Christian Bird 8
Stanislas Dehaene 8
Giuliano Antoniol 7
Thomas Zimmermann 7
Alexander Serebrenik 6
Dror G. Feitelson 6
Gregorio Robles 6
Krzysztof Czarnecki 6
Lutz Prechelt 6
Victor R. Basili 6
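
As an aside, a tally like the one above is straightforward to compute. A minimal sketch in Python, assuming the book’s citations have been extracted into per-paper author lists (the sample data here is hypothetical):

from collections import Counter

# Each inner list holds the authors of one cited paper; in practice these
# would be parsed from the book's bibliography.
papers = [
    ["Magne Jorgensen"],
    ["Magne Jorgensen", "Dag I. K. Sjoberg"],
    ["Anne Chao"],
]

# Count how many cited papers each author appears on.
author_counts = Counter(author for paper in papers for author in paper)

for author, count in author_counts.most_common(15):
    print(author, count)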

The number of authors/papers follows the usual pattern of many people writing one paper.

[Figure: Number of evidence-based papers written by an author]

Who might I have missed? The business school researchers don’t get a mention because their data is often covered by a confidentiality agreement. The machine learning crowd are just embarrassing.

Suggestions for major players welcome.

Direct access to SonarQube Postgresql Database

Tim Pizey from Tim Pizey

I want to change the name of a SonarQube project. Normally this cannot be done without performing another analysis, but you can do it directly in SQL (see https://stackoverflow.com/questions/30511849/how-to-rename-a-project-in-sonarqube-5-1), provided you can log in to the database. PostgreSQL is very secure by default. A quick fix is to edit /var/lib/pgsql/pg_hba.conf and change local connections from ident to trust (reload PostgreSQL afterwards for the change to take effect):

# TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD

# "local" is for Unix domain socket connections only
local   all       all                 trust
# IPv4 local connections:
host    all       all   127.0.0.1/32  trust
# IPv6 local connections:
host    all       all   ::1/128       trust

Now you can connect:

psql -U sonarqube -W sonar
and finally:

UPDATE projects
SET name = 'NEW_PROJECT_NAME',
    long_name = 'NEW_PROJECT_NAME'
WHERE kee = 'PROJECT_KEY';
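
If the renamed project does not show up immediately in the web interface, the old name may be cached; restarting SonarQube should pick up the change (I have not verified this).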

My experience upgrading to Elm 0.19

Andy Balaam from Andy Balaam's Blog

Elm is unstable, so upgrading to the next version can be painful. Here’s what I needed to do to upgrade from 0.18 to 0.19.

  • Replace elm-package.json and tests/elm-package.json with elm.json – e06f5a1728
  • Switch to the new elm-test – b964b7c7a
  • Re-arrange Main, and how we call it from JavaScript – 0c118c49f
  • Stop using eeue56/elm-all-dict (since it’s not ported to 0.19 and porting it looked hard due to a lack of Debug.crash) – fe100f256
  • Replace toString with String.fromX or Debug.toString – 9e78163d0a3
  • Stop “shadowing” names by making new variables with the same name as another in the scope – 9688a621de
  • Adapt to the changed Html.style function – b991ab4f
  • Stop using Debug.crash – f98a70ad1
  • Adapt to the changes in the Regex module – 856762a4
  • Stop using tuples with more than 3 parts – 472c0bb7

The lack of Debug.crash is really, really painful, especially for a library like eeue56/elm-all-dict that has lots of invariants that are hard or impossible to enforce via the type system. On the other hand, if Elm can give a hard guarantee that there will be no runtime errors, this seems pretty cool. The problem is that some code may well have to return the wrong answer silently, instead of crashing, which could be much worse than crashing in some use-cases.

I was annoyed by the lack of more-than-3-part tuples, but even as I did the work to change my code, I saw it get better, so it’s hard to argue with.

The hardest part to work out was how to run the tests. Fortunately the tests themselves needed almost no changes. I just needed to do this:

rm -r tests/elm-stuff
rm tests/elm-package.json
sudo npm install -g elm-test@0.19.0-beta8
elm-test install
elm-test

My next job is to check out the --optimize compiler flag, and the advice on making the code smaller and faster.

Visual Lint 6.5.4.298 has been released

Products, the Universe and Everything from Products, the Universe and Everything

This is a recommended maintenance update for Visual Lint 6.5. The following changes are included:

  • If a "Lint" folder without the hidden attribute exists in a solution/workspace folder Visual Lint will no longer attempt to use it to store analysis results and will create a new ".visuallint" folder instead. This prevents Visual Lint from assuming that a user-created "Lint" folder is one which was created by an earlier (pre-v5.0) version of Visual Lint.
  • Fixed a crash which could occur when files were saved in the Eclipse plug-in. The crash seemed to particularly affect plug-in installations running within Texas Instruments Code Composer Studio and configured for per-project analysis with the "Re-analyse saved files using the preferred method" option set.
  • The project variables $(CEVER), $(ARCHFAM) and $(_ARCHFAM_) are now automatically defined when analysing Visual Studio 2008 projects for the NetDCU9 (ARMV4I) platform.
  • Corrected the "Supported development environments" help topic to reflect the fact that Atmel AVR Studio 5 and Atmel Studio 6.x/7.x are now supported via a dedicated plug-in.
  • Updated the PC-lint Plus message database to reflect changes in PC-lint Plus 1.2. Note that the definitions for Clang errors 5905, 5916 and 5922 have been omitted as the PC-lint Plus -dump_messages directive does not reveal either their titles or descriptions.

Download Visual Lint 6.5.4.298