Choose The Right Tools For Your Software Project Every Time, Guaranteed

Matthew Reynolds
6 min read · Jul 11, 2019

With a billion-and-one open source projects out there, how do you choose the right software platform and tools to use for a project?

A recent article of mine, "Will We Ever See Anything Like .NET Again?", got a decent number of hits. One question it raises is this: if we're not choosing .NET for a project, what should we choose?

The problem we have today is that rather than having two dominant “ivory tower” vendors, we have a billion-and-one smaller open source projects all vying for attention. It’s become much harder to make the “right” choice — and indeed even defining what value of “right” you’re aiming for is not a trivial decision.

An investment that an individual makes in a technology is significant. Whilst a good developer can pick up and work with anything, there is an obvious advantage that comes from familiarity, and that familiarity compounds over time. For example, I've been using .NET almost daily since the beta in 2001. By one back-of-a-napkin calculation (call it six hours a day, 250 working days a year, for eighteen years), that gives me around 27,000 hours of use of C# and .NET.
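For what it's worth, the napkin maths is simple enough to script. This is a minimal sketch; the hours-per-day and days-per-year figures are my assumptions rather than anything from a time log:

```python
# Back-of-a-napkin tally of accumulated tool familiarity.
# The usage figures below are assumptions, not measured data.
YEARS = 2019 - 2001   # .NET beta to time of writing
DAYS_PER_YEAR = 250   # "almost daily", counting working days
HOURS_PER_DAY = 6     # hands-on hours in the editor

hours = YEARS * DAYS_PER_YEAR * HOURS_PER_DAY
print(f"{hours:,} hours of C# and .NET")  # -> 27,000 hours
```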

What that means is that when I open up an editor and "File > New" a project, if I do it in C# and .NET I can get straight down to work without struggling and, importantly, without making any rookie mistakes.

The point is that this is true of all of us: we all have "go to" tools that we use to do our work, and if we use tools that are novel to us rather than ones we have established, by definition we will not get results that are as good.

From an individual's perspective, this means that although we will always seek to gain proficiency in any tool we are given, there is a path of least resistance towards the tools we used on our last project. Over time this compounds, and although ours is a discipline where learning is vital, it suggests that any choice we make is a long-term choice. An individual's bias, therefore, is towards familiarity.

From the organisation's perspective, the bias is towards maintainability. Any project represents an investment, and any organisation will look to realise the benefits of that investment over a long period. Once the initial development is done, software projects switch into maintenance mode, and it is well known that major changes cannot happen at that point. You are stuck with whatever decisions you made on day one.

Whilst these two perspectives have different biases, the actual decision a) is a risk-mitigation decision, and b) will have a long-term effect. What you choose today in 2019 will have a personal or organisational effect until at least 2024, and possibly even 2029.

However, what the market offers doesn't operate on those sorts of timescales. By way of example, in 2014 I started a project in AngularJS (as opposed to Angular), and at the time it was a reasonable choice. Five years later, I would never start a project based on AngularJS because it has been superseded. But I still have a project based on AngularJS, and five years' experience of building with AngularJS. The choice I made in 2014 has been embedded, and its value started dropping off the moment I made it.

What we need, then, is a method that allows us to make an appropriate choice given that a) there is far more choice for each component, and b) the lifetimes of these new tools are very short. So short, in fact, that a choice we make for Project A likely won't make sense for Project B. What we need to do is bias our decision-making so that we are more likely to choose a tool whose lifetime tends towards ten years rather than one whose lifetime tends towards five.

What we’re actually seeing in software tool markets are network effects playing out. (It’s still a market, even if there’s no money — attention replaces the value from money in our case.) Network effects work by lowering the “cost” of entry to subsequent users, relative to the benefits of being within the network. Buying a smartphone — whichever platform — gives you access to a profoundly powerful network for a cost that is practical zero compared the benefit of being within that network. The same applies to software tools. Once the mass of users/developers gets to a certain size, it becomes very difficult to disrupt that network. What we see happen with networks generally is that a networks value can precipitously drop to zero when a new network with greater benefits becomes available. For reference, see fax machines to email, BlackBerry to iPhone, etc.

What we are looking to do is make choices that represent investment in networks that are going to grow and remain safe against competition from rival networks; in other words, we need to choose tools whose networks have greater longevity by way of resilience to disruption. For example, if you choose Vue.js today (an "also-ran" behind React and Angular), what you're really betting on is that one of React or Angular dies off and Vue.js takes its place. I can say with some confidence that I don't think that will happen. Vue.js is an also-ran, and will always be an also-ran. It is therefore not a safe choice from either an individual or an organisational perspective.

The reason I say this is the Pareto Principle, well known as the 80/20 rule: in life we tend to see this split play out in all sorts of arenas. One place we commonly see it is market adoption: there tend to be two big players, with everyone else scrabbling around for the scraps. There is always a Coke and a Pepsi, plus some also-rans. The same goes for iOS and Android, McDonald's and Burger King, Windows and Linux, Intel and AMD, and so on. Pretty much any market has a dominant brand and some other large brand snapping at its heels. That plays out in exactly the same way in the market for software tools.

If we think about React vs Angular vs Vue.js, two of those will own 80% of the market, and all the others will share the remaining 20%. We know in this specific case that React and Angular are the two dominant players, and if we were to do a quantitative analysis, we'd expect to see them sharing 80% of the market. That is most likely going to be true of every decision we make up and down the stack. This is why I can say with confidence that Vue.js will only get the sort of adoption we see with React or Angular if one of those two dies off. The market is unlikely to split into thirds; we are much more likely to see a Pareto-balanced market for anything we choose. Nor will we see one of React or Angular go away on its own, given the mass they have now. If we see anything, we'll see them both die off, replaced by a new technique, i.e. a new "network" that affects them both. What that is, who can say.
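As a crude version of that quantitative analysis (my sketch, not the author's method), npm's public download-counts API lets you eyeball the split. Downloads are only a rough proxy for adoption, so treat the percentages as indicative:

```python
# Rough adoption check via npm's public download-counts API
# (https://api.npmjs.org). Downloads are a crude proxy for
# adoption, but they make the Pareto split easy to see.
import json
from urllib.request import urlopen

PACKAGES = ["react", "@angular/core", "vue"]

counts = {}
for pkg in PACKAGES:
    url = f"https://api.npmjs.org/downloads/point/last-month/{pkg}"
    with urlopen(url) as resp:
        counts[pkg] = json.load(resp)["downloads"]

total = sum(counts.values())
for pkg, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{pkg:<14} {n:>12,}  {n / total:.0%} of the three")
```

Whatever the exact numbers on the day you run it, the shape to look for is the 80/20 split described above.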

This makes some of the decisions we face very easy. If we need to choose a tool, we just need to make an assessment of the market and decide which of the two dominant players we want to side with. Over the long term, the bigger risk is not picking the wrong one of the pair; it is that disruption nullifies the benefit of both dominant players at once. There was a time when renting dedicated servers was the de facto choice for hosting, and the dominant player in that market was Rackspace. Choosing Rackspace was therefore a safe choice. Today, that same customer is more likely to be weighing a choice between Amazon AWS and Microsoft Azure, and you are unlikely to make a bad choice if you flip a coin and say "heads it's AWS, tails it's Azure". Whatever the market, you are unlikely to make a "bad" choice if you simply pick one or the other of the dominant pair.

This method can go a long way towards helping you choose between the dominant players in a market. The next problem is knowing which players are the dominant ones when you can't tell by intuition. I'll cover that in a later article.

Written by Matthew Reynolds

I help non-technology people build technology businesses. Check out my course at www.FractionalMatt.com/course
