Software/Hardware business models


In the PC age, when most of us came into contact with computers for the first time, the notion arose that hardware and software are separate from each other. One would purchase a computer, install an operating system, and run some applications. This might lead to the idea that hardware and software are two separate worlds. However, this was not, and is not, always a valid assumption.
In fact, the first digital computers, like the German Zuse Z3, were built purely in hardware. Operations such as addition and multiplication were hard-wired into electrical circuits. This approach allowed for very efficient computation but was not very flexible. Only the advent of programmable microcomputers gave programmers the opportunity to write code without having to worry too much about the underlying hardware. Only when operations were too slow would one consider a dedicated chip for a certain kind of calculation, for example the Fast Fourier Transform (FFT). Recently, Application-Specific Integrated Circuits (ASICs) have gained momentum, especially in the areas of encryption and communications. On the other hand, there is now also hardware that can be programmed directly to achieve the highest performance; Field-Programmable Gate Arrays (FPGAs) are a prominent example. This shows that a clear line between hardware and software cannot be drawn.

In the early 1980s, when IBM released its first personal computer (PC), Bill Gates sold IBM a non-exclusive license for Microsoft's disk operating system, reserving the right to sell MS-DOS on his own. This clever strategy eventually fueled the PC revolution and lowered the market entry barrier for hardware makers selling IBM-compatible PCs. Most consumers who could not afford an original IBM PC bought cheaper clones instead, and MS-DOS and later MS-Windows became the de facto standard. Since the price of the software was always the same, the fight over hardware prices became a "race to the bottom". This pattern can be observed in virtually all price-driven markets.

Other companies decide to offer complete solutions, building the hardware as well as the software on their own. In many cases this helps to assure high quality standards and a smooth interaction between hardware and software. These advantages pay off best when it comes to performance optimization and energy saving, perhaps the most crucial aspects of mobile and wearable computing. This is why Apple has such an incredible advantage and sets a very high market entry barrier in the premium segment. Even the tech giant HP, with its webOS approach, decided not to pick a fight with its neighbors in Cupertino. Similar to the PC age, another "race to the bottom" is going on right now in the market for cheap mobile consumer devices, where Google and Samsung dominate.

What lessons can be learned? In general, technology companies have two ways to go: either put a lot of effort into creating a complete solution, delivering the whole technology stack, or focus on just one aspect and contribute to an existing ecosystem. The first approach is very hard but may pay off in the long term. The second approach is much easier but makes a company replaceable. There are still many waves to come in the industry, and judging from the past, the paradigms may well stay the same. It remains to be seen which way each company will choose.