MSc-IT Study Material
June 2010 Edition

Computer Science Department, University of Cape Town

The Productivity Paradox

Until 1973 productivity in the United States increased steadily, and during the 1950s it increased quite dramatically. Since 1973, however, through the 1980s and into the 1990s, US productivity has stopped increasing at such a rate; it has still grown, but far less dramatically. This has been precisely the period during which industry has invested heavily in information technology.

Apparently the economic measures are telling us that implementing information technology has been worthless: it has hardly affected industry's ability to improve its productivity at all. Indeed, some of the more pessimistic studies claim that productivity has actually decreased with the implementation of information technology.

Cause and effect

Care needs to be taken with this paradox, though. Just because productivity flattened out at the same time as the implementation of information technology got under way does not imply cause and effect. The concurrence of the two events may simply be coincidence: the flattening in productivity may have been caused by quite different factors, and the implementation of IT systems may even have mitigated the effect.

Landauer (1995) traces this argument through, looking at several 'econometric' models which try to assign cause and effect to the flattening of productivity. The details are very complex, but the procedure is as follows: an estimate is made of what productivity is expected to be, the difference between expected and actual productivity is measured, and candidates (such as crime, pollution, etc.) are suggested that might account for the shortfall.

Landauer argues that total US output has been 1.5% below expectations since 1973, and that the 'usual suspects' cannot be made to explain this shortfall. 1.5% equates to roughly $30 billion per year of 'missing' productivity. IT investments yielded 13.3% less than expected, which, over an average investment of $225 billion, also comes to about $30 billion. Landauer suggests that this is the missing $30 billion. The argument is compelling, but it is not a proof that IT investment caused the productivity paradox.
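The back-of-the-envelope arithmetic behind this claim can be checked directly. The figures below are the ones quoted above; the script is only an illustration of the calculation, not part of Landauer's own analysis:

```python
# Checking Landauer's 'missing' productivity figure.
# Values quoted in the text above; all amounts in US dollars.

it_investment = 225e9    # average annual IT investment: $225 billion
shortfall_rate = 0.133   # IT returns 13.3% below expectations

missing_output = it_investment * shortfall_rate
print(f"${missing_output / 1e9:.1f} billion per year")  # prints "$29.9 billion per year"
```

The product is $29.9 billion, which rounds to the $30 billion per year of 'missing' productivity that Landauer matches against the 1.5% output shortfall.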

One should always be very careful with such arguments and hold in mind the famous maxim attributed to Benjamin Disraeli: 'There are three kinds of lies: lies, damned lies and statistics.'

An American phenomenon

The productivity paradox seems to be largely an American phenomenon; it reveals itself clearly in the American economic figures, but is less obvious in the figures of other industrialised countries.

America has shown the greatest growth in the industrialised world over the last fifty years, not least because, compared with other countries, it did not suffer the wholesale devastation of industry and work force caused by the Second World War. Europe's and Japan's economies were shattered by the war, and much of the late 1940s and 1950s was spent recovering. The war altered their economic figures so thoroughly that trying to 'factor out' its effect (i.e. trying to calculate how Europe and Japan would have fared economically had the war not happened) is impossible.

It is therefore not really possible to compare America's economic figures with those of the rest of the industrialised world in a substantive way. The productivity paradox is present in other countries' economic figures, but not in such a pronounced way as it is in America's.

Computerisation versus information technology

We need to be clear about what we mean by ‘information technology'.

'Computerisation', meaning the complete replacement of a low-skilled job with automated machinery, has greatly improved productivity.

An anecdotal example:

I worked as a bank clerk in the 1980s in a major UK bank, having followed in the footsteps of my father, who started work as a clerk in the 1950s. When I started work the process of bookkeeping was almost entirely automated: the calculation of credits and debits was performed by a centralised computer system.

Thirty years earlier the bank employed a great many people simply to calculate customers' balances and maintain their account statements. When a debit or credit was received for a customer's account, the appropriate account statement would be removed from a drawer, a clerk would write the details of the new debit or credit onto the statement (in the clerk's best handwriting), and the statement would be refiled. A myriad of other ledgers and totals were maintained by hand and accrued at the end of the day to ensure that all the balances agreed and no errors had been made. If errors were found, an extremely arduous process of checking all the day's calculations was needed to find and rectify them.

This process was dull, mind-numbing, and required very great precision from the workers: qualities that are not naturally human, but qualities to which computers are very well suited.

In the 1960s and 70s this process was automated with considerable success, so much so that maintaining a bank account became so cheap for banks that they were able to offer free banking to personal customers, and the percentage of the population with bank accounts exploded. Whereas my father spent most of his time in a back office balancing ledgers, I spent most of mine liaising with customers: discussing financial packages with them, sorting out problems they might have, or simply discussing the weather. I was able to add considerably more value for the bank and its customers than my father could have, even though, all in all, we were paid roughly the same. Hence a large increase in productivity for the bank.

Interestingly, however, the banks' subsequent moves into automatic teller machines, phone banking and on-line banking mean that human liaison with customers has been taken almost completely out of the system. My wages are paid directly into my account, I pay for goods by debit card or by withdrawing cash from a hole-in-the-wall machine, and I query my balance by liaising with a computerised voice on a phone line. I cannot remember the last time I actually spoke to a bank clerk.

The banks successfully replaced their accounting functionality with computers, and this freed up staff to improve communications with customers. More recent moves, however, have tried to replace those communications themselves with automated systems. Automated systems are not good at communicating with people (not nearly as good as people are, anyway), and it could therefore be predicted that these recent moves to automate banking services will show nowhere near the productivity gains that automating the bookkeeping did.

This brings us back to the productivity paradox. Computers are very good at replacing people when the jobs that people do can be captured mathematically and are dull and repetitious; but when computers merely augment a job, with IT used as a tool to help users do their work, the benefits are much more ambiguous. The productivity paradox arises when IT systems that require a large degree of input from their user population are implemented. Hence the theory that the productivity paradox is caused by giving inappropriately little consideration to users in the design of IT systems. We will return to this later.

Refutations of the productivity paradox

The productivity paradox has exercised the economics and computer science communities considerably, and there have been many arguments advanced that the productivity paradox is simply a statistical mirage.

Productivity, as a measure of how well an industry is doing, may be inappropriate for modern information technologies. Even though productivity has not increased, other measures of how well companies are doing have shown considerable gains. IT has improved how industry works, but economists may be looking in the wrong places for the signs of that improvement.

Productivity as a measure of industrial, not IT success?

Productivity is a measure of industrial success, and it has been argued that it is no more than that: a measure that is useful when a production system has tangible inputs (raw materials, labour and energy) and tangible outputs (services that can be sold at an explicit price, or widgets that can be put in boxes and shipped to customers). Systems with a considerable IT component typically generate products and advantages that are much harder to quantify than numbers of widgets in boxes.

There has been much recent comment on the emergence of a 'knowledge economy', where the main product of a system is an improvement in knowledge for the system's consumers. Information technology is about storing information, moving that information around and, most crucially, translating information from one form to another. A successful IT system will gather raw data, process that data into information, and deliver that information to consumers in a timely manner and useful format so that the consumers' knowledge is improved. Much play is made of the fluidity of such systems: out-of-date information may be worse than useless, and the window during which information is useful is narrowing all the time.

Until recently stock market prices were reported daily in the financial press, and this daily update of information was considered adequate for most investors. Now technology delivers stock prices to our desktop computers, or even mobile phones, accurate to the minute, and the daily update provided by newspapers seems archaic.

Technology can offer these apparent benefits, but they are very difficult to quantify; they are likely to be beyond the scope of equations for calculating industrial productivity.

New technology linked with new management procedures?

Table 2.1.  The effect of new management structures and IT on productivity. (From Brynjolfsson and Hitt, 1998)

                                        Little investment in IT          High investment in IT
Considerable management restructuring   Small increase in productivity   Large increase in productivity
Low management restructuring            No effect on productivity        Decrease in productivity


Furthermore, the implementation of IT systems may need to be linked with new systems of management within companies. Old industrial companies tended to have very hierarchical structures: major decisions were made at the top of the hierarchy and fed down through layers of middle management, each with less decision-making responsibility, to an unskilled work force who simply did what they were told.

Computerisation has to a large extent removed unskilled workers, and information technology has given the skilled work force access to the information necessary to make more responsible decisions. Modern company structures therefore tend to be much 'flatter', with executive decisions dispersed among employees who take much more responsibility for themselves and their actions.

This flatter management structure is well suited to IT systems, and there is evidence that if IT systems and flatter management structures are implemented at the same time, companies see the sort of productivity gains that should be expected.

If, however, new management structures are implemented without IT systems, productivity remains quite flat; whereas if new IT is implemented without flatter management structures, productivity actually decreases (see Table 2.1).

This effect may be crucial to the productivity paradox; there are productivity gains in implementing IT systems, but those gains may be obscured by old and outdated management practices and structures.

Management structures may be much more difficult to change than IT systems are to implement; management is essentially about where power is placed, and who holds responsibility for it, within an organisation. Flattening the management structure means dispersing that power. To implement a flattened management structure, the executive must decide to disperse their own power: in effect, to become less powerful in an organisation that they may own, or have considerable investment in. Persuading executives to do this is notoriously difficult.

So does the productivity paradox exist?

The answer is: probably. But it is unlikely to be as profound as an initial reading of the figures suggests, and it is unlikely to be wholly due to badly implemented IT. The evidence still indicates that IT has not been nearly as profitable as expected, and it is technology that needs to interact with people that seems to be failing.

Landauer's thesis is that computers are failing to add anywhere near the value that we expect of them. His solution is simple and compelling: improve the usability of computer systems.