[In this second column based on my reading of management at Stanford, I noted how problems of the real world intruded on economics, and how trying to solve them led to new developments in the subject. This column was published in Business Standard of 25 June 2001.]
ECONOMICS IN MANAGEMENT
As I said last
week, what one needs to manage well is a thorough understanding of the business
one is in. However, this understanding is not a once-and-for-all thing. Social
sciences have no laboratory to test their ideas; social scientists imagine
social mechanisms, and revise them only when they become egregiously
inconsistent with reality. Management is a continuous, day-to-day activity; so
one would expect that it provides frequent opportunities for the testing and
revision of ideas. Has it, then, led to repeated revolutions in the social
sciences that cater to it?
Perhaps the best
test of whether this has been so is the economic model of business: that the
goal of business is profit maximization. Application of elementary calculus to
this single proposition leads to clear guidelines – which may be called
marginalism. Most of the chairs of political economy in universities were established in the second half of the 19th century, when marginalism became a staple of economics teaching. But at that time, the economic view of the environment that faced business was exceedingly simple.
Businesses were either monopolies or competitive firms; there was nothing in
between.
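For readers who want the calculus spelt out, here is a minimal sketch (the notation is mine, not the column's): write profit as revenue minus cost and set the derivative to zero.

```latex
% Marginalism in one line: choose output q to maximize profit pi(q),
% where R(q) is revenue and C(q) is cost (illustrative notation).
\[
  \pi(q) = R(q) - C(q), \qquad
  \pi'(q) = R'(q) - C'(q) = 0
  \;\Longrightarrow\; R'(q) = C'(q),
\]
% i.e. produce up to the point where marginal revenue equals marginal
% cost (with pi''(q) < 0 to ensure a maximum). This is the guideline
% the column calls marginalism.
```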
Then, in the
1930s, competitors in trouble began to collude. The response of economics was remarkably quick; Chamberlin in America, Joan Robinson in Britain, and Zeuthen in Denmark all wrote immediately influential books on hybrid market situations
between monopoly and competition. This was also the time when profit
maximization acquired a bad name, and with it, economics. It came to be known
as the dismal science; it became fashionable to nail the flags of other social sciences to the mast of opposition to economics.
Actually, it was
not economics that was dismal; it was the world economy, and economics only
reflected the reality. What was wrong with economics did not lie in its
psychological orientation, but in its disconnexion with reality. No businessman
practised marginal cost pricing; most of them would have thought it barmy. It
is perfectly true that fixed costs do not have to be covered in the short run,
and that any price above marginal cost adds to the profit. But in the long run,
all costs are variable, and businessmen who do not think of the long run will
not survive to see it.
However, the whole of marginal economics came to life during World War II. The warring
governments faced the need to maximize effort with constrained resources, and
problems requiring optimum allocation of resources abounded: shells had to be
directed so as to do maximum damage, supplies had to be moved to armies with
lowest transport inputs, ships and planes had to be produced in the shortest
time from a large number of inputs, doctors had to decide which casualties to
attend to first. These demands brought together economists and mathematicians, who created linear programming. They commandeered statistical techniques that had
been developed for seed and animal improvement and applied them to create
statistical quality control. In this way, the War converted marginal economics
into the immensely useful discipline of programming. It led to inventory
planning and the formulation of rules regarding the economic order quantity, to
the critical path method of scheduling, to waiting line models, to location
models, to the planning of factory layouts, and many other useful applications.
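One of these rules can serve as a concrete illustration. What follows is a minimal sketch of the classical economic order quantity (the Wilson formula); the demand and cost figures are invented purely for illustration.

```python
from math import sqrt

def economic_order_quantity(annual_demand: float,
                            ordering_cost: float,
                            holding_cost: float) -> float:
    """Classical EOQ (Wilson) formula: the order size minimizing total
    annual ordering-plus-holding cost, assuming steady demand.

    Total cost TC(Q) = (D/Q)*S + (Q/2)*H; setting dTC/dQ = 0 gives
    Q* = sqrt(2*D*S/H) -- marginal economics applied to inventory.
    """
    return sqrt(2 * annual_demand * ordering_cost / holding_cost)

# Invented figures: 12,000 units a year, Rs 500 to place an order,
# Rs 20 to hold one unit for a year.
print(round(economic_order_quantity(12_000, 500, 20)))  # about 775 units
```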
Although there
has been no such spectacular business application of microeconomics since the
War, its ideas have been applied, adapted and confronted in a variety of influential
developments. They were brought together in the monumental work of Holt,
Modigliani, Muth and Simon, Planning Production, Inventories, and Work Force.
By the 1950s, the intellectual development of programming had passed its peak.
Then Herbert Simon, one of its pioneers, turned to psychology. He created the
concept of satisficing, as opposed to maximizing – the idea that
decision-makers aimed at an outcome that was satisfactory rather than optimum,
and thereby economized both on the information needed and on the complexity of decision-making (a toy sketch of the contrast follows this paragraph). Over the next 20 years he created and inspired a hugely
fertile body of work on bounded rationality. Michael Porter’s applications of
the concept of competitive advantage, embodied in a series of extremely popular
publications starting with Competitive Advantage: Creating and Sustaining
Superior Performance, have their basis in marginal economics.
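Simon's contrast can be made concrete with a toy sketch (the payoffs, their order and the aspiration level are all invented): the maximizer must inspect every option, while the satisficer stops at the first one that clears its aspiration level, economizing on search.

```python
from typing import Iterable, Optional

def maximize(options: Iterable[float]) -> float:
    """Inspect every option and pick the best: full information needed."""
    return max(options)

def satisfice(options: Iterable[float], aspiration: float) -> Optional[float]:
    """Accept the first option that meets the aspiration level.

    Examines only as many options as necessary; it may miss the optimum,
    but it economizes on information and on decision-making effort.
    """
    for value in options:
        if value >= aspiration:
            return value
    return None  # no option was good enough

offers = [62, 70, 85, 90, 78]  # invented payoffs
print(maximize(offers))        # 90: had to see all five offers
print(satisfice(offers, 80))   # 85: stopped after the third offer
```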
Despite people’s
aversion to the idea, and despite the fact that businessmen do not always
follow it, the principle of profit maximization has found repeated, fertile
applications in management. Even if businessmen do not want to follow it, their
financiers – shareholders, lenders and bankers – will force them to follow it.
And if they neglect it for a while, they will have to make a painful return to it through the minefield of restructuring, reengineering and downsizing, or they will go under.
This idea no
doubt lay behind Frederick Taylor’s advocacy of what he called scientific
management. His basic idea was to divorce the issue of work from that of
payment for it, and to determine the work to be done on the basis of what the
worker could do using movements that were most economical of time and effort.
His ideas never found widespread application; Taylor has perhaps inspired
greater activity amongst management theorists than amongst managers. But with
the coming of assembly lines and automation, the idea of externally imposed
work norms had to be faced, even if it was not accepted.
Its antithesis lies in Japanese manufacturing practices, which probably arose in the 1950s and began to become known in the west in the 1980s following the extreme success of Japanese companies. In these practices, workers can stop the assembly line if they find anything wrong; a number of models are made on the same assembly line, which requires workers to read instructions on the kanban or card attached to each item and change their inputs to suit it; workers are given a
number of tasks to be performed at the same time; and groups of workers are
given considerable autonomy in organizing and improving manufacturing
practices. These practices enjoyed considerable vogue in the 1980s, and have
spread all over the west by now. There are other Japanese practices that are
less well known, and which also modify what one would recommend on a simple
economic view. The common Japanese practice of outsourcing is really a variant
of payment by results: often the vendors are so dependent on the outsourcing
company that they are little different from factory workers.
The economic
model has recently made a comeback in a curious way. The 1990s saw the rise of
the software industry; here, neither the quantity nor the quality of the work
done by a code-writer can be measured easily. So this industry adopted a
practice, hitherto confined to the remuneration of top executives, of basing
the payments to workers upon the price of the company’s shares. The device
worked well while the stock prices of IT companies were booming; now,
presumably, it is getting less popular. But it reflects the fact that a worker
is a stakeholder in a business; the business would be more stable and less
vulnerable to economic cycles if the worker shared in the risk.