
H. PEYTON YOUNG
Scott and Barbara Black Professor of Economics
Johns Hopkins University, USA
Professor of Economics
University of Oxford, UK
Senior Fellow
The Brookings Institution, USA

Excerpt from H. Peyton Young's
interview: Questions 4 & 5. What do you consider the most
neglected topics and/or contributions in late 20th-century
game theory? What are the most important open problems in
game theory, and what are the prospects for progress?
I will
address these last two questions in tandem. As I
mentioned earlier, cooperative game theory is an unjustly
neglected topic of research. This was not always the
case: von Neumann and Morgenstern put a great deal of
emphasis on the cooperative form, and many of the pioneers
in game theory made major contributions to the topic
(Shapley, 1953; Aumann and Maschler, 1964; Schmeidler, 1969;
Aumann and Shapley, 1974). In recent decades, however, the
noncooperative approach has increasingly gained the upper
hand. Indeed, this trend has gone so far that many
textbooks on game theory scarcely give cooperative theory a
mention. One reason for this development, as I have
already suggested, is that the topics in economics where
game theory made its earliest inroads – mechanism design
and industrial organization – seem particularly well-suited
to the noncooperative approach.
Another
reason why cooperative game theory has languished is that
its practical applications have not been widely recognized.
Earlier I mentioned the problem of sharing costs among the
beneficiaries of a public facility. Similar problems arise
in setting rates for public utilities (Zajac, 1978). More
generally, cooperative game theory is relevant to any
situation where scarce resources are to be allocated fairly
among a group of claimants. How, for example, should slots
at busy airports be allocated among airlines? Which
transplant patient should be first in line for the next
kidney? How should political representation in a national
legislature be fairly divided among parties and geographical
regions? Some economists insist that such problems would
be solved if they were simply left to the workings of the
market. Unfortunately, this overlooks the point that
markets are moot unless property rights have been defined
and vested in individuals, which is precisely what methods
of fair allocation are about.
In my book,
Equity: In Theory and Practice (1994), I examined
various fairness concepts from both a foundational and
practical standpoint. Cooperative solution concepts like the
core and the Shapley value, as well as semicooperative
notions like the Nash bargaining solution and the
Kalai-Smorodinsky solution, provide the entry point for
thinking about the meaning of allocative fairness. A close
examination of practice, however, suggests that one must go
substantially beyond these approaches to formulate a theory
that has descriptive validity.
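To make one of these cooperative solution concepts concrete, the sketch below computes the Shapley value as each player's average marginal contribution over all orders in which players might join a coalition. The three-player "glove" game used here is a standard textbook example chosen purely for illustration, not one discussed in the interview.

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value: each player's marginal contribution to the coalition
    of those who precede them, averaged over all join orders."""
    value = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: value[p] / len(orders) for p in players}

# Illustrative 'glove' game: player 1 holds a left glove, players 2 and 3
# each hold a right glove; a coalition is worth 1 if it can form a pair.
def v(S):
    return 1.0 if 1 in S and (2 in S or 3 in S) else 0.0

print(shapley_value([1, 2, 3], v))  # player 1 gets 2/3, players 2 and 3 get 1/6 each
```

The value rewards player 1's scarcer resource, which is the kind of judgment about contributions that the axiomatic approach formalizes.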
Three
central points emerge from the analysis. First, fairness
must be judged in the context of the problem at hand.
Criteria for allotting transplant organs may be quite
different from criteria that pertain to the allocation of
legislative seats, and neither may be relevant to the
allocation of offices in the workplace or dormitory rooms at
college. In other words, notions of justice tend to be
compartmentalized and context-specific, a view that has its
roots in Aristotelian philosophy, and has been advanced by
political philosophers such as Walzer (1983) and Elster
(1992).
A second key
point is that, in practice, solutions to fairness problems
tend to be decentralized in the following sense: an
allocation is deemed to be fair for a group of claimants
only when every subgroup judges that the resources allotted
to it are fairly divided among its members. This subgroup consistency
principle is very ancient. It is implicit, for example,
in certain Talmudic doctrines concerning the division of
inheritances (Aumann and Maschler, 1985). It also features
in many modern solution concepts, such as the core, the
nucleolus, and the Nash bargaining solution (Sobolev, 1975;
Lensberg, 1988), and in real-world allocation methods such
as rules for apportioning seats in legislatures (Balinski
and Young, 1982; Young, 1994).
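One concrete instance of a consistent real-world allocation rule is a divisor method of apportionment of the kind Balinski and Young analyze. The sketch below implements Webster's method (Sainte-Laguë) in its sequential priority-quotient form; the party names and vote totals are hypothetical.

```python
def webster(votes, seats):
    """Webster/Sainte-Laguë apportionment: award seats one at a time to the
    party with the highest priority quotient votes / (2*s + 1), where s is
    the number of seats that party has received so far."""
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (2 * alloc[p] + 1))
        alloc[winner] += 1
    return alloc

# hypothetical election: 100,000 votes, 10 seats to apportion
print(webster({"A": 53000, "B": 24000, "C": 23000}, 10))  # A gets 6 seats, B and C get 2 each
```

Divisor methods of this kind satisfy subgroup consistency: restricted to any subset of the parties and the seats they jointly won, the method reproduces the same division.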
The
cooperative game approach to fair division proceeds from an
axiomatic standpoint. There is, however, another way of
thinking about fairness norms that builds on
noncooperative game theory. Norms of fair division –
indeed norms in general – are often the unpremeditated
outcome of historical chance and precedent. What is fair
in one society may not be deemed fair in another, because
people’s expectations are conditioned by precedent, and
precedents accumulate through the vagaries of history.
Such
processes can be modeled noncooperatively using the
framework of evolutionary game theory. As I mentioned
earlier, this approach was originally inspired by biological
applications, and typically has three key features: i) there
is a large population of interacting players; ii) the
players have heterogeneous characteristics, including
different payoffs, information, and behavioral repertoires;
iii) they adapt their behavior based on local conditions and
experience, and are purposeful but not always perfectly
rational. The focus is on the dynamics of such a
process, not merely on its equilibrium states. One of the
main contributions of the theory is to show that some
equilibria have a much higher probability of arising than do
others (Foster and Young, 1990; Kandori, Mailath, and Rob,
1993; Young, 1993a). It therefore delivers a theory of
equilibrium selection that is based on evolutionary
principles rather than on a priori principles of
‘reasonableness’, as in the earlier theory developed by
Harsanyi and Selten (1988).
To
illustrate how the evolutionary approach can be applied to
the study of fairness norms, consider the classical problem
of how two individuals would divide a pie. The simplest
noncooperative formulation is due to John Nash (1950): each
player names a fraction of the pie, and they get their
demands provided that both can be satisfied; otherwise they
get nothing. Any pair of demands that sums to unity
constitutes a noncooperative equilibrium of the one-shot
game. If the players are allowed to bargain over time, much
tighter predictions are possible. In the standard model,
players alternate in making demands, which are either
accepted or rejected (Stahl, 1972; Rubinstein, 1982). When
the players are perfectly rational and discount future
payoffs at the same rate, the outcome of the unique subgame
perfect equilibrium is the Nash bargaining solution.
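The one-shot demand game can be written down in a few lines. The sketch below uses an integer pie of 100 units (an illustrative discretization, not part of Nash's formulation) and checks that every pair of demands exhausting the pie is a pair of mutual best replies.

```python
def payoffs(x, y, pie=100):
    """Nash demand game: each player's demand is met iff the two demands
    are jointly feasible; otherwise both players get nothing."""
    return (x, y) if x + y <= pie else (0, 0)

def is_equilibrium(x, y, pie=100):
    """Neither player can gain by unilaterally changing their demand."""
    u1, u2 = payoffs(x, y, pie)
    return (all(payoffs(d, y, pie)[0] <= u1 for d in range(pie + 1)) and
            all(payoffs(x, d, pie)[1] <= u2 for d in range(pie + 1)))

# every division that exhausts the pie is a one-shot equilibrium...
assert all(is_equilibrium(x, 100 - x) for x in range(101))
# ...while demands that waste part of the pie are not (player 1 could ask for more)
assert not is_equilibrium(40, 50)
```

This multiplicity of equilibria is exactly why the one-shot game makes no tight prediction, and why further structure is needed to select among them.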
Neither the
one-shot demand game nor the alternating offers game is
evolutionary in spirit, because they are concerned with what
two particular bargainers would do in
equilibrium, not what a population of bargainers
would do. To recast the problem in an evolutionary
framework, consider a large population of agents who engage
in pairwise bargains from time to time. Suppose that the
outcomes of previous bargains affect how people bargain in
the future, due to the salience of precedent. Once a
particular way of dividing the pie becomes entrenched due to
custom, people start to think that this is the only fair and
proper way to divide the pie, and it therefore continues in
force.
To allow for
asymmetric interactions, suppose that there are two distinct
populations of potential bargainers who are randomly matched
each period (e.g., employers and employees). Each matched
pair plays the Nash demand game described earlier. Assume
for simplicity that all agents in a given population have
the same utility function, but that the utility functions
differ between populations. To capture the idea that current
expectations are shaped by precedent, suppose that each
current player looks at a random sample of earlier demands
by the opposing side, and chooses a trembled best reply
given the sample frequency distribution. (The ‘tremble’
captures the idea that the process is jostled by small
unobserved utility shocks, so that players usually choose a
best reply but not always.) It can be shown that, starting
from arbitrary initial conditions, players’ expectations
eventually coalesce around a specific division of the pie,
and this endogenously generated norm of division is, with
high probability, the Nash bargaining solution.
Furthermore, when players are heterogeneous with respect to
their degree of risk aversion, a natural generalization of
the Nash bargaining solution results (Young, 1993b).
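The adaptive process just described can be sketched in simulation. The following is a stylized version, not the exact model of Young (1993b): both populations have identical linear utilities over pie shares, and the pie size, memory length, sample size, and tremble rate are arbitrary illustrative choices.

```python
import random

def simulate(pie=10, memory=10, sample=5, tremble=0.1, periods=5000, seed=1):
    """Adaptive play in the Nash demand game: each side best-replies to a
    random sample of the other side's recent demands, occasionally
    'trembling' to a random demand instead."""
    rng = random.Random(seed)
    demands = range(pie + 1)
    # recent history of demands made by each population
    hist = [[rng.choice(demands) for _ in range(memory)] for _ in range(2)]
    for _ in range(periods):
        for side in (0, 1):
            if rng.random() < tremble:
                d = rng.choice(demands)
            else:
                obs = rng.sample(hist[1 - side], sample)
                # expected (linear-utility) payoff of each demand against the sample
                d = max(demands, key=lambda d: sum(d for o in obs if d + o <= pie))
            hist[side].pop(0)
            hist[side].append(d)
    return hist

hist = simulate()
```

Over long runs the demands on each side spend most of their time clustered near the even split, the Nash bargaining solution for this symmetric case; with heterogeneous degrees of risk aversion the process instead selects an asymmetric generalization of that solution.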
This example
shows that there is no need to make extreme assumptions
about players’ rationality in order for game theory to yield
interesting results. Unlike the alternating offers model,
where perfect rationality and common knowledge of perfect
rationality are assumed, neither is needed in the
evolutionary model. Players choose myopic best replies based
on fragmentary information, they occasionally make mistakes,
and they have no a priori knowledge of their
opponents’ payoffs, behaviors, or degree of rationality.
Nevertheless the two models yield essentially the same
outcome.
More
generally, the evolutionary model of bargaining illustrates
how game theory can be used to study the emergence of norms.
Over time, interactions among people build up a stock of
precedents that may cause their expectations to gravitate
toward a particular equilibrium, which then becomes
entrenched as a social norm: everyone adheres to it because
everyone expects everyone else to adhere to it. When the
underlying game is concerned with the division of scarce
resources, the resulting equilibrium can be interpreted as a
fairness norm (Hume, 1739; Binmore, 1994; Young,
1998).
I conclude
by hazarding several predictions about the future
development of game theory. The first is that rationality,
and arguments over how rational the players “really” are,
will fade in importance. As I have already argued, game
theory can be applied to systems of interacting agents
whether or not they are rational in the conventional sense.
This insight was initially provided by applications of game
theory to biology, and is being buttressed by current
applications to computer science, artificial intelligence,
and distributed learning.
My second
prediction is that game theory will continue to evolve in
response to real problems that arise in economics, politics,
computing, philosophy, biology and other subjects, a
development that von Neumann and Morgenstern would surely
have welcomed. While its major successes to date have
largely been in economics, game theory is not a
subdiscipline of economics; it is more like statistics, a
subject in its own right with applications across the
academic spectrum.
My third
prediction is more of an admonition: game theory will
continue to thrive if it remains receptive to new ideas
suggested by applications, but risks degenerating if it
does not. John von Neumann cautioned about this tendency in
mathematics more generally, and game theorists would do well
to heed his warning (von Neumann, 1956):
“I
think that it is a relatively good approximation to
truth – which is much too complicated to allow anything
but approximations – that mathematical ideas originate
in empirics… As a mathematical discipline travels far
from its empirical source, or still more, if it is a
second and third generation only indirectly inspired by
ideas coming from “reality,” it is beset with very grave
dangers. It becomes more and more purely
aestheticizing, more and more purely l’art pour l’art.
… [W]henever this stage is reached, the only remedy
seems to me to be the rejuvenating return to the source:
the reinjection of more or less directly empirical
ideas. I am convinced that this was a necessary
condition to conserve the freshness and the vitality of
the subject and that this will remain equally true in
the future.”
Read
the remaining part of Peyton Young's interview in Game Theory: 5
Questions, edited by Vincent F. Hendricks and Pelle
Guldborg Hansen. The book was released in April 2007 by
Automatic Press / VIP.
ISBN 8799101343 (paperback), 248 pages / $26 / £16.
