Coordination, Networks, and Technological Standards

Coordination games are a stylized model for, well, coordination problems. Think of computer operating systems: Windows, Mac, or Linux? Technological choice in general is a good example. The past decades have seen quite a number of “format wars” in the multimedia sector, for instance when the Sony-supported Blu-ray was able to wipe out the Toshiba-backed alternative HD DVD format, each of them with back doors into people’s homes through gaming platforms (PS3 vs. Xbox 360). Older generations (damn, I am getting old) will remember the battle between VHS and Betamax (remember the third format, Video 2000? You do? OK, you are officially old now!). Establishing a standard, in any field, always leads to a coordination game, from deciding which side of the street to drive on or adopting the metric system to establishing a particular container size for transport or adopting a management practice. If everybody adopts the same standard, nobody will want to deviate, hence we are at a Nash equilibrium, as explained in basic Game Theory. But every single standard is a Nash equilibrium, so which one should we coordinate on?
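To fix ideas, here is a minimal sketch with made-up payoff numbers, just to show what “every standard is an equilibrium” means: both the all-A and all-B profiles survive the unilateral-deviation check, so the game itself does not tell us which one to pick.

```python
# A minimal sketch of a 2x2 coordination game with made-up payoffs.
# Strategies "A" and "B" stand for two competing standards.
payoffs = {
    ("A", "A"): (2, 2),   # everybody on A: fine for both
    ("B", "B"): (1, 1),   # everybody on B: also stable, just worse
    ("A", "B"): (0, 0),   # miscoordination: nobody is happy
    ("B", "A"): (0, 0),
}

def is_nash(s1, s2):
    """Neither player should gain by deviating unilaterally."""
    u1, u2 = payoffs[(s1, s2)]
    best1 = all(u1 >= payoffs[(d, s2)][0] for d in ("A", "B"))
    best2 = all(u2 >= payoffs[(s1, d)][1] for d in ("A", "B"))
    return best1 and best2

for profile in [("A", "A"), ("B", "B"), ("A", "B")]:
    print(profile, "is a Nash equilibrium:", is_nash(*profile))
# Both ("A", "A") and ("B", "B") pass the test; miscoordination does not.
```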

You might naively think that efficiency should be a criterion. Just choose the equilibrium with larger payoffs, that is, the most efficient technology. Sure. Let’s all switch to Linux, now. Hm. It’s taking some time for that to happen. Why? Imagine you have people using two different standards, A and B. A is much better, if everybody coordinates. The problem is the payoffs when there are still people using both. If you switch to A, but there are still a lot of people using B, you are going to pay some costs. You switch to Linux, but you keep receiving those awful .docx files and, although you can open and edit them (Linux is, after all, more efficient), you are never sure whether the guy you are sending the edited file to is going to be able to read it. Worse, if he can’t, he will just tell you to provide a different file he can read, as if the problem were your system and not his. But don’t get me started there.

Disclaimer: Yes, I am a rather vocal Linux supporter. The examples in this post might be colored by my (strictly personal) opinions on this matter. But I am also a state-employed academic, and I personally cannot justify the questionable waste of taxpayer money on commercial suites that do exactly the same things that open-source software can do.

Anyway. Imagine a technology or a standard B such that, facing a mixed population, say half of the people using A and half using B, it gives you better payoffs than A would. Maybe because of the lousy negative externalities that B imposes on A. In Game Theory, such an option (B) is called risk dominant. Alas, it is perfectly possible that a coordination game has a Pareto-efficient equilibrium (meaning: better than every other equilibrium for everybody involved) and also a different, risk-dominant equilibrium.
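To make the distinction concrete, here is a small numerical sketch (the payoff numbers are invented, purely for illustration): coordinating on A is Pareto efficient, but B earns more against a 50/50 mix of users, so B is risk dominant.

```python
# Hypothetical payoffs to the row player in a symmetric 2x2 coordination game.
# u[x][y] is the payoff of playing x against an opponent playing y.
u = {
    "A": {"A": 4, "B": 0},   # A is great if matched, painful if mismatched
    "B": {"A": 3, "B": 2},   # B is robust either way
}

# Both (A, A) and (B, B) are strict equilibria:
assert u["A"]["A"] > u["B"]["A"] and u["B"]["B"] > u["A"]["B"]

# Pareto efficiency: everybody on A beats everybody on B.
print("Efficient equilibrium:", "A" if u["A"]["A"] > u["B"]["B"] else "B")

# Risk dominance: expected payoff against a 50/50 mix of A- and B-users.
expected = {x: 0.5 * u[x]["A"] + 0.5 * u[x]["B"] for x in ("A", "B")}
print("Risk-dominant equilibrium:", max(expected, key=expected.get), expected)
# With these numbers, A is efficient but B is risk dominant.
```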

Which one will predominate? Since both are (strict) equilibria, classical Game Theory provides no answer. There is a branch of modern Game Theory which examines learning models, where people play, see what goes on, and adjust their choices, so that society evolves over time. As in real life. It turns out that if people learn by, say, imitating past observed performance (do whatever seems to have been better), there is a trend towards risk dominance, not Pareto efficiency. Which is not exactly good news, but the underlying assumption that everybody interacts with everybody else might not be entirely realistic.
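As a toy illustration (not any of the formal models from that literature), consider a well-mixed population in which everybody simply imitates whichever strategy earned more in the last round. With the invented payoffs from above, the risk-dominant standard B has the larger basin of attraction: A only takes over if it already starts with more than two thirds of the population.

```python
import random

# Toy imitation dynamics in a well-mixed population. Everybody "plays the
# field": the payoff of a strategy is its average payoff against the current
# population mix. Then everybody imitates whichever strategy did better.
u = {"A": {"A": 4, "B": 0}, "B": {"A": 3, "B": 2}}  # same made-up payoffs as above

def run(share_A, rounds=50):
    for _ in range(rounds):
        payoff_A = share_A * u["A"]["A"] + (1 - share_A) * u["A"]["B"]
        payoff_B = share_A * u["B"]["A"] + (1 - share_A) * u["B"]["B"]
        share_A = 1.0 if payoff_A > payoff_B else 0.0  # everyone copies the winner
    return "A" if share_A > 0.5 else "B"

random.seed(1)
wins = [run(random.random()) for _ in range(1000)]
print("A prevails in", wins.count("A"), "runs; B prevails in", wins.count("B"))
# B's basin of attraction is larger: A only survives if its initial share
# already exceeds 2/3. That is the risk-dominance logic at work.
```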

Enter networks. It is more realistic to assume that you do not interact with the whole planet, but rather with a limited set of colleagues, friends, or family members, in a way that can be captured by a social network. There are models studying precisely that, which we could call learning in games played on networks. That is a topic I have been working on for a while.

Let me briefly talk about just two articles, an older one and a more recent one. Both are joint work with my former Ph.D. student Simon Weidenholzer, now at the University of Essex. In Alós-Ferrer and Weidenholzer (2008), published in the Journal of Economic Theory, we studied learning models where people look at past performance and the network can have any shape. It is important to look at general networks, because results for specific ones (say, a circle or a checkerboard), although illustrative, might not generalize, and real-life networks can be pretty wild. As for the game, we looked at the quintessential case where an efficient equilibrium and a different, risk-dominant one coexist. The twist of the paper was that we assumed that you are able to see some things beyond your interaction neighborhood. That is, you have certain trading partners, but you learn about best practices from other firms you do not directly interact with. You have a limited set of coworkers, but you receive information through a larger social network. It turns out that, if that happens, efficient equilibria are selected if, and this is a big if, the network is “large enough.” Hold that thought, I will get back to what that means below.
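Just to illustrate the distinction driving that result (this is not the model in the paper, only a toy picture): think of players sitting on a ring who play the game with their immediate neighbors, but who can observe the payoffs of players a few steps further away before deciding whom to imitate.

```python
# Toy illustration of the distinction between interaction and information
# neighborhoods (not the actual model in the paper): on a ring of n players,
# you *play* against your immediate neighbors, but you *observe* payoffs
# within a wider radius before imitating.
n = 12

def ring_neighbors(i, radius):
    return [(i + d) % n for d in range(-radius, radius + 1) if d != 0]

interaction = ring_neighbors(0, radius=1)   # who player 0 actually plays with
information = ring_neighbors(0, radius=3)   # whose payoffs player 0 can observe
print("plays with:", interaction)
print("observes:  ", information)
# Imitation then means: copy the strategy of the best performer in the
# information neighborhood, even if you never play against that person.
```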

In our more recent work Alós-Ferrer and Weidenholzer (2014), published in Games and Economic Behavior, we looked at minimum-effort games. Those are a class of (large) coordination games used as a stylized model for productivity: your reward is determined by the minimum effort level in your neighborhood, but the higher your individual effort, the higher your costs. The efficient equilibrium is the one where everybody gives their best, but a single, shirking, lazy slob suffices to spoil it for everybody (himself included). Minimum-effort games also go by other names, such as weakest-link games, and they are a good starting point to understand social effort conventions in firms or even countries. The results of the article are as encouraging as in our previous work. If the network is large enough, and provided you do receive information from beyond your interaction neighborhood, the maximum effort will prevail in the long run. However, if the network is not large enough, the minimum effort (the shirking equilibrium) will prevail. Also, in our new paper we were able to substantially generalize the setting in a number of dimensions which I will not discuss here, such as the kinds of behavioural rules allowed for, the kinds of information sampling, etc.
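For concreteness, here is a common textbook-style parameterization of minimum-effort payoffs (not necessarily the exact one used in the paper): the benefit is driven by the lowest effort in your neighborhood, the cost only by your own effort.

```python
# A textbook-style minimum-effort payoff (illustrative parameters only):
# the benefit depends on the lowest effort around you, the cost on your own.
def payoff(own_effort, neighborhood_efforts, benefit=2.0, cost=1.0):
    # neighborhood_efforts includes your own effort level
    return benefit * min(neighborhood_efforts) - cost * own_effort

# Everybody at maximum effort: the efficient outcome.
print(payoff(7, [7, 7, 7]))   # 2*7 - 7 = 7
# One lazy slob in the neighborhood drags everyone down, himself included.
print(payoff(7, [7, 1, 7]))   # 2*1 - 7 = -5 for the hard worker
print(payoff(1, [7, 1, 7]))   # 2*1 - 1 =  1 for the slob
```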

OK so far. Those models are of course stylized, but the messages are relatively clear-cut. Provided the network is large enough, we are going to see good things. Efficient technologies will get established, inefficient ones will be dropped along the way, people will do their best, and shirking in teams will go away.

Good. Now bear with my strictly personal opinion that Linux is more efficient than Windows (it’s certainly cheaper). So, how come some people still use Windows?

Well, the meaning of “large network” is not as simple as you might think. Suppose you compare a network with a million people and another network with just a thousand people. Which one is larger? No, not so quick. It depends. Suppose in the million-people network, everybody is connected to a central guy, or a close-knit group of people. Everybody. Well, then going from one extreme of the network to another is pretty quick. Periphery to center, then center to periphery. Two steps. That is actually a pretty small network. Now suppose that in the thousand-people network everybody has just a couple of neighbors. Maybe everybody is sitting in a circle around a lake, with a neighbor on the right and one on the left. Well, it is going to take a long time for a piece of information arising somewhere in the network to propagate through the entire network. That is a large network! Technically, in our articles a large network is one into which you can squeeze a large number of disjoint neighborhoods. You get the idea: if there are enough people, and everybody interacts with a limited number of them, the network is large. If everybody interacts with a large chunk of the network anyway, the network is small even if there are many people in it.
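A quick back-of-the-envelope comparison makes the point; the numbers are the ones from the story above, and the count of disjoint neighborhoods is just the obvious packing bound for a ring.

```python
# Back-of-the-envelope comparison, no graph library needed: what matters is
# not how many people there are, but how far information has to travel and
# how many disjoint neighborhoods you can pack into the network.

# A "star": a million people, all connected to one central hub.
star_size = 1_000_000
star_diameter = 2                 # periphery -> hub -> periphery
# Every neighborhood contains the hub, so disjoint neighborhoods are scarce.

# A "ring around the lake": a thousand people, each with two neighbors.
ring_size = 1_000
ring_diameter = ring_size // 2    # worst case: halfway around the circle
ring_disjoint_neighborhoods = ring_size // 3   # a node plus its two neighbors

print("star:", star_size, "people, diameter", star_diameter)
print("ring:", ring_size, "people, diameter", ring_diameter,
      "and room for ~", ring_disjoint_neighborhoods, "disjoint neighborhoods")
# In the sense that matters here, the thousand-person ring is the "large" network.
```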

What helps efficiency? A message from our work is that efficient technologies and better conventions will be more easily established if the network has some small neighborhoods where the high payoffs of those equilibria can be realized. Things like technology parks, dedicated user groups, etc. This is a kind of demonstration effect. Other people will be more likely to imitate those good things if there is some part of the network where their benefits can be clearly seen. For that to work, however, those neighborhoods need to be visible but relatively isolated from the negative externalities caused by other, inferior conventions.

Back to Linux. The relevant network is the one defined by whom you exchange files with. Well, this is a large network, right? There are millions and millions of computer users in any given broadly defined field. So the important thing is whether the neighborhoods are large or small, and as we can see…

Damn it.

That’s right. Neighborhoods are large, and the network is small. Why? The internet.

Now this is highly speculative, but if the advent of the internet had been less sudden, maybe we would have seen a little bit more efficiency in our operating systems. The way it happened, interactions became almost global pretty quickly, and the network, for all practical purposes, became too small too fast. And we got stuck with a few inefficient technologies. Oh, my.

Is there a way out of this mess?

One thing that might help is to keep in mind the demonstration effect. If you are a decision maker with the power to implement a change in a small group, by all means do it, and talk about it. Show the world that you are perfectly able to function while paying a grand total of zero for your operating system and software.

Another thing that might help is a bit nastier. According to the analysis, some inefficient technologies might be surviving because of risk-dominance. And that essentially means that they are better off than efficient ones when facing a mixed population. But why is that so?

A part of it might be social conventions. There was this classic Dilbert joke where somebody asked Dilbert for help installing a home Wi-Fi network, and he answered “Under what theory are the competent obliged to help the incompetent?” (the strip is on page 70 of the book “Try Rebooting Yourself”). OK, OK, that was nasty now. Just a joke. But suppose I send, say, an .odt file to somebody who can only read (a particular version of) .docx. The poor guy. He could of course install a couple of free, platform-independent software suites which would allow him to read my stuff. Instead, he sticks to his expensive suite and just complains back that he cannot open it, without hesitating for a second. So, since my platform can readily edit or generate a .docx or an .rtf on the fly, I give it a dozen tries until I hit a format and a variant that he can read. And since, modestly speaking, I am not totally incompetent with computers, I will find a solution. Maybe even get some utility from the fact that I can do it. Thereby making my life harder, and his easier (incidentally: “those are five minutes of my life that I am not getting back…”). And there, precisely there, might be part of the essence of risk-dominance. By being helpful, we ensure that users of inefficient technologies will forever impose their negative externalities on us.

The solution? Destroy risk-dominance.

Of course you should be nice, polite, and helpful. But maybe, just maybe, next time somebody asks you for a format he can read, you should put the burden on him. Go silent and declare incompetence. Sorry, no can do, don’t know how, you don’t understand his platform at all. Smile. And you can still be helpful: send him a link to the free software which allows him to read your open-source-based stuff. And yeah, when you look in the mirror, you can grin.

NOTE: This post first appeared in 2014 in my university blog. Since I have taken down that one (one person, one blog seems more than enough to me), I am re-posting here slightly updated versions of some of the posts which used to be there.
