Research

Too Much Information & The Death of Consensus (Slides)

Modern society is increasingly polarized, even on purely factual questions, despite greater access to information than ever before. In a model of sequential social learning, I study the impact of motivated reasoning, a belief-formation process in which agents trade off accuracy against ideological convenience, on information aggregation. I find that even Bayesian agents learn only in very highly connected networks, where neighbourhoods grow arbitrarily large asymptotically. This is because motivated agents sometimes reject information, inferable from their neighbours' actions, that refutes their desired beliefs: observing any finite neighbourhood, there is always a positive probability that every one of an agent's neighbours has discarded information in this way. Moreover, I establish that consensus, in which all agents eventually choose the same action, is possible only with relatively uninformative private signals and low levels of motivated reasoning. 
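To give a feel for the mechanism, the following is a minimal toy simulation of sequential learning with motivated reasoning. It is my own illustrative simplification, not the paper's model: the rejection rule, tie-breaking, the choice of a sliding window of recent predecessors as the "neighbourhood", and all parameter names are assumptions.

```python
import random

def simulate(n=2000, p=0.7, k=5, q=0.5, seed=0):
    """Toy sequential learning. Each agent observes a private binary
    signal (correct w.p. p) plus the actions of its k immediate
    predecessors, then acts on the majority of the evidence it keeps.
    Motivated reasoning: each observation contradicting the agent's
    desired state is rejected with probability q."""
    rng = random.Random(seed)
    state = 1                          # true state of the world
    actions = []
    correct = 0
    for _ in range(n):
        desired = rng.randrange(2)     # ideologically convenient state
        signal = state if rng.random() < p else 1 - state
        evidence = [signal] + actions[-k:]
        # inconvenient observations are each discarded w.p. q
        kept = [e for e in evidence if e == desired or rng.random() >= q]
        ones = sum(kept)
        if 2 * ones > len(kept):
            action = 1
        elif 2 * ones < len(kept):
            action = 0
        else:
            action = signal            # tie (or nothing kept): follow own signal
        actions.append(action)
        correct += (action == state)
    return correct / n                 # fraction of agents matching the state
```

Comparing runs with `q = 0` (no motivated reasoning) against `q > 0`, and small against large `k`, illustrates the abstract's theme: with bounded neighbourhoods, some agents' visible neighbours may all have discarded the inconvenient evidence, so correctness need not approach one.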

Bot Got Your Tongue? Social Learning with Timidity and Noise (Slides)

Models of social learning conventionally assume that all actions are visible, whereas in reality we can often choose whether or not to advertise our choices. In this paper I study a model of sequential social learning in which agents choose whether to let successors see their action, and only wish to do so if they are sufficiently confident in their choice. I find that when this timidity is combined with a non-zero fraction of noise agents and neighbourhoods of bounded size, it provokes a form of unravelling in which noise agents crowd out ever more of the informed agents. In the context of social media, this helps explain the disproportionate presence of bots and partisans, who crowd out regular users. Beyond this, I find that the combination of timidity and noise causes a complete breakdown of the improvement principles on which much of this literature depends, though they can be salvaged (or partially salvaged) in the presence of only one of these features.

TLDNR: Inattentive Learning on the Internet

Our ever-greater access to information has not produced a perfectly informed society of political consensus. In this article, I study the role of rational inattention in explaining this, within a model of sequential social learning. In doing so, I illustrate how to tractably model a very general class of 'social cost functions': functions giving the cost of observing any given subset of predecessors. In such a model, where learning from both these social signals and private information is costly, I find that making access to either form of information cheaper (by lowering the cost of private signals, or by making it easier to observe the actions of predecessors) can reduce the asymptotic probability with which agents correctly match the state. Finally, I use my model to study the impact of the internet on our media environment, showing how greater access to the opinions of others on social media (for example, those of influencers) can remove the incentives for news organisations to produce high-quality news in equilibrium.
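The individual-level trade-off behind costly observation can be fixed with a hypothetical toy, far simpler than the paper's social cost functions: suppose an agent sees conditionally independent signals from the predecessors it observes, each observation carries a flat cost `c`, and it follows the majority of what it observes. The function names and the linear cost are my own assumptions.

```python
from math import comb

def majority_accuracy(m, p):
    """P(majority of m iid binary signals, each correct w.p. p, is
    correct); m assumed odd so there are no ties."""
    return sum(comb(m, j) * p**j * (1 - p)**(m - j)
               for j in range(m // 2 + 1, m + 1))

def optimal_observations(p=0.6, c=0.01, max_m=51):
    """Odd number of predecessors to observe that maximises the
    accuracy of the inferred majority minus a linear observation cost."""
    return max(range(1, max_m + 1, 2),
               key=lambda m: majority_accuracy(m, p) - c * m)
```

Cheaper observation (smaller `c`) makes the agent observe weakly more predecessors in this toy; the paper's point is subtler, since in equilibrium cheaper access can nonetheless lower the asymptotic probability of matching the state, which this individual-level sketch does not capture.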

Sleeping Beauty Behind the Wheel (Work in Progress)

Confronted with multiple action-optimal probabilities in the absentminded driver paradox, how should an agent act? Whereas the standard answer assumes that the different agent-parts can magically coordinate, in this paper I investigate the behaviour of ambiguity-averse agents of various forms who consider all such probabilities plausible. Considering one procedure for a single absentminded agent, and another for multiple agents with indexical uncertainty, I then study two games with many action-optimal equilibria: Sleeping Beauty Behind the Wheel and Sleeping Beauty Goes Viral, which illustrate the application of these solution concepts to learning problems with indexical uncertainty. Finally, I consider the differences in beliefs implied by the halfer and thirder solutions to the Sleeping Beauty Problem.

Sleeping Beauty Drives to School (Work in Progress)

In a model of sequential social learning with finitely many agents, whenever the visibility of actions depends upon the state of the world, agents should update their beliefs simply upon learning how many agents have acted before them. In a sequential model where they are uncertain about the total population size, and receive only limited information about how many predecessors have arrived and acted before them, I study how agents should form beliefs in response to such indexical information, and how this relates to the Sleeping Beauty Paradox. With endogenous visibility, we also encounter the absentminded driver paradox, as considered in Sleeping Beauty Behind the Wheel (see above).

Network Formation, Belief Consonance & Latte-drinking Liberals with Evan Sadler (Work in Progress)

That ideological beliefs and lifestyle choices go hand in hand is well established empirically, but lacks a theoretical explanation. Considering agents with a preference for belief and lifestyle consonance, we investigate under what circumstances ideological polarization produces a belief-lifestyle affinity in stochastically stable networks.