Why Simulate Belief Systems?

Complex Adaptive Systems are Bigger Than Your Brain

In her September 9th, 1990 column for Parade Magazine, Marilyn vos Savant answered a question posed by reader Craig F. Whitaker:

Suppose you’re on a game show, and you’re given the choice of three doors. Behind one door is a car, behind the others, goats.[1] You pick a door, say #1, and the host, who knows what’s behind the doors, opens another door, say #3, which has a goat. He says to you, “Do you want to pick door #2?” Is it to your advantage to switch your choice of doors?

The prompt rephrased a statistical puzzle now called the Monty Hall Problem, first crafted by Steve Selvin (1975a; 1975b). The correct answer is, “yes, you should switch”; in doing so, the probability of winning doubles from one-third to two-thirds.[1] Yet many people, including those with both pride and Ph.D.s, wrote in to chastise her when she made this claim. Sampling one particularly embarrassing response: “As a professional mathematician, I’m very concerned with the general public’s lack of mathematical skills.” The ensuing intellectual drama catapulted the visibility of the Monty Hall Problem. It now serves as nearly the canonical introduction to Bayesian analysis.[2]
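
The base-rate point in footnote [2] is quick to verify with Bayes’ rule. A minimal sketch in Python; the numbers (a 1-in-1,000 prevalence and a 99%-sensitive, 99%-specific test) are illustrative assumptions, not figures from the column:

```python
# Posterior probability of disease given a positive test, via Bayes' rule.
# All numbers are illustrative assumptions, not from the original post.
prior = 0.001        # P(disease): a rare, 1-in-1,000 condition
sensitivity = 0.99   # P(positive | disease)
specificity = 0.99   # P(negative | no disease)

# Total probability of a positive result (true positives + false positives).
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(disease | positive) = {posterior:.3f}")  # ~0.090: probably healthy
```

With the uninformative prior, the patient is probably healthy; let the doctor’s suspicion raise the prior to even 10%, and the same arithmetic pushes the posterior past 90%.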

Lest the statistically well-trained reader write off the solution as obvious and the problem as trivial, consider: Paul Erdős (a man whose centrality to mathematics has been enshrined in the Erdős Number, the six-degrees-of-Kevin-Bacon equivalent for mathematical publications) did not believe the solution at first. He reached the correct conclusion only after his patient interlocutor demonstrated its validity through a visual Monte Carlo analysis (see Which Door Has the Cadillac: Adventures of a Real-Life Mathematician by Andrew Vazsonyi (2002), pp. 4–7). Simulation allowed the brilliant Erdős to see his mistake in a small, constrained problem.
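
In that spirit, here is a minimal Monte Carlo sketch of the game (in Python; this is a quick frequency check of my own, not Vazsonyi’s visual analysis, and the 100,000-trial count is arbitrary):

```python
import random

def play(switch: bool) -> bool:
    """Play one Monty Hall round; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host, knowing the layout, opens a goat door that isn't the pick.
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        # Take the single door that is neither the original pick nor opened.
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate = {wins / trials:.3f}")
# Staying wins about a third of the time; switching, about two-thirds.
```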

Social explananda are neither small nor well-constrained. (And you are not Paul Erdős.) Belief systems, in particular, defy comprehension. As Walter Lippmann wrote in Public Opinion (1922, p.16), “[T]he real environment is altogether too big, too complex, and too fleeting for direct acquaintance.” Although Lippmann lived in a world without high-powered computers, his conclusion echoes those of computational social scientists. In an information environment characterized by complexity, people (and the computational agents used to model them) need means of acting in a world beyond the limits of their bounded cognition. Simulation affords an instrument for interrogation.[3]


[1] The formulation assumes you prefer cars to goats. Such a preference is no longer obviously valid. See also: 100% Goats.

[2] The other typical introductory problem involves calculating the inverse probability of a disease given a positive result on a diagnostic test with high specificity. As pedagogically structured, the result is counterintuitive: after accounting for the prior probability of having the disease, the patient probably does not have it. Ironically, the example almost universally ignores the obvious conditioning inherent in the doctor ordering the test! That is, it prescribes the assumption of an uninformative prior, discarding the expertise of the doctor that compelled the test. That you have almost certainly not been tested for Ebola suggests we do not live in that world.

[3] So long as you, the simulator, do not lose sight of what becomes less obvious with chronic use: your construction is a simplified simulacrum, not the system you hope to learn about. In between the two lies a wealth of possibilities. Mistaking the map for the terrain often risks policy prescriptions that are the sociological equivalent of bloodletting. See also: Kevin Baker’s Model Metropolis.

Originally published on Medium.