A commentary on the Focus Article “The Role of Ritual in the Evolution of Social Complexity: Five predictions and a drum roll”
The development of Seshat is tremendously good news for social scientists interested in broad patterns in human history. It has long been evident to any unbiased observer that there are historical trends, large-scale patterns in the ways societies (or complexes of societies) evolve.
In some domains, such as that of military technology, the pattern seems fairly straightforward: all other things being equal, peoples favor more efficient technologies, and when they do not there are usually fairly evident reasons. For example, there is no great mystery to how firearms became the predominant weapon of the frontline soldier. Firearms may not have won every battle, but they had greater efficacy than rival technologies overall.
Other questions, such as whether alphabetic literacy promotes democracy, are more difficult to decide. As Whitehouse, François, and Turchin note,
The process of inferring general patterns in human history has usually meant cunningly plucking out facts to fit your argument—for instance ‘cherry picking’ historical events to lend credence to your judgments about the ‘errors’ of the past and your favoured ‘prescriptions’ for the future.
Proponents of a theory make general claims, and illustrate them with one or two extended case studies. Their critics deny the generalization, and hold up one or two counterexamples. At the end of the day, there is no clear conclusion, and the success of a theory comes right back down to its prima facie plausibility. The form of empirical inquiry has been followed, but its spirit has been exorcised.
Seshat promises to change this by allowing the rigorous testing of historical hypotheses against a database that is large both synchronically and diachronically. What’s not to love?
I am excited about Seshat, and optimistic that it will teach us a great deal. I do wonder, however, about the following:
- Correlation vs. causation. Seshat will be able to generate correlational data, but these are of limited value in assessing causal claims. To be sure, if there is no correlation where a causal claim has been made, or if the putative effect precedes the putative cause, then Seshat will be able to disconfirm a causal hypothesis effectively. But in many cases it seems that cultures evolve as complexes: for instance, we might imagine that agriculture and the state tend to coevolve because a state can organize large-scale irrigation projects, and large-scale irrigation projects increase tax revenue and strengthen the state. There is causal influence here, but it runs in a loop rather than in a single direction. If the variables change together, it is doubtful that Seshat can discriminate causal from coincidental relationships, or tell us anything about the particular causal linkages (a toy illustration of this worry follows this list).
- Which variables? The database is obviously designed to include all the variables relevant to Whitehouse’s Modes theory, and so it might reasonably be expected to provide an excellent test of those hypotheses. But if the database merely substitutes cherry-picked variables and gerrymandered categories for cherry-picked examples, I don’t think we will come out much ahead. Does the database contain the variables relevant to the Ritual Frequency Hypothesis, a rival to Whitehouse’s Modes theory? Or is the Modes theory to be tested only against the null hypothesis, that there is no pattern at all? Will the database include the variables required to actually compare specific predictions, or merely to confirm the existence of some kind of pattern?
- Statistics. Psychology has recently faced something of a crisis of conscience over the discovery that some of its most famous findings are either not replicable or were supported by statistical tests too underpowered to establish their claims. Here I must confess that I am the type who prefers classical mathematics to statistics, and that I have a hard time following statistical arguments. Given how easy it is to “lie” (intentionally or unintentionally) with statistical methods, I cannot help but doubt that statistical arguments are going to resolve anything.
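To make the first of these worries concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the variable names, the growth rates, the noise); none of it comes from Seshat or from any historical dataset. It simply shows that three different causal structures linking two upward-trending variables all produce essentially the same correlation coefficient.

```python
# Toy illustration, not Seshat data: three different causal structures linking
# two upward-trending variables X and Y all yield nearly identical correlations,
# so the correlation coefficient alone cannot tell us which structure produced them.
import numpy as np

rng = np.random.default_rng(0)

def simulate(x_to_y: float, y_to_x: float, steps: int = 200):
    """Two coupled variables, each with modest intrinsic growth plus noise."""
    x, y = 1.0, 1.0
    xs, ys = [], []
    for _ in range(steps):
        x = x + 0.01 * x + y_to_x * y + rng.normal(0, 0.05)  # y's influence on x
        y = y + 0.01 * y + x_to_y * x + rng.normal(0, 0.05)  # x's influence on y
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

for label, x_to_y, y_to_x in [("X drives Y     ", 0.02, 0.0),
                              ("Y drives X     ", 0.0, 0.02),
                              ("mutual feedback", 0.01, 0.01)]:
    xs, ys = simulate(x_to_y, y_to_x)
    print(label, "correlation =", round(float(np.corrcoef(xs, ys)[0, 1]), 3))
```

In all three runs the correlation comes out close to 1; distinguishing the structures would require something beyond the correlation itself, such as timing information or the kind of model-based analysis discussed in the exchange below.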
In short, I wonder whether Seshat will not really be a giant, digital Golden Bough. Frazer, in writing the original, gathered all the examples he could and tied them together into a coherent narrative about social evolution. He clearly intended to carry his argument by the sheer weight of the examples he cited—a kind of intuitive statistics, without numbers. Ultimately, his work was rejected because he took his examples out of their contexts. But isn’t this exactly what a database is designed to do? To permit the cross-cultural comparison of variables stripped of all their particular contexts? If this was such a problem for Frazer’s undertaking, has it now ceased to be a problem? I understand that Seshat includes hundreds of variables from different societies, and that these can—potentially—be invoked to provide some degree of context. But for this, much depends on the right variables being coded, and coded in relevant ways.
It is obvious today that even the documentary value of the Golden Bough is limited by Frazer’s theoretical agenda: because he was invested in a particular narrative, he gathered information in a particular way, sifting the relevant from the irrelevant on the basis of his theory. So, too, even if Seshat includes hundreds of variables, these are still but a selection from all that might be coded. It is not obvious to me how this selection should be made: it seems to me that cultural anthropologists have emphasized the importance of ethnography precisely because it is impossible to predict in advance which cultural phenomena might be related, or how. There are, I am sure, many variables that all social scientists would agree are important—but are there any that all would agree are not?
I must emphasize that I do not know anything more about the design of Seshat than what is contained in the preceding article, and that none of the foregoing are intended as claims about it: they are merely questions, and ones I hope (and trust) will prove to be ill-founded. I am certain that all of my questions occurred long ago to Seshat’s designers. This project is indubitably worthwhile—indeed, it is the most exciting thing I have seen in a long time. I hope it lives up to its promise.
Hey Brian,
I share both your excitement and your sobriety about the Seshat project. It is going to be quite a challenge to address the issues you’ve brought up here — which I would argue is one of the most exciting features of this enterprise. These are age-old challenges for historical analysis and cultural studies in general.
One way I think we can begin to address the problem of causation versus correlation is by employing the tools of system analysis from complexity science. As mathematical models are built to capture key dynamic feedbacks (sometimes called the “governing dynamics” of a system), it will be important to add and remove different dynamic drivers, observe how the model systems behave, and compare that behavior against the causal mechanisms hypothesized to explain the observed patterns.
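A minimal sketch of what this “add and remove drivers” exercise might look like, reusing the agriculture-and-irrigation example from the commentary above. The model, its parameters, and the variable names are all hypothetical choices made for illustration; nothing here is drawn from Seshat or from any validated model.

```python
# A toy "driver ablation" run: the same two-variable model is simulated with the
# state-organized irrigation feedback switched on and off, and the trajectories
# are compared. All dynamics and parameters are invented for illustration.
import numpy as np

def run_model(irrigation_feedback: bool, steps: int = 100) -> np.ndarray:
    A, S = 1.0, 1.0                  # agricultural output, state capacity (arbitrary units)
    history = []
    for _ in range(steps):
        irrigation = 0.03 * S if irrigation_feedback else 0.0  # state-funded irrigation boost
        A = A + 0.01 * A + irrigation                           # agriculture grows, aided by the state
        S = S + 0.02 * A                                        # surplus and taxes strengthen the state
        history.append((A, S))
    return np.array(history)

with_driver = run_model(irrigation_feedback=True)
without_driver = run_model(irrigation_feedback=False)
print("final (A, S) with irrigation feedback:   ", np.round(with_driver[-1], 2))
print("final (A, S) without irrigation feedback:", np.round(without_driver[-1], 2))
```

Asking which version better reproduces the patterns seen in the data is one form the test could take: if removing a driver leaves the qualitative behavior intact, that is evidence the driver is not doing the causal work a hypothesis assigns to it.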
As an example from the first field where I studied the application of complexity tools — atmospheric physics, and cloud formation in particular — a great deal was known about the state variables of temperature, density, pressure, and humidity, and about the phase transitions between states of matter (e.g., freezing point, condensation level). So any mathematical simulation of radiative transfer, the energy exchanges as photons move through the atmosphere, would need to be consistent with the equations governing those state variables (which had been validated in laboratory experiments and are well understood). This interplay between experimental work, model-building for simulations, and testing against observational data is difficult, to be sure, but it does work.
I am confident that simulation-based research will prove helpful for “testing” various hypotheses in the theory-building part of the research process, and that “variable discovery” — realizing that different variables matter, or that the way they are parameterized affects how the models perform — will also help separate causation from correlation.
Said succinctly: we cannot truly understand a complex system until we have simulated it well enough that it behaves as the real-world system does.
I realize this is only one piece of the puzzle, but it is something that tips me toward enthusiasm while keeping a sober sense of just how difficult (yet rewarding) this project will be!
Best,
Joe
Hey Joe,
I agree that simulation is a very promising way of exploring the variables and their relationships. In fact, I would go further and say that we cannot understand what has happened historically until we understand it in the context of all the things that might have happened but didn’t. Ideally, it would be great to understand historical events as particular trajectories through an abstract space of possibilities, complete with attractors and other dynamical currents and constraints. I think that what you have suggested is very promising, though it is a kind of modeling with which many historians and social scientists are not very familiar (yet?).
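A small sketch of this “space of possibilities” picture, in the same spirit as the earlier toy models: many counterfactual runs of an arbitrary two-variable system, started from scattered initial conditions, are all drawn toward the same attractor. The dynamics are chosen purely for illustration and carry no historical content.

```python
# Toy picture of history as one trajectory among many in a space of possibilities:
# runs started from scattered initial conditions all spiral into the same attractor.
# The dynamics are arbitrary and chosen only to exhibit an attractor.
import numpy as np

rng = np.random.default_rng(1)

def endpoint(x0: float, y0: float, steps: int = 500, dt: float = 0.05):
    """Euler-integrate a damped rotation toward the fixed point (2, 2)."""
    x, y = x0, y0
    for _ in range(steps):
        dx = -0.5 * (x - 2.0) - (y - 2.0)
        dy = (x - 2.0) - 0.5 * (y - 2.0)
        x, y = x + dt * dx, y + dt * dy
    return x, y

for _ in range(8):
    x0, y0 = rng.uniform(-5.0, 5.0, size=2)
    x, y = endpoint(x0, y0)
    print(f"start ({x0:+.2f}, {y0:+.2f}) -> end ({x:+.3f}, {y:+.3f})")  # all end near (2, 2)
```

Locating the attractors of a fitted model, and seeing how far actual historical trajectories sit from them, is one way of giving the things that might have happened but didn’t a concrete shape.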
Totally with you, that would be very powerful indeed — and is becoming possible with advances in simulation-based research, large-scale data analytics, and cross-disciplinary collaborations among technical researchers and those in the humanities.
We are living in exciting times for synergistic and integrative research!