In the introduction to this series of conversations, I define the Third Way of Entrepreneurship as a process of cultural evolution in which all three ingredients of an evolutionary process—selection, variation, and replication—must be managed to accomplish societal goals. The Third Way stands in contrast to the two dominant paradigms for entrepreneurship and social change of all sorts, laissez-faire and centralized planning.
Laissez-faire assumes that the pursuit of lower-level self-interest, such as individuals maximizing their utilities or corporations maximizing their short-term profits, robustly benefits the common good. I claim that this can’t work because its central premise is profoundly untrue. If this wasn’t already obvious from other perspectives, it becomes painfully obvious from a multilevel evolutionary perspective.
Centralized planning assumes that a group of experts can formulate and implement a grand plan to improve the welfare of society. I claim that this also can’t work because human social systems, which are embedded in natural systems, are too complex to be understood by anyone. Experimentation is required, which is a managed process of cultural evolution by another name.
If laissez-faire and centralized planning can’t work, then the only thing that can work is the Third Way. Positive systemic change must be the target of selection. Alternative social practices must be oriented toward the target of selection. And work is required to replicate the best practices in a way that is sensitive to the local context.
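The three managed ingredients described above can be sketched as a toy program. This is purely illustrative, not anything from the essay: the function name, the numeric "practices," and the distance-to-target fitness measure are all my own assumptions, chosen only to make the selection, variation, and replication steps concrete.

```python
import random

def third_way_search(target, practices, generations=300, noise=0.25, seed=42):
    """Toy sketch of a managed evolutionary process (illustrative only).

    - Selection: score candidate "practices" against a systemic target.
    - Variation: copy the survivors with small random changes.
    - Replication: carry the best practices into the next generation.
    """
    rng = random.Random(seed)
    population = list(practices)
    for _ in range(generations):
        # Selection: keep the half of the population closest to the target.
        population.sort(key=lambda p: abs(p - target))
        survivors = population[: len(population) // 2]
        # Variation + replication: each survivor is copied with a small change.
        offspring = [p + rng.gauss(0, noise) for p in survivors]
        population = survivors + offspring
    return min(population, key=lambda p: abs(p - target))
```

Run from a starting population far from the target, the loop converges because selection is oriented toward the systemic goal rather than toward each practice's local payoff; remove the selection step and the population merely drifts.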
As a paradigm for entrepreneurship and all other forms of positive social change, stated explicitly in terms of evolution and complexity and contrasted with the two other major paradigms, the Third Way of Entrepreneurship is new. However, if it is the only paradigm that can work, then it is the only paradigm that ever has worked. In other words, when we examine cases of positive systemic social change in the past and present, we will find that they have converged upon the Third Way, even if the words “evolution” and “complexity” were seldom used.
Exhibit A, and the subject of this conversation, is the philosophical tradition of Pragmatism, which originated among a small group of intellectuals in America during the late 19th century. The first pragmatists included Oliver Wendell Holmes, Charles Sanders Peirce, and William James, as beautifully told by Louis Menand in his book The Metaphysical Club. They, in turn, influenced social reformers such as John Dewey and the administration of Woodrow Wilson, who conceptualized the United States as a citizen of the world.
Our guide to this period of history is Trygve Throntveit, who received his Ph.D. in History at Harvard University in 2008 and is currently Dean’s Fellow for Civic Studies at the University of Minnesota College of Education and Human Development. Trygve has authored two books on the period, William James and the Quest for an Ethical Republic and Power without Victory: Woodrow Wilson and the American Internationalist Experiment. Tryg has also become conversant with the modern evolution/complexity paradigm, making him the perfect authority to relate the philosophical tradition of pragmatism to the Third Way of Entrepreneurship.
David Sloan Wilson: Welcome Tryg! Please introduce yourself to our audience in your own words.
Trygve Throntveit: Thank you, David, it’s a pleasure. I was trained as a historian with a focus on American social thought—“social thought” meaning the ways that certain people view, describe, explain, and try to reimagine the world they share with other people in order to improve it. I was particularly drawn to the pragmatists by the degree of self-awareness they brought to their social thought: they knew that they (and everyone else) were viewing, describing, explaining, and reimagining their world, not simply receiving and relating information about it. And they knew that such cognitive action has consequences for other behaviors, which in turn have their own consequences—in short, they knew that ideas have consequences, and not just for other ideas but for the material and social world. Thus, I thought it was only right that I trace the consequences of their ideas beyond the realm of discourse, to see how they gained traction among groups and within institutions, and with what results. That’s what led me to cultural, political, and diplomatic history; but it also led me to think about my own intellectual commitments, how I can test and refine them in my professional as well as personal settings, and what consequences they might have for my world and the people with whom I share it. Hence my current interest in reorganizing higher education to promote a civic ethos that, with some differences of emphasis, resembles your marriage of cultural evolution and complex systems thinking.
DSW: Excellent! The history of Pragmatism has been studied from numerous perspectives and well told to a general audience by Louis Menand. What will be distinctive about our conversation is to discuss Pragmatism explicitly from an evolutionary/complexity perspective. If we think about it as a “cultural mutation” that arose in America during the 19th century, how would you describe it? How did it differ from the background cultural milieu? And how did it grow to have such influence over a period of decades?
TT: It’s always dangerous to impute novelty to ideas, but there was something radical in the pragmatists’ account of the world—radical in terms of its origins as well as its implications. On the origins score, William James saw pragmatism as going “to the root” of certain philosophical problems that he thought impeded philosophy from playing any useful role in public life. In fact, his was very much a “Third Way” approach to philosophy—or a via media, as my friend James Kloppenberg describes it—based on what he considered the fundamental character of human consciousness as an evolved adaptive function.1 In James’s (slightly exaggerated but heuristically useful) telling in Pragmatism (1907), the field of philosophy was divided into empiricist and idealist camps. The first held that humans, like the rest of the world, were mere collections of matter, interacting with other matter according to iron natural laws that gave shape to reality but left no room for a purpose or purposes beyond themselves. The second held that humans embodied aspects of an absolute, all-encompassing consciousness, spirit, or force that determined the character and course of the universe in line with some positive, normative purpose beyond material existence. In James’s view, both camps conjured versions of a “block universe”—a finished, or at least predetermined, reality—that defied common sense. To draw analogies to your work, David, James thought the laissez-faire empiricists (no unified purpose governing reality) and the central-planning idealists (one unified purpose governing reality) both ignored the empirical evidence, supplied by every human consciousness, that reality—so far as anyone can experience it—is both “one and many,” as he put it: it presents connections and disjunctions, regularities and novelties, continuity and change. 
Human experience is a constant negotiation between ideal purpose and encompassing environment, each reacting with and upon the other and blurring the line between them. After all, humans evolved to manipulate their environments, and every human being tries to do so—yet different human beings do so in very different ways, with varying degrees and even definitions of success.
This vision of a “pluralistic universe” had radical implications for ethics and politics at a time when industrialized societies like that of the United States were already in turmoil due to rapid economic and social change. In the United States, the horror of the Civil War and the scandalous corruption, inequality, and injustice of the Gilded Age had led many Americans to question the fundamental design of their political and economic system. Among educated Americans, Darwinian theory called the design of the entire world into question—indeed, challenged the very notion of design in nature. In the midst of this intellectual and cultural crisis, James, and later John Dewey, reinterpreted the uncertainty at its center: rather than a burden, they saw it as a liberation. If the world is not, in fact, fixed in all its features; if it has a genuine history, in which events have consequences; if it is not as it should be, but not as it has to be either; then theoretically it can be changed for the better, through events managed by human beings making informed guesses about the consequences of their actions. At the same time, the plural and changeable character of reality meant that the best informed, or “truer” guesses—those most likely to serve the purposes that prompted their formulation—were the most widely informed, those that took account of the broadest range of experience (as James explained at greatest length in The Meaning of Truth, his 1909 response to critics of Pragmatism). In short, it meant that democracy, defined as collaborative, experimental work across differences, was the preferred model for all forms of inquiry and judgment: scientific, religious, moral, ethical, etc. 
For many who questioned the social and political structures of their local, national, and even global communities, pragmatism was a call for reform, not only to adapt practices and systems to contemporary conditions but to build in the capacity for continued adaptation to change through participatory, deliberative, experimental mechanisms.2
DSW: This is very helpful indeed. One point you make is that philosophical pragmatism emerged during a crisis period of American history, not so different from the crisis period that we are experiencing today. Peter Turchin has analyzed the cyclic nature of American history in his book Ages of Discord. A period of national unity during the first half of the 19th century deteriorated into the Civil War, the extreme inequality of the Gilded Age, and the societal collapse of the Great Depression. Then there was a recovery during the era associated with the New Deal, only to be followed by a second Gilded Age associated with the Reagan and Thatcher eras. Today we often blame neoliberal economic ideology for extreme inequality, but that is too short-sighted from a historical perspective. What was the counterpart of neoliberalism during the first Gilded Age? In other words, how did people justify “Greed is Good” back then? Was it a religious ideology that wealth is an indication of moral goodness? Was Adam Smith’s invisible hand metaphor invoked? Was it something else?
TT: Then as now, technology, policy, long-term global trends, and cultural and intellectual developments all interacted to contribute to the wealth disparities of the late nineteenth century and to people’s heightened awareness and resentment of them. There was a sort of market explosion in the post-Civil War era, driven in part by industrial and other technologies such as Bessemer steel, oil refining, and the expansion and sophistication of credit and financial mechanisms, which in turn led to improved transportation, increased manufacturing, speculative investment, and, in short, the growth of capitalism and consumerism. Policy also played a role: The protective tariffs of the post-Civil War decades augmented the wealth-creating advantages of those who already had resources by lowering competition and thus the risks of capital investment. An immature regulatory system (and monied opposition to its development) made the situation ripe for monopoly, oligopoly, and what many working and middle-class folks considered the cruel exploitation of labor. The dollar was fixed to a gold standard, and its limited supply tamped down inflation and kept the real value of debts from declining over time—meaning it was hard for people of small means to improve their financial standing through borrowing and investment.
DSW: OK, one point you are making is that there doesn’t need to be a “greed is good” ideology for lower-level interests to play themselves out. They simply will do so unless appropriately constrained. Moral justifications will be like frosting on a cake. Now let’s describe the frosting!
TT: Before tasting that intellectual delicacy we have to ask: How was such an economy justified by its beneficiaries to society at large, and how did the mass of Americans respond? There are many stories to be told here, but a few things seem most relevant to your questions, and to our own time. First, a steady stream of technological marvels with daily applications—transcontinental rail travel, steel-construction buildings, electric power and lighting, telephony, mechanically produced food and clothing, ever-cheaper print—could be invoked as justifying the costs of economic concentration. Second, increasing urbanization had brought large numbers of people physically closer to one another and to diverse goods, ideas, habits, lifestyles, occupations, and temptations. Historian Jackson Lears has described how such opportunities to reinvent one’s social identity combined with the transactional logic of the market and with longstanding Protestant ideas about one’s state of grace manifesting itself through personal industry and its visible fruits to encourage an intensely individualistic ethos, grounded in a “psychology of scarcity” by which each expenditure of thought, energy, and resources in one direction depleted the reserve available for other ends.3 In short, there was a psychological pressure to separate private and public ends and to prioritize the former. This is not to say that everyone—or even most people—consciously made such calculations, or that there were no countervailing trends. There were, however, major efforts, academic and popular, to justify the self-centered pursuit of economic gain (and, for those of small means, social advancement), including not just selective readings of Adam Smith but also dubious applications of evolutionary theory. In the pseudo-Smithian account, private goods get amalgamated into public goods by the invisible hand. 
In the social evolutionary account, the human race is improved by relentless competition among individuals and classes that weeds out inefficient habits, practices, and people over time. As sociologist William Graham Sumner explained in a famous essay on “What Social Classes Owe to Each Other,” the answer was: Nothing.
DSW: Very interesting! I can see that this just scratches the surface of a very complex social history, but it suffices for the purposes of our conversation. Against this background, how did the musings of a tiny group of intellectuals become so influential?
TT: I think the best answer is that they were good ideas. To be sure, James was an evocative writer and captivating public lecturer, but Dewey was never accused of being either. Both were apparently inspiring teachers, and from the 1880s through the early 1900s they directly taught or mentored several of the young men and women who would shape the progressive movement to reform the economy, society, and politics of the Gilded Age—among them Jane Addams, W. E. B. Du Bois, Max Eastman, and Walter Lippmann.4 Professorships at Harvard (James) and Chicago and Columbia (Dewey) obviously provided a platform, at a time when elite universities were even more dominant and when both the academy and the publishing industry were more interested and adept in the broad dissemination of sophisticated philosophical, scientific, social, and political ideas than they are today. But again, they were good ideas. The world around you is in flux? You’re anxious to control your destiny but yearn for the aid, comforts, and genuine recognition of a community? You wonder at the marvels of modern science but tremble at its implications of a world without God, or indeed any moral character or ideal purpose? James and Dewey sought to reconcile—or rather, hold in fruitful tension—the notions of unity and pluralism, change and continuity, free will and determinism, science and religion, self and society, real and ideal, and to show other people how their lives would be better if they learned to do the same.
DSW: Great! Now we come to the main event of our conversation. How did the pragmatists come to realize that whole systems must be the target of selection, such as James’s vision of an ethical republic and Woodrow Wilson’s vision of multi-national governance? How did they avoid the error of invisible-hand thinking, which supposes that lower-level competition robustly benefits the common good?
TT: They were simultaneously good empiricists and good moralists. They saw evidence everywhere that lower-level competition in the absence of effective regulation—whether moral, social, or political—was favoring certain activities that actually suppressed lower-level competition and the lessons and innovations it could generate. Louis Brandeis, an admirer of James and a major architect of Wilson’s first-term economic reforms, was particularly persuasive on this point. Of course, others could argue that radical economic and social inequality was itself a systemic good, or a product of system optimization because it meant the weeding out of less fit individuals and less individually empowering practices. James and Dewey, however, insisted that such arguments were normative rather than empirical, and were not afraid to make the very different normative argument that general flourishing across human systems was superior to concentrated economic and even physiological gains at the expense of large numbers of other individuals. At the same time, James, and more extensively Dewey and Wilson, made persuasive cases that self-aggrandizing behavior was ultimately injurious and even perilous to the aggrandizers. For one thing, it limited experimentation and thus stymied the development of knowledge, culture, and the aggrandizers’ own psychological, intellectual, and moral development. For another, by excluding the perspectives and experiences of so many millions of thinking and feeling individuals from broadly significant decisions, the powerful blinded themselves to the consequences of their self-aggrandizing behavior—resulting in economic instability, social unrest, environmental degradation, and destructive wars.
In light of your question, it must be noted that neither James, nor Dewey, nor Wilson would have said that lower-level competition was bad for systemic health and growth. On the contrary, they thought it was critical. What was bad was lower-level competition that was completely unaccountable to the community as a whole for its consequences, or that effectively halted competition after the first few rounds were decided.
DSW: That is a point well taken. Competition oriented toward the common good is synonymous with positive cultural evolution and needs to take place as quickly as possible. Proceeding to the next major ingredient of an evolutionary process—variation—how did the pragmatists come to realize that an experimental approach to social change was called for? How did they avoid the error of thinking that centralized planning could do the job?
TT: James thought the entire universe was unfinished. He also thought that it was pluralistic—that central to its unfinishedness was the potential for its parts to be related in multiple different ways, depending in part on the actions (cognitive or motor) of the agents exploring it. One major conclusion he drew from this metaphysical attitude, which he called radical empiricism, was that no one person or group of people could fully understand the world or predict its response to change. He also insisted that even validated hypotheses about that response encouraged activities that could, immediately or over time, so alter conditions that new hypotheses, experiments, and practices might be warranted.5 And he thought all this applied to the material world and its natural “laws”. Imagine how much less regular and predictable he found the social world! Experiment was, therefore, necessary a) to test hypotheses before institutionalizing the practices they implied, and b) to refine and reform practices and institutions in light of their changing consequences in a changing world.
DSW: That is experimental to the core! Did the pragmatists realize that the replication of best practices was a challenge in its own right that required a systemic approach?
TT: Yes. They called the most important of those systemic approaches science and democracy, and by both terms, they meant something broader than we often do today. Neither, for the pragmatists, was reducible to a set of rigid rules for formulating and testing hypotheses or taking collective action. Rather, they both implied an individual and cultural commitment to practical (i.e., problem- or goal-focused) inquiry, experience-based argumentation, epistemic humility, and social verification of belief and its consequences. This last point is critical regarding James: His notorious will-to-believe doctrine was not a warrant to believe whatever one liked. It was a warrant to believe an idea that was controversial only so long as it helped solve a problem or achieve a goal and did not contradict better-founded and widely more useful beliefs. In short, it was a warrant to experiment with the replication and refinement of practices in a socially disciplined manner.
DSW: So, much as I posit in my introduction to this series, philosophical pragmatism converged upon the Third Way of Entrepreneurship by taking a systemic approach to selection, variation, and replication. Finally, to what extent can the pragmatist movement be called a success, in the sense of actually causing positive social change at a large scale, and to what extent can it be called a failure? After all, the title of your biography of Wilson is Power Without Victory.
TT: We certainly seem these days to lack, in our intellectual and political culture, the mix of bold yet humble thinking James (and Wilson, at least in his best moments) endorsed. Many academic disciplines—including the most proudly empirical, namely the physical sciences—have become narrow discourses, impermeable to lay input regarding the relevance and consequences of their purposes and the findings those purposes make possible in lieu of unknown others. In our politics…well, it goes beyond “fake news”—both the real fake news and the fake fake news with which we are inundated. As politics has become more technocratic yet also more populist, citizens have become consumers of policy and clients of policymakers rather than deliberative, active co-creators of public life. That narrows the intellectual horizons of politics substantially. Meanwhile, the kindred cults of constitutional originalism and textualism have further hampered sustained inquiry into the consequences of many social and political practices with dire consequences for large numbers of people. Finally, market metaphors—lent cultural authority by economists’ adoption of the positivist attitude still prevalent in the physical sciences—have so permeated our culture that nearly every question of common concern is subject to the preference test: What would be best for me, right now, from my sovereign perspective? In short, we’re back to the Gilded Age in many ways.
The title of my book on the Wilson years is meant to evoke the concerns just expressed. Power—when construed as power over or against others—does not necessarily yield victory in the sense of achieving one’s long-term, highest, most sustainable goals. As I have often reiterated, the United States emerged the most economically and (for a time) militarily powerful nation in the world after the First World War, but in rejecting membership in the League of Nations acted to shore up its sovereignty at the cost of a stable, sustainable international order that might have preserved its economic and national security. It chose a unilateralist form of political entrepreneurship that ignored the larger ecosystem of interests, needs, opportunities, and hazards. Two decades later (having endured an economic catastrophe under a series of laissez-faire administrations), the unilateralist diplomatic gambit contributed to a Second World War resulting in a massive loss of American lives.
I don’t think pragmatist progressivism and internationalism failed entirely. Many conservatives, moderates, liberals, and progressives today agree on the value of certain systemic processes, goods, and goals deemed radical in James’s day, but established and secured in the three decades after his death in 1910. The market-regulating functions of the Federal Reserve, the producer and consumer protections of the Federal Trade Commission and similar agencies, federal lending to farmers and others engaged in economic activities deemed nationally important, and the reliance for national defense on a federally controlled army rather than a congeries of state militias; all these date from Wilson’s administration. Social Security and the axiom that the United States must play an active role in world affairs are two major legacies of the (Franklin) Roosevelt years that former Wilsonians and others directly and indirectly influenced by pragmatist social thought helped create.
Sadly, all of these pragmatist legacies have, in my opinion, developed in directions that James, Dewey, and their intellectual fellows would find problematic. As I argued some years ago in Political Science Quarterly, the pragmatist contention that policymakers should ultimately be accountable to national public opinion has triumphed. But that triumph has coincided with increasingly low participation in local politics; the rise of direct primaries and caucuses wedding presidential candidates and other party leaders to a range of uncoordinated and often incommensurable goals; the demise of the mediating institutions and free spaces that used to nourish civic agency and local democracy; and an increasing focus on the relation of the individual, or individual interest group, to the state itself, as opposed to the fellow citizens it represents. I think the pragmatists would lament the hegemony of a political discourse in which their ethic of mutual obligation is rarely echoed, while contextually justified supports or privileges are reinterpreted as fundamental rights upon which the changing needs of fellow citizens can never infringe. In sum, despite many institutional legacies, the political culture that the pragmatist progressives promoted failed to take root. Without a broadly informed, politically empowered, prudently skeptical, yet civically-minded public willing to engage in tolerant debate and productive collaboration across differences, a strong state lacks accountability and exacerbates division and inequality. Today, when many Americans seem eager for a “third way” between laissez-faire and technocracy yet lack the shared sense of mutual obligation and commitment to experimentation required to find it, the limited success of the pragmatist movement, in the face of powerful resistance and cultural inertia, does not strike me as an indictment of its proponents’ reason or judgment. 
Instead, it strikes me as a reproach and, more important, an inspiration to the generations that followed them, including our own.
DSW: This is a great contribution to the “Third Way” series. Thank you so much for sharing your time and expertise!
TT: Thank you so much for the opportunity, David.
1. James T. Kloppenberg, Uncertain Victory: Social Democracy and Progressivism in European and American Thought, 1870–1920 (New York: Oxford University Press, 1986).
2. Trygve Throntveit, William James and the Quest for an Ethical Republic (New York: Palgrave, 2014), chapter 5; Trygve Throntveit, Power without Victory: Woodrow Wilson and the American Internationalist Experiment (Chicago: University of Chicago Press, 2017), esp. chapter 1.
3. Jackson Lears, Rebirth of a Nation: The Making of Modern America, 1877–1920 (New York: Harper, 2009), 64–71.
4. Throntveit, Power without Victory, chapter 1.
5. On this point—anticipating James Mark Baldwin’s similar arguments, now known among evolutionists as the “Baldwin effect”—see William James, “Great Men and Their Environment,” and “The Importance of Individuals,” in William James, The Will to Believe, and Other Essays in Popular Philosophy (New York: Longmans, Green, 1897), 216–254, 255–262. Despite James’s sexist and culturally insensitive language in both essays, his arguments remain stimulating.