The Tragic Tradeoff Between Ideological Negotiation and Systematic Programming
A system people must believe in for it to work is not so much a system as it is an ideology, but a system that doesn't need us to work doesn't need us.
Daniel Fraga and Owen Cox at Technosocial recently invited Brendan Graham Dempsey of Metamodern Spirituality to discuss Wagner, Nietzsche, Metamodernism, and the Dark Renaissance. I enjoyed the discussion and suggest it:
The elaborations on Wagner and Nietzsche were very engaging, and it was nice to hear allusions to “The Grand Inquisitor” and Dante. Rightly or wrongly, I’ve heard it argued that the tortures of Dante’s Inferno were precisely a result of God’s Grace trying to reach people who hated and rejected that Grace; similarly, Ivan Karamazov is compelled to reject God’s gift of free will precisely because a God exists from whom Ivan can derive an ethic which makes the suffering of children evil—a paradox that has driven Ivan mad. In this way, the good and the bad are profoundly mixed: the torments of Dante are perhaps rejected graces, and the desire to reject God’s existence is evidence of God. How exactly they are unmixed is up to us: the demons in Milton put it well regarding the power of the mind to make heaven something else…
According to Christian theology, God is everywhere, and everything that exists is thanks to God. God is good, and since God created everything, everything must be good, but how then can Hell exist? Well, some theologians argue that Hell results from people using their free will to hate God, which means the Glory of God becomes a torture to them. God is thus Hell to those who hate him, and Heaven to those who love him. Separation from God could cause this agony, or even the direct experiencing of a God who “The Lost” want nothing to do with, for this could reflect back on “The Lost” the reality that they want nothing to do with “the best of all possible beings,” which suggests “The Lost” are “the worst of all possible beings”—a realization which causes psychological and existential torment that could increase hatred of God, causing an ever-worsening situation.
It’s hard to say what is the case, but the point is that just because something is good doesn’t mean that goodness will cause us to be good. In fact, if there is “something in us” that decides to hate or reject the good, the very goodness of that thing could worsen our torment and increase our rage. Even more problematically, if there is “something in us” (or in some of us) which naturally dislikes “the good” or seeks to destroy it (out of envy, jealousy, wanting to escape comparison with it, etc.), then there is “something in us” which increases the likelihood that we reject and attack “the good” in proportion to how good it is. If this is the case, then the more something approaches perfection, the higher the likelihood it is destroyed. Keep in mind that an entire socioeconomic order can be overturned or destroyed by a radical minority: if 10% of a country were to hate “the good” of that country, that would be millions upon millions of people.
Following these ideas, the more perfect a society is, the more it can torture people who fail to meet its standards, while a terrible society can force us to rise to its occasion. This is an unfortunate complexity that makes all efforts to design “a better world” much harder, which brings us to the topic of Game B, which has been a topic of great interest and debate. As I understand it, this “wrinkle” I have described is a concern of “The Dark Renaissance.”
For thoughts on this topic, please see:
Now, Mr. Dempsey associates more with Metamodernity than Game B, and so please understand that my points here may not apply to his work—I couldn’t help but be inspired by the Technosocial discussion, though, to think more on Game B. For Mr. Dempsey’s work, please visit here:
To return to the main line of thought, paradoxically, the “goodness” and even “perfection” of a society might be precisely why it brings out the worst in us, whereas a horrible society could offer us opportunities for sainthood. Isn’t that strange? Now, I’m not advocating we design a terrible society on purpose: my point is only that just because we design a perfect society, it does not follow from this that the society will necessarily be the best society. The perfection of the society could be precisely why it is horrible, based on how the people within that society act and relate to it. Unfortunately, the more beautiful and excellent the sandcastle, the greater the temptation can be to kick it. Perfection can bring out the worst in us. Is this why we like to kill gods, as perhaps suggested by René Girard? Hard to say.
What I have described here does not describe everyone (and perhaps only describes a minority), and there are indeed some people who “develop” in a manner to overcome the tendency to hate the glorious (say according to “mimetic desire,” envy, insecurity, etc.). This is a good and noble change, but will the majority so develop? Indeed, that has always been the dream, but it is a hard dream to make true. Small societies and tribes can arguably accomplish it, and certainly some individuals “develop” beyond “human nature” (as brought up in the Technosocial talk). Unfortunately, the problem is “scale”: when a society stays small enough and niche enough, all can be well, but as it grows, the reality of “human pathos” becomes harder to avoid. I will not elaborate on it here, but here is a graphic from “The Four Arrows” which might prove relevant (please see the piece for more):
As discussed in “The Four Arrows,” I have no doubt Game B can work on a small enough scale with people who believe in Game B: the question is if Game B can work for people who think Game B is foolish. A system is only as effective as its capacity to work when people don’t believe in it, and the greater the scale, the higher the likelihood it will incorporate people who dislike it or don’t “get it.” Ultimately, an effective system is one that works even when people want to destroy it.
As discussed throughout “Belonging Again” by O.G. Rose, The Triumph of the Therapeutic by Philip Rieff is an important book, and Mr. Rieff warns that if a society fails to develop an effective system of “givens” and “releases” (Nos and Yeses, per se), then the social order will fail to direct human nature in productive and constructive ways. It is this system of “symbolic boundaries” that all successful social orders must somehow create (along with a managerial solution to Hayek’s “Knowledge Problem,” but that is another topic), and it is very hard to accomplish this task. No social order (at least none that I know of) that doesn’t take seriously the foundations of sex, death, and birth succeeds at establishing this system of “symbolic boundaries,” for the “symbolic boundaries” must be assented to by everyone, and basically everyone designs their lives profoundly around sex, death, and birth (or around avoiding them). Taking these three seriously is not simply a matter of taking seriously “the dark side” of man, but instead “the hard side of life,” per se, “The Real” which makes us irritable, possessive, grouchy, angry, and lonely. This is what I understand to be the thrust of the Dark Renaissance critique (though I might be wrong), which is to say there is doubt about whether Game B will work when it’s full of people who feel like Game B is a bad idea. For Game B to work at scale, it would seem that it must benefit people who hate it. A system people must believe in so that it works is not so much a system as it is an ideology. Perhaps a good ideology, but an ideology all the same.
To make a distinction in this paper that I cannot promise I will maintain throughout all my work, an ideology needs people to believe in it for it to work, while a system works regardless of what people think. Yes, systems incubate ideologies as ideologies inspire the construction of systems, so there is certainly overlap and transitioning between the two, but generally something which is “more ideological” than “systematic” is something with more “gaps” which must be filled according to a “human element.” If there is a shovel and a pile of dirt, for that dirt to be moved into a hole, a person must be convinced to pick up the shovel and do the work—this would be “more ideological than systematic.” If, however, there were a robot which automatically moved the dirt into a hole, I would simply need to program the robot accordingly—this would be “more systematic than ideological.” Both of these can get the job done, and, as we will explore, there is a danger in relying on the robot (or “system”), but if I am going to rely on a human to “fill the gap” between the reality of the pile of dirt and the need to fill a hole, I will have to somehow convince the human to do the work, and that means possibly facing and dealing with the emotional, psychological, “pathos,” and spiritual realities of that person. We could call this “negotiating,” which means that where there is ideology, there must be negotiating, whereas where there are systems, there must be programming (though of course every society is a mixture of the two).
Generally, a concern of the Dark Renaissance seems to be that Game B requires “negotiating” to work, but given the realities of “pathos,” the likelihood that negotiating is effective drops the greater the scale and size of the society (generally, it seems that only “the pricing mechanism” has figured out how to deal with this problem, but that is Game A). Though it’s not always easy, it’s “more likely” that negotiations between people who all believe in Game B will work out than, say, negotiations between millions of people who have never heard of Game B. If ultimately Game B wants to scale, it would seem that a “system” would be required that will work if “programmed right,” for there are simply limits to what “negotiations” can accomplish (without ultimately resorting to force, power, shaming, and the like). This is what I mean when I say that Game B will require a replacement for “the pricing mechanism” (as discussed in “The Four Arrows”) if Game B indeed wants to transcend the mechanisms of “competition” which define Game A. This is because it seems to me that “the pricing system” requires competition, though again I’m happy to entertain possible means of reform (possibilities of which are described in “Notes on the Dark Renaissance”).
To visually depict the problem and the mentioned “trade-off,” consider the following:
The smaller the scale, the more the society can operate efficiently with just ideology and negotiation, and the more it will be “human.” People will know one another; people will engage “face to face”; people will have names versus be numbers. From experience, I can attest to how wonderful this way of life can be (which I think can be associated with Game B); at the same time, it is more contingent upon “everyone doing what they are supposed to do” and acting as they need to act. A single person in a neighborhood who acts poorly can ruin the entire community, but it’s also the case that this “bad egg” can be removed from the community rather easily without requiring systemic overhaul (assuming the “ideological unity” is maintained)—though please note that this also means “scapegoating” might be easier (it’s always tradeoffs). Please note that by “ideology” in this piece I don’t mean something like what is found in Marx, but rather something more like “a schema of ideas” according to which people organize their lives (“religion” is another word that comes to mind, but I will not use that here).
The more a society scales, the more it will need and require a “system” to hold together, simply because the social order will incorporate more people who don’t believe in the system, who actively oppose it, and the like. Everyone in the society will likely not share the ideology of everyone else, and it will not be possible for the social order to hold together via “negotiation.” That will require more systemizing and a more intentional “design” of how economics and social activities manifest, say according to Capitalism, Socialism, or something we are yet to consider. On a small enough scale, people might not even need money: a very basic and informal barter system could work. Thus, the need for a “formal system” is relative to the size of the society; the smaller the scale, the more the social order will organize efficiently enough following ideology/negotiations (perhaps we could even say “the communicative rationality” of Jürgen Habermas, but I’m not sure).
A “system” does not need people to get along, “develop,” or share in similar ideologies: all a system needs is the right programming (like a robot). For a society to scale, it must be increasingly systemized, and not all systems are equally effective: history is full of systems which have failed. “The pricing mechanism,” as discussed by Friedrich Hayek, seems uniquely efficient at organizing a social order in a manner which the people “generally accept” (“universal acceptance” is impossible), and the work of Deirdre McCloskey gives us good reason to believe that “free markets” have uniquely enriched the world. Mr. Jim Rutt also alluded to this point in his Parallax interview when he said that Game A helped millions of people climb out of poverty, a point with which I agree.
But now we must begin to acknowledge the trouble with Game A, and why Game B is justified (I think at least) in discussing its problems. The benefit of “systems” is that they are not contingent on the human element to work, but the danger of systems is precisely that they are not contingent on the human element to work. It is similar to the current debate and concern with A.I., the Singularity, and so on: once we program a system to do x, it will do x, even if we beg it to stop. If we tell a self-driving car to “take us to the airport,” it might drive through a neighbor’s yard and race there at a hundred miles per hour: when we command a robot to do something or “program a system,” we might not think of all the “clarifications” we should include with our initial statement. As a result, we might end up with a lot of “unintended consequences,” some of which might prove dangerous and fatal. Likewise, when we design the incentives of Capitalism to “make everyone rich,” it might process nature into energy which everyone can use and sell, dooming us to the horrors of Global Warming. And if we make this mistake regarding Capitalism and realize we messed up, we can’t easily “change the programming of the system” back to something more benevolent. The machine will be running, and it will not stop easily.
Perhaps we can lay out the main point as follows:
1. The greater the scale, the more a “formal system” will be needed. (By definition, “the supply chain,” which is necessary for technology like the internet, heating, and the like, requires a “formal system.”)
2. The greater “the formal system,” the more it will run “on its own.”
3. The more a system runs “on its own,” the more it will not need people.
4. An efficient system at scale doesn’t need people, and thus it could potentially turn against people efficiently. (The dangers of Capitalism could be like the dangers of A.I.)
5. We will not be able to easily stop the system.
Game B is justified to be concerned, but I fear avoiding this “Five Step Problem” by “replacing Game A” would possibly entail getting rid of the supply chain, which would mean forgoing technological advancements like the internet. Why? Because the internet requires laptops, which require all the parts in a laptop, which require countless rare earth materials, which require convincing people to work the mines, which then requires trucks to transport the raw materials, which requires convincing people to work all the jobs necessary to make a truck possible, which requires food so that the workers have the energy to work, which requires roads and housing for the waiters at the restaurant—on and on (“nobody knows how to make a pencil,” as the old saying goes).
A problem, yes? The Game A system which makes possible computers, heating, running water, and the like may now be going “too far,” as Game B warns, and contributing to the collapse of the human world (not just environmentally but also psychologically and existentially), and there doesn’t seem to be any ready way to “change the programming” or “stop it.” What must we do? Well, it seems like we need “to replace” Game A with Game B, with something new entirely, but I’m not sure if that is desirable or possible (for we would lose the supply chain, as far as I can tell). Ideally, we could perhaps somehow live like the Game B tribes in the wilderness without giving up the internet, but keeping the internet requires keeping the supply chain, and that requires Game A. For this reason, I personally am more of a “Reformationatist” (at this point at least) who indeed stresses A/B thinking (as Dr. Cadell Last writes on), with particular ideas for “Reformation” spread across the work of O.G. Rose (such as those listed here). I also believe in the need to “change our culture” to one which cultivates “aesthetic sensibilities,” which is to say we incubate “intrinsic motivation,” a topic explored in The Fate of Beauty and later works. I also think NFT technology might play a role, as described at Intrinsic Research Co. So, again, I am not against the Game A/B discussion—I just want it to be A/B.
In closing, following Christianity, the identity of Jesus as divine was confirmed precisely when people killed him. Rightly or wrongly, the concern with Game B, following Dark Renaissance, seems to be that the moment people reject Game B will not be the moment which confirms its greatness, but instead the moment which confirms its folly. The “pricing mechanism,” as explored in Hayek, works even if people think it is foolish, and people who hate fossil fuels end up using fossil fuels to attend conferences about the dangers of it. Game A, in this way, functions even if people dislike it, which in one way is awful, because that means it is very hard to stop when Game A becomes “cancerous” (as Alex Ebert discusses), but in another way is great, for it means Game A can “handle” the problem of human pathos. Game B must be able to do the same (“tragically”), or we must instead “reform” Game A versus replace it with Game B. For me, “Reformation” seems like the only viable option.
Is more freedom found where a society relies on a system or where a society is organized by ideology and negotiation? It’s hard to say, and a lot of it depends on what we define as “freedom.” Still, I think it’s fair to say that the Dark Renaissance and Game B are equally committed to “human flourishing,” which entails an increase in “human freedom” (which, do note, can sometimes be terrifying). Bound by this unified vision, I don’t see why the discussion cannot bear much fruit and benefit us all.
(For a discussion on what was covered in this piece, please see this conversation with Kevin Orosz, “Game A VS Game B, Crisis with Capitalism, & The Dark Renaissance.”)