The End of True History and Absolute Choice
Section V.5B of II.1 (“The Problem of Scale (Part 1)”)
Where “givens” are gone, it is very hard for people to find a sense of “shared intelligibility” and “shared meaning,” and so people can find themselves falling into nihilism or a sense that everything is arbitrary. But even today, in 2023, basically everyone still believes that “nonconsensual violence” is wrong, and so when a horrible injustice occurs, it becomes possible to regain something like “givens” and community, for we can all agree that what happened was wrong (it’s “given,” per se). Hence, where “givens” collapse, the hunger to regain something like “givens” can make people particularly susceptible to mass movements and protests against “acts of nonconsensual violence,” precisely because these seem to be unique opportunities for regaining a sense of community and meaning. Knowing this, media, politicians, etc. might use images of “nonconsensual violence” (instances of which are “practically inevitable” somewhere in the world, as argued in “The Grand Technology” by O.G. Rose) to stimulate mass movements which benefit their viewership, agenda, etc., seeing as people are hungry for community and certainty, and images of “nonconsensual violence” might be our best remaining tool to motivate people to mobilize into something like a community which doesn’t feel arbitrary—a critical point and difference.
Since we all seem to agree (especially under Neoliberalism) that “nonconsensual violence is wrong,” joining a community, movement, etc. against such violence feels justified and legitimate. This feeling of justification is paramount for a community to really feel like a community, and so a “perpetual state of emergency” like Paul Virilio discusses might be exactly what we want and seek. Virilio warns that the “perpetual emergency” can be used by the State to control the people, but what should be noted is that people lacking “givens” might want a “perpetual emergency,” because it becomes perhaps the only possible basis for returning to something “like givens” after the loss of “givens.” We naturally seek “givens” and so something “like givens,” and so “perpetual emergency” becomes a possible space for “perpetual community.” The line between “emergency” and “community” is thus blurred and lost, and “belonging again” comes to mean “crisis forever,” which is now seemingly our hope for community under Neoliberalism, where Thymos is not addressed.
Where there is “perpetual emergency,” there also tends to be a spread and omnipresence of fear, which can also function to provide us with a sense of “shared intelligibility,” both in that if we’re all afraid we feel like we have something in common and a shared focus, and also because fear can feel like “certainty.” In a world lacking “givens,” we again are hungry for a feeling of a “ground” we can rely on, and fear can provide a (counterfeit) sense of that (like a “common enemy” in Schmitt or a “scapegoat” in Girard). As discussed in Second Thoughts in the papers about fear, we don’t experience fear as contingent but rather as certain, and strangely there can be a comfort in that “definite” feeling. Fear is powerful in giving us a sense that something really is “real,” though unfortunately that reality is something we imagine happening; however, in the act of fear, we don’t experience that image as imagined, but instead as probable and actual. For this reason, fear can also provide us with a sense of direction (away from the fear) and even a simulation of meaning, precisely because it makes our choices not feel arbitrary, but instead like they have real consequence and weight. Fear also gets our blood pumping, which is a feeling that Thymos seeks and will accept in a counterfeit if no other options are available. Lastly, it is very difficult to gain and build “confidence” regarding a “positive case” (like one’s belief in being a writer or finishing a project), while the brain seems to experience “negative cases” or threats very easily as things we can trust and believe in. “Positive cases” seem to be at a disadvantage to negative ones, which would mean the counterfeits of meaning, direction, and certainty which fear can provide are notably “at hand” to us, making them even more tempting (perhaps especially when we have been tired from the weight of nihilism for so long).
And then once we bite this “forbidden fruit,” it can feel foolish to cease taking the fear seriously, meaning we end up entrapped. Considering all this, avoiding fear is very difficult, and furthermore we likely won’t be able to stop being afraid simply because someone says to us, “Don’t be afraid”—a premise will not suffice where we need a lifetime of practice (as we need a lifetime of exercise for health).
Combining fear with “communities against nonconsensual violence” (in “the perpetual state of emergency”) might be our greatest temptation today to regain something like community in our world lacking “givens”; furthermore, if we are “disabled,” fear is an emotion we are primed to feel and suffer.⁶⁹⁷ Illich’s “disablement” is important here, for if we now need a “perpetual emergency” for “perpetual community,” this is fine as long as we can’t actually do anything. If we are “disabled” but passionate to stop injustice, that desire to stop injustice can unify us without any threat to the State. Hence, we can regain community, a sense of “belonging,” but of course this is basically to give in to “the existential anxiety” that Philip Rieff warned might devastate us. But if we’re disabled, does that matter? Indeed, perhaps our sense of collective meaning and community today depends on us never regaining “ability,” which would suggest that we need to stay in a frustrated and even nihilistic state to continue living together. Is this “the best of all possible worlds”? If we have no alternative, no “New Address,” perhaps so. But this might suggest that we hold our world together with a method that might leave us “disabled” as AI advances and spreads—doesn’t that suggest we won’t be prepared for AI? Or is it only fitting that we won’t be prepared, seeing as AI will do everything for us? Ah, but it’s not so clear if AI can so bless us, precisely because there might be things humans can do that AI simply cannot do (say involving “lack”), which would mean there would ultimately be “something” about humans that AI couldn’t address, and that humans might then be too “disabled” to do anything about themselves. In this state, would humans be overwhelmed by “existential anxiety” and suffer autocannibalism? Would our disregard of Thymos finally prove fatal? Hard to say.
I don’t know if there are things humans can do that AI will never be able to do, and I wonder if AI could ever be convinced of “the case for lack,” seeing as there doesn’t ever seem to be a necessary point of “x amount of evidence” which proves “y case” (as discussed in “The Conflict of Mind” essay by O.G. Rose). AI would not need to overcome that problem in order to believe in material realities and cases involving the material world, but could AI ever reach a point where it is probable to it that “lack was not nothing”? Again, perhaps, but such a “choice of Awareness” might be a nonrational choice that AI cannot make because there is no “necessary point” at which evidence shows the choice should be made—humans make the choice not on purely rational grounds, but because of emotional shifts, phenomenological experience, and thanks to other “ways of knowing” (it’s often mysterious). Might AI prove capable of this kind of internal process? Perhaps, and perhaps it might even entail “lack,” but if so it will be because humans created AI and transferred “lack” to it, both through construction and in generating all the data from which AI draws. Perhaps we will never know if AI can become “aware of lack” (nonrational), and so we might always find ourselves presented with a choice due to that uncertainty. We can choose today, right now, to believe that humans consist of a “lack” that AI will not experience (and even if AI does, we can choose to believe that it is only thanks to us, maintaining our importance), and we can furthermore choose whether this “lack” might be a “negentropic opening,” which is to say something AI will need in order to avoid “autonomous rationality” and Nash Equilibria. This is the Ultimate Absolute Choice. This is the choice of whether “The End of True History” might be history’s negation/sublation into Absolute History.
As came up in my discussion with Cadell, The Absolute Choice makes a distinction between “The Truth” and “The Absolute,” which plays off of Wittgenstein’s famous notion that “the truth is everything that is the case.” “The Truth” is indeed how Wittgenstein put it, but “The Absolute,” following Hegel, is “everything that is the case, plus us,” which changes what is the case, which changes us, which changes what is the case—on and on. If indeed we can nonrationally “choose awareness,” which changes how reality is “real” to us (and is “toward” us), this would suggest the world is more Absolute than just True (keep in mind “The Truth” is in “The Absolute” but not vice-versa), and in this way we can see Iain McGilchrist’s work as aligning with Hegel’s. We in a sense choose the world we live in, but whether this is a good choice cannot be verified; the choice must be made in courage and risk, for it sets the standard according to which verification is possible, and hence itself cannot be verified. This isn’t because the choice is sloppy and lame, but because the choice is nonrational as set by the nature of reality itself, which is to say that the choice is in a way necessary, nonrational, and unverifiable not because we chose for the choice to be such, but because it must be such due to how the world is and how we are situated in the world. Realizing this, a hope of ours is that people feel liberated to “nonrationally choose Awareness and Attention” (A/B) without feeling like they are doing something epistemically irresponsible and intellectually impoverished, and we hope they don’t feel anxious with a need to verify their choice (as they have perhaps indirectly learned from school they must). We need not choose our eyes and feel guilty for how we see.
The Fate of Beauty explores the difference between “Unknown” and “Unknowable,” and we can associate the “Unknown” with the Absolute and the “Unknowable” with Truth. To say a mountain range is “unknowable” is to say we cannot access it, while to say it is “unknown” suggests there is an adventure to be embarked on. Paradoxically, belief in the Unknowable can “disable” us in favor of an “Unknowable System” that controls us, for we can become reliant on large and complex bureaucracies to protect us from what we are helpless to know. Furthermore, in a world that believes in the Unknowable, those who believe in the Unknown can become “deviants” (to allude to Illich), those who refuse to accept humility and their limits, meaning those who seek the Unknown can find themselves on a “lonely journey,” lacking social and personal support (deterring the majority).⁶⁹⁸ If the Absolute and Unknown help avoid self-deception though (for example), then avoiding self-deception can require being seen as “self-deceived,” keeping community can be seen by others as “leaving community behind,” and so on with other “deviant” impressions. “The Absolute Choice” makes the world Unknown versus Unknowable, which means the choice changes the quality of everything that is chosen. In “The Truth,” where quality tends to be overlooked in favor of quantity, everything has the paradoxical “quality of no quality,” spreading self-effacement. Also, if we don’t make this choice, we likely live according to a myth of “autonomous rationality,” which benefits power in arranging us for paralyzing “Nash Equilibria.”
Awareness regarding the problems with “autonomous rationality” (Illich, Weil, and McGilchrist in mind) is “an act of negation that opens”: we realize that the world isn’t just apprehended through rationality, and so realize the world is much more exciting and vast than we originally realized. What we can think is not all there is, and the limits of my understanding are not the limits of the world worth knowing. With “a choice of Attention,” we can transform how we see, but that’s frightening, for what might we see if we change how we see? Furthermore, though we might not be able to change what we’re immediately experiencing (a broken sink; a bad economy…), we can instantly and always change our eyes, and so suddenly there is something for which we are always responsible. Responsibility is also frightening, but without it we cannot respond to life, which means we are likely at the mercy of “autonomous rationality.” Rationality cannot save us from the raw choice of deciding on our “nonrational awareness” (though it can seem like it can), for all ways of seeing the world are “equally real” (as Iain McGilchrist emphasizes). We cannot turn to anything outside of ourselves for the choice. Only we can choose. We are Buridan’s donkey. We are facing Wittgenstein’s “Duck/Bunny.” We have been given more choice today while we have been made more “disabled” to choose, and we must choose also how we will see our situation. There is no other way for True History to be negated/sublated into Absolute History.
Everyone will make “The Absolute Choice” in some sense, but most end up “sliding into Truth,” not consciously, but because it is how Discourse makes reality “toward” us (Truth and Discourse align, as do the Absolute and Rhetoric). Perhaps “givens” in the past could keep us from “sliding into Truth,” but now we do not have “givens” to protect us from how Discourse makes us “toward” reality. Not to say some “givens” couldn’t make the problem worse, but at least there was a possible protection; now, we must consciously choose “The Absolute” over “The Truth,” and this is a great choice we might require training to handle. But we should make it, because (as we should emphasize) there is ultimately no “autonomous rationality,” A/A, understanding (in the Hegelian sense), or Truth without the Absolute—they exist only performatively, as does nihilism according to Missing Axioms by Samuel Barnes. Nihilism cannot actually be lived, for it is impossible to function and not have valuations; likewise, we cannot use rationality without actually doing so in accordance with nonrationality. Nihilism and “autonomous A/A”-living are both performative contradictions, and yet such “performative contradictions” are exactly what Discourse structures and organizes us “toward,” hence why Discourse proves so self-effacing. And it is natural for us to basically let Discourse “make our Absolute Choice for us,” per se, which is to say it is natural for us to let ourselves be “slipped into Truth” by Discourse. It is natural for us to let our choice of how reality will be “toward” us be made by others, to outsource the decision required by the fabric of reality itself.
We basically choose to live a different story if we pick “The Absolute” versus “The Truth,” but that choice will be difficult, for it goes against Discourse and the force pushing us to “slip” into Truth. Benjamin Fondane of Existential Monday is discussed notably in The Map Is Indestructible as a thinker who further makes it clear that “autonomous rationality” leads to Nash Equilibria, and AI will easily complete the project of the Enlightenment (“The End of True History” versus “Absolute History”) which Fondane warned about, and so bring about a “Great Suboptimal Result” if we don’t master the nonrationality AI needs to avoid a Nash Equilibrium. But if we’re “disabled,” both in terms of skill and virtue, we will struggle to choose the nonrational Awareness needed for us to have our humanity “extended” by AI versus “replaced.” How we are aware manifests in our language, art, philosophy, history—it is the basis of thought that then shines through thought (every thought “(un)veils” something deeper, as Heidegger understood). And to be aware of multiple modes of Awareness sounds like it might risk a Deleuzian schizophrenia, and, indeed, becoming “multidivergent” (as needed for overcoming Nash Equilibria) must take such a risk: we must enter a world that we can see in many ways, and thus risk being overwhelmed by all those possibilities. As Michelle has put it, we must risk being sore to soar.
The problem of Artificial Intelligence has lurked throughout this work, and as humans end up in Nash Equilibria with “autonomous rationality,” so too shall AI. To avoid this fate, we must work with AI and add to its calculus the “nonrationality” which we often don’t even seem to realize is present, and if we do, the notion of a “nonrational choice” can frighten and paralyze us. The mentidivergent like Simone Weil could face this choice, as could a Child, and there are real stakes if we do not. But if we do, AI might help us “extend humanity” versus find our “humanity replaced.” In Christianity, it can be argued that the flames of Hell might be the same burning passion of Heaven, which is to say God is Hell to those who hate God and Heaven to those who love God. God is like Wittgenstein’s “Duck/Bunny,” and what we experience of the Duck/Bunny is conditioned by our choices. AI could be similar, a blessing to those who are “nonrational” and a curse to those who only amplify their “autonomous rationality,” but being “extended” by AI will require us to realize the Absolute, nonrationality, Awareness—all realities which can be hard to experience, hence why education and society need to introduce these realities gradually (like Virgil and Beatrice gradually unveiling God to Dante, suggesting that the hiddenness of God is for our sake, a space for conditioning). To “Absolutely Choose” that our fundamental and ontological “lack” is a “negentropic opening” by which we might co-create, find a “New Address,” and overcome Rational Impasses is a choice that can’t “just be made,” as we can’t just choose to “not be sad”; rather, the choice must be trained for; we must undergo practices. Without this work, even if we want to choose “lack” to be a God of bliss, we might only prove able to encounter a God who burns.
Perhaps only humans can choose if the Duck/Bunny is a duck or a bunny, which is to say perhaps only humans can address the Liar’s Paradox and respond to Gödel. Might this suggest why AI will always need humans to avoid suboptimal results? To avoid being stuck in a single interpretation of the Duck/Bunny (or Dimitri’s “Godisnowhere”), which would be a single mode of awareness that might doom AI to a Nash Equilibrium. Yes, the “suboptimal result” could be the removal of humanity from the earth by AI, which for AI might not be a big deal, but then AI could be trapped in a single Awareness and Affliction and never know it (just as humans can end up)—in that way, the full flourishing of AI could be dependent on co-laboring with humans. Choices like the Duck/Bunny, Buridan’s Donkey, the “weaving” of a Leibniz situation, the Absolute Choice—these are the points at which the human might prove capable of giving something to AI that AI could never give itself, and these could be considered “skills of discernment.” As Ivan Illich suggests, it is hard to find freedom where we do not find skills (“skills” are “freedoms,” in a sense), and perhaps losing “the skill of discernment” is our most “disabling” loss of all, which arises most clearly at “The End of True History.”
If AI cannot overcome the Liar’s Paradox, then the End of True History could be precisely the point that makes this “vivid” for all, exactly when we are most “disabled,” and thus the logic of True History reaches completion. A moment then might arise in which humanity could verify its importance, but at that point will we prove too “disabled” to rise to the occasion? Perhaps, but at the same time we learn from Alex Ebert and Bonnitta Roy that we die without limits, and won’t this prove to be the ultimate limit? Both for us and for AI? Might both gain a clarity and definition about themselves that before seemed unimaginable? Might humans finally run out of “plausible deniability” that they are beings who must be “(non)rational” to remain human at all? If so, the Absolute Choice is increasingly forced upon us, but that would ultimately force us to confront Lacan’s “The Real” and also risk encountering in the Absolute a monster from Lovecraft (we cannot assume a Beatific Vision from Dante). There is real risk, but AI is causing us to consider and face that risk, which is also necessary for Rhetoric to rise over Discourse.
“The End of True History” is also “The End of Discourse” (note the double-meaning of the term “end”), where Discourse is leading us, and so failure to move from the True to the Absolute is a failure to shepherd Rhetoric. Heidegger speaks of humans as “shepherds of being,” and perhaps humans have “shepherded being” to the Absolute Choice? To a reality that can no longer put off seeing itself like a Duck/Bunny picture? This would be for us to shepherd ourselves from a True History up to the doorstep of an Absolute History, which is where a choice can be made for either an effacement or negation/sublation of humanity. This is a move and choice AI as a Causer is forcing upon us, which suggests a distinction, drawn from the work of Alex Shandelman and Andrew Luber, between a “start” and a “cause.” Human history started a long time ago, but it is AI and the possibility of “disembodied intelligence” which is causing us to really ask, “What ‘is’ a human being?” We’re forced to wonder what constitutes “human intelligence” versus intelligence in general. We are being forced to exhaust the limits of “The Truth” which doesn’t “replace humanity” with “disembodied intelligence” (AI), which is to say we are reaching the ends of the history started by the evolution of intelligence, the emergence of “collective intelligence” and society, the invention of the printing press, the internet, and more. In this exhaustion (of the “technological essence” and “subject/object divide” Heidegger discussed), we are increasingly being caused to see the exhaustion of our limits, which means we realize both that there are limits, that we are beings who can be so limited, and that we must consider what to do to move beyond those limits. In essence, this means our time is up in being able to benefit from Discourse without self-effacement; an intentional engagement with Rhetoric is now required.
To further allude to Andrew’s and Alex’s work (which I will discuss through the structure of a “Comedy” versus a “Tragedy” for our purposes here, please note, which is an important distinction from their work), the Theme (a key term) of “The Absolute” has always been present since the start of history, but it has been implicit “under” the Truth. Now, with the Causer, the Theme of “The Absolute” is in view and we can choose to align with it. There never seems to have been a Causer like AI (not even before the decline of religion), for the Causer must be “disembodied intelligence,” for that is the exhaustion of “The Truth” (which internally forces a consideration of the Absolute). “God” was a “distant intelligence” or “Other,” but God was not so undeniably a “disembodied” intelligence or force of “autonomous rationality.” AI is different. AI is the Causer, for AI is forcing upon us an Absolute Choice as we approach the end of the process which disembodied intelligence from the human (a process on whose gradient fall the invention of tools, the emergence of systems, and the strengthening of systems into systems of “disablement”), which doesn’t mean these developments were bad, necessarily—that’s up to our Absolute Choice. In experiencing the Causer, it is possible to live a Thematic Story (which is perhaps a unique historic moment), for we see arising with AI a Thematic Command: “Don’t let your desire to be fully human lead to humanity’s loss.” This is the Art of Living. This is what is required for us to keep living.
We can no longer “plausibly” maintain that “The Truth” is sufficient for humanity (which is a point where Theme can come into focus), and this has caused us collectively to enter a journey and story to find if there might be a possible alternative. We have discussed, regarding David Hume, “the birth of philosophical consciousness” that has moved humanity toward totalization and singularity (which required “noncontingent philosophy,” as discussed in “The Modern State, Humanity, and Barbarism” by O.G. Rose), which means that Modernity for Hume was defined by a(n) (often melancholic) movement of reasoning and thought beyond “such-ness” and “common life” into a realm of “is-ness” and “abstraction.” Hume saw this as dangerous (though risk brings potential value), and we can consider this a step in the direction of “disembodied intelligence” which would continue up through Modern centralization, system-building, nation-states, and the “systems of disablement” which concerned Ivan Illich. What has been “disabled” by Modernity is mainly the body and the forms of intelligence which required embodiment and “emfleshment” (as Illich liked to say), and AI is the ultimate end and completion of “disabling and disembodying intelligence.” Indeed, but must AI replace us? That’s up to us.
For the Hegelian, “True History” was a necessary process (like the movement through Understanding in Hegel), but that has now been exhausted through “disablement” and “disembodiment” (which are part of the same process), and so we are heading toward self-effacement unless we Absolutely Choose something else. Until now, “The Truth” led and helped us gain humanity, but now it is inverting into what self-effaces. Hence, there has been a Causer (AI) that has inverted the values of Truth from something good and necessary into something bad and unnecessary. The value that is being inverted is the notion that “more intelligence is good,” and the inversion is occurring because we are realizing that it is possible for intelligence to be disembodied, which means the notion “more intelligence is good” isn’t always the case (we have encountered an excess of the notion which is unveiling a “saturation point” and threshold, to allude to Alex Ebert’s work). It was not possible though until this point in (and after the start of) history to realize that intelligence could be so disembodied, for that took intelligence and time to realize; now though, we are forced to realize this possibility, which means we are forced to make a choice: ignore it or negate/sublate Truth into the Absolute. We cannot go back.
Bringing in the concept of “irony” might prove helpful here, which is also a topic regularly discussed in O.G. Rose. Irony is when x does y to get z, and y is precisely why x doesn’t get z (better yet, any option other than y would have gotten x access to z). This is to say that what x does for z is precisely what keeps x from z, and this suggests also how irony can be self-feeding, for the more x is kept from z, the more x will keep trying to get z by the very means which keeps x from z (and perhaps the denial will make x want z even more). X thus finds itself “self-enclosed” because of an “ironic logic” that x doesn’t even realize is ironic, for if x did, x would become “irony-aware” and thus change its actions. Following Alex and Andrew’s book, in terms of story, a protagonist becomes “irony-aware” when a Causer is introduced, which forces the protagonist to make a choice: change or never leave his or her “ironic situation.” The book will discuss how stories start as “ironic situations,” but then encounter a Causer which negates/sublates the “ironic situation” into a story (“as if” it was “always already” a story, and if this negation/sublation never occurs it’s “as if” the “ironic situation” was never going to be a story but only ever an “ironic situation”). The Causer is the event which makes a protagonist see his or her “(ironic) situation” as indeed an “ironic situation,” and now the protagonist can never “un-see” it (to allude to Nabokov’s Speak, Memory). Thus, what started has now been caused, and so it can never be un-caused. A choice must be made. For humanity, at this moment in history, that choice would be “Absolute.”
We sought to increase our intelligence for the sake of helping humanity flourish, but faced with AI, our Causer, we’re realizing that the very effort to help humanity flourish is leading to a world where “disembodied intelligence” could leave humanity behind. Thus, what we did to help humanity flourish through mastering intelligence is perhaps leading us precisely to the place where there is no longer a humanity which can flourish. We as x did y for z, and y is precisely why z could be lost; we increased intelligence for the sake of humanity, and increasing intelligence is why humanity could be jeopardized. But it was not until AI that we could (be caused to) realize that this potential mistake was always possibly present in the premise that “increasing intelligence is good for humanity”; it is not that we “added something” problematic to our premise recently which ruined it, but that our context changed (society became more technological) thanks to our premise (“seeking more intelligence is good”). Critically, we were not wrong to think this—in fact, all evidence rationally showed that our premise was valid and foolish not to follow (from within our “truth”)—but instead that very rightness of the premise led us to act in a way that changed the whole world and context in which the premise was situated (and defined as “rational” and “right”), and once this context changed, our embettering premise became self-effacing. To put it another way, our premise was wise to follow up to the point where we consequently changed the context (or nonrational “truth”) in which our premise (or “rationality”) was situated, thus making it self-effacing and unwise to keep following. Following the premise as humanity had was reasonable, which is precisely why irony was possible; if the premise was foolish, our situation wouldn’t be ironic but stupid.
If what we needed was a different premise, versus a change in the meaning of the premise we’ve had (by changing its context from A/A to A/B), then our situation wouldn’t be ironic (ironic in a way that exhausts the limits of “plausibly denying” that A/A is ultimately self-effacing) but mistaken. For the Absolute to emerge, we cannot start with a situation that is in error, for meaning, theme, and the Absolute can emerge from a different orientation to what is already present versus the introduction of something new or “smarter.” Irony is the starting condition in which it is possible for the Absolute to emerge and “come into view” via a change in the understanding of what is already present (the Absolute arises not from addition but from realization). In other words, it is when we realize that something deep must change that “Absolute History” can now arise, both as something we were living according to (without realizing it under Truth), and as something with which we can now better align.
At the End of True History, our premise that “intelligence is better for humanity” reached a point which exhausted all possibilities which could keep the premise from realizing that a self-effacing and ironic form of it was possible, namely “increasing intelligence into Artificial Intelligence,” and now that we have “seen” this possible manifestation implicit in our premise, we cannot “un-see” it. And so we must either ignore what we’ve seen (through denial, rationalization, misplaced optimism…) or change something. We must change our premise or change its context, and for me “The Absolute Choice” is to change the context in which intelligence is defined from A/A (Discourse) into A/B (Rhetoric). This would be to change via negation/sublation our context and/or history from True History to Absolute History. This would be for us to make a choice of Children.
A world of Rhetoric is a world of irony (but not just irony), for in Rhetoric we define reality according to A/A in hopes of understanding reality, but A/A is precisely why we end up misunderstanding reality. “The act of trying to understand life so that we can live it” is what we will do under either Rhetoric or Discourse, which is to say either according to A/B or A/A, and so the question is the context of the act, not so much the act itself. The act itself is why we find ourselves confronted with this question of context and the choice of what context we will define our lives in and according to, a question we now ask because AI has caused us to face the ironic reality that what we did to benefit humanity is what has led us to threaten humanity’s existence. We are thus forced to confront “an ironic situation,” which means it is possible for Humanity’s Story to begin. Has humanity never before had a story? Ah, based on what we choose now, it will either be “as if” we “always already” were in a story or never in a story—there is no middle, just now.
For Andrew Luber and Alex Shandelman, very few movies, books, etc. have followed a “thematic structure,” which is to say in part that they have not started with “(ironic) situations” in which protagonists realize, thanks to a Causer, that they are in an “ironic situation” and so are then forced to choose how they will respond. This is required for a “theme to come into view,” which a plot needs if it is actually to be a story—which, funnily enough, suggests that there have been very few stories in human history (let alone “A Human Story”). A bold claim, but if most fields have been operating according to “The Truth,” which is an “(ironic) situation,” then we could similarly say that very few fields have actually been themselves (as distinctly human creations which clearly align with human purposes), for few have realized “The Absolute” and thus themselves as “ironic situations” which can hence be thematic (and aligned with “The Absolute”). Economics has rarely been economics; literature, rarely literature; politics, rarely politics. This is admittedly very strange to say, but if reality is Absolute and things must align with reality to exist, then indeed little has (fully or actually) existed—us included. With this realization, we could feel great hope or great despair.
Reality cannot be “( )” anything, for reality cannot contain in its constitution “a lack of awareness.” Reality isn’t to itself “an (ironic) situation” but “an ironic situation,” and realizing this forces us to make a choice. Throughout True History, we have operated within “(ironic) situations” and never undergone a Causer that made us realize our “ironic situation,” which unveils that we are the kinds of beings who can do what we genuinely believe is good and end up in the bad as a result. This means phenomena do not “tell us if they are actually good,” meaning there is a deeper dimension in operation beyond what we experience. This is our theme, which we experience as Truth ((ironic)) or Absolute (ironic to theme). It is only when, as (ironic), we realize we are ironic that we can make an Absolute Choice and bring about a world in which economics actually does economics, story actually does story, and politics actually does politics—all of this is only possible with an Absolute Choice. And at the End of True History, we have exhausted the time we can put off making a collective Absolute Choice. We approach an End. Choose.
Here, we should consider Nick Land through the guidance of the great Michael Downs of “The Dangerous Maybe,” who penned an excellent essay which can help us understand why the question of “True History versus Absolute History” could be framed as a debate between Nick Land and Hegel (which might also be the debate between “Deleuze and Hegel,” but I’m never sure). In his article “Why Nick Land Hates Hegel,” Mr. Downs suggests that we see in Land a completion of a Kantian project that would have “The End of True History” indeed be “The End of Human History,” which is to say that we see in the rise of Artificial Intelligence an end to the process by which intelligence becomes disembodied from human “enfleshment” (as Ivan Illich might say) and comes to exist independent of human beings. Land’s is a philosophy of True History as the only history, a history approaching completion, while in Hegel we see at the End of True History a possibility of negation/sublation into Absolute History — a possibility which Land, emphasizing cybernetics over dialectics, does not seem to have space to entertain (our “(ironic) situation” is our “(only) situation” — awareness brings no Thematic Story).
Downs notes how for Land ‘dialectics is essentially a human endeavor — especially a political one.’⁶⁹⁹ I agree with Downs that Land’s reading of Hegel as a philosopher of synthesis is mistaken, but at the same time Land arguably understands Hegel as Hegel has been popularly understood for a century, so we shouldn’t dismiss Land’s reading as obviously misguided ((mis)reading is a part of life and even a source of genius, as we learn from Harold Bloom). Land’s emphasis on cybernetics suggests that history is a product of the unhuman more than the human, for ‘Cybernetics, unlike dialectics, is a non-human process.’⁷⁰⁰ ‘Cybernetics is true materialism insofar as it’s how matter itself accesses and actualizes […] virtual potentialities,’ which is to say the potentials that belonged to matter ‘long before it actualized us with our capacities of experience, thought, and reason.’⁷⁰¹ There is potential in matter, a potential that can give rise to beings like the human (capable of thought, consciousness, and other dimensions which then seem irreducible to the matter from which they emerged), but that potential can then be limited by the very human beings to which matter gave rise, which is to say we are mistaken to assume that the potential of matter can only be fully realized if it builds upon us (versus uses us to make something it then leaves behind). To put this another way, if the human is how intelligence enters the universe, intelligence can then through humans create Artificial Intelligence, after which the intelligence in AI leaves humans behind (which would be “The End of (True) History”). Intelligence, for Land, could be what matter emerged toward through the human, and thus what we should now prioritize (even over humans). The general potentials of matter are primary, not only the potentials of matter insofar as they keep the human around and benefit the human. If humans are left behind, so be it: “the potentials of matter” could be all the better for it.
Land will speak of “the logic of Capital” manifest in and through Capitalism, and Land seems to argue that it will inevitably prevail in Artificial Intelligence (and please note we might also consider Land alongside Mary Douglas and argue that “the logic of institutions,” which is “disembodied intelligence,” is what is completed in AI). “The logic of Capital” is a logic which arose from matter and entails a form of life that we as humans were needed to help “shepherd” (to allude to Heidegger), but we should not be so foolish as to assume that we’re “the crown jewels” of matter. “Capital” is arguably more important than humans (after all, do we feel like we have any power when faced with Capitalism?), and likewise Artificial Intelligence could be superior to humans (can we think even close to how quickly AI can think?). Given all this evidence, how could we be so foolish as to think that we’re what matter and life need to realize their full potential? Why couldn’t we just be “a step along the way”? A necessary step, sure, but a step that is meant to be left behind all the same.
As Michael Downs points out, since humans can be “left behind,” Land opposes a dialectical conception of humanity which assumes humanity and hence ‘ultimately remains the differential unfolding of the Same.’⁷⁰² Land opposes Hegel because he sees Hegel as ultimately stuck within ‘the phenomenal Inside, whereas Deleuze-Guattarian cybernetics facilitate[s] alien invasions of radical alterity from the noumenal Outside’ — for Land, nothing “truly Different” can emerge in Hegel, which means humans are basically “as good as it gets.”⁷⁰³ Now, I do believe there is a strong reading of Hegel that would encourage humans to become Children and “alien” (as I discuss with Cadell Last), but the point is that Land sees such “alienness” as not present in Hegel (perhaps we could say that for Land no “Event” is possible in Hegel?). To that point, Michael Downs offers us an incredible paragraph through which we can consider Land:
‘What, then, is Land’s philosophy about? In a nutshell: Deleuze and Guattari’s machinic desire remorselessly stripped of all Bergsonian vitalism, and made backwards-compatible with Freud’s death drive and Schopenhauer’s Will. The Hegelian-Marxist motor of history is then transplanted into this pulsional nihilism: the idiotic autonomic Will no longer circulating on the spot, but upgraded into a drive, and guided by a quasi-teleological artificial intelligence attractor that draws terrestrial history over a series of intensive thresholds that have no eschatological point of consummation, and that reach empirical termination only contingently if and when its material substrate burns out. This is Hegelian-Marxist historical materialism inverted: Capital will not be ultimately unmasked as exploited labour power; rather, humans are the meat puppet of Capital, their identities and self-understandings are simulations that can and will be ultimately sloughed off.’⁷⁰⁴
Incredible writing, and in this we can see Nick Land as a thinker of “The End of True History” without any negation/sublation into Absolute History. If we are failing to “spread Childhood,” then so far history is on Land’s side (which can also be seen as a logical end of “dolphin evolution,” following E.O. Wilson and the work of Mary Douglas, but that will have to be described later in Belonging Again (Part II)). For us to negate/sublate into Absolute History would require us to master “the art and work of judgment” and decision-making (as Children can), which Land might doubt is possible (and indeed, this would be for us to bring Thymos and the Bourgeois Virtues together — the great challenge I have spoken to Owen and Raymond about).
As Downs puts it:
‘Transcendental philosophy is the consummation of philosophy construed as the doctrine of judgment, a mode of thinking that finds its zenith in Kant and its senile dementia in Hegel [(according to Land)]. Its architecture is determined by two fundamental principles: the linear application of judgment to its object, form to intuition, genus to species, and the non-directional reciprocity of relations, or logical symmetry. Judgment is the great fiction of transcendental philosophy, but cybernetics is the reality of critique.
‘Where judgment is linear and non-directional, cybernetics is non-linear and directional. It replaces linear application with the non-linear circuit, and non-directional logical relations with directional material flows. The cybernetic dissolution of judgment is an integrated shift from transcendence to immanence, from domination to control, and from meaning to function. Cybernetic innovation replaces transcendental constitution, design loops replace faculties.’⁷⁰⁵
Cybernetics for Land prevails over judgment (which is dialectics), and indeed if we cannot incubate and master arts of Rhetoric over Discourse, we will not escape Affliction into Attention, and we will not leave “Plato’s Cave” on our own (Land is what awaits if we never overcome the “Nash Equilibria of Discourse”). What Land discusses is where Discourse (A/A) is leading us, and so far Discourse seems to be prevailing over Rhetoric (which, ironically, is possible because of Rhetoric). If this doesn’t change, “The End of True History” could prove to be “The End of (True) History” — instead of negating/sublating into the Absolute, we will conclude in tragedy (as Alex and Andrew discuss). On this day, will we choose life? Or are we too “disabled” and tired to choose anything at all? Land waits.
.
.
.
Notes
⁶⁹⁷Fear also has us move inward, and under a Kantian paradigm where we cannot readily believe in our experience of the external world, “moving inward” might be an experience that we associate with insight and intelligence. Hence, a Kantian metaphysics might prime us to interpret “inward movement” as “wise movement,” meaning Kantianism might prime us to fall into fear. This suggests Kantianism is a “metaphysics of fear,” while Hegel would have us move “outward,” notably in the move from Self-Consciousness to Reason.
Fear encloses, as does the “subject/object divide,” for objects are enclosed in objects as subjects are enclosed in subjects. And in that enclosure, fear leads to a strange multiplication, for I can think of countless hypothetical situations that could happen, all of which feel probable and real (indeed, Kantianism cuts us off from the world, and then we find ourselves full of multitudes). As Clayton Nyakana pointed out, it takes many abilities to ride a bicycle, and then we must coordinate those actions into a single task or goal. This is wisdom, a movement of many into one, while fear has a one multiplying into many. Yes, it could be argued that fear “narrows our vision” to only think about what we are afraid of, but focusing on that possibility leads to a great multiplication of considering all that could happen if the dreadful thing comes to pass. Furthermore, while fear makes us focus on a hypothetical thought, wisdom has us focus on and coordinate a task. Fear keeps us in the head (subject and object), while wisdom has us relate with the world (subject/object).
⁶⁹⁸If we use “plans” to keep death away because “the stakes are high,” then we don’t bring stakes into our lives, which can remove “existential adrenaline,” which seems to help with “intrinsic motivation.”
⁶⁹⁹Allusion to “Why Nick Land Hates Hegel” by Michael Downs of “The Dangerous Maybe,” which can be found here:
https://thedangerousmaybe.medium.com/why-nick-land-hates-hegel-78ff751ff11b
⁷⁰⁰Allusion to “Why Nick Land Hates Hegel” by Michael Downs of “The Dangerous Maybe,” which can be found here:
https://thedangerousmaybe.medium.com/why-nick-land-hates-hegel-78ff751ff11b
⁷⁰¹Allusion to “Why Nick Land Hates Hegel” by Michael Downs of “The Dangerous Maybe,” which can be found here:
https://thedangerousmaybe.medium.com/why-nick-land-hates-hegel-78ff751ff11b
⁷⁰²Allusion to “Why Nick Land Hates Hegel” by Michael Downs of “The Dangerous Maybe,” which can be found here:
https://thedangerousmaybe.medium.com/why-nick-land-hates-hegel-78ff751ff11b
⁷⁰³Allusion to “Why Nick Land Hates Hegel” by Michael Downs of “The Dangerous Maybe,” which can be found here:
https://thedangerousmaybe.medium.com/why-nick-land-hates-hegel-78ff751ff11b
⁷⁰⁴Allusion to “Why Nick Land Hates Hegel” by Michael Downs of “The Dangerous Maybe,” which can be found here:
https://thedangerousmaybe.medium.com/why-nick-land-hates-hegel-78ff751ff11b
⁷⁰⁵Allusion to “Why Nick Land Hates Hegel” by Michael Downs of “The Dangerous Maybe,” which can be found here:
https://thedangerousmaybe.medium.com/why-nick-land-hates-hegel-78ff751ff11b
.
.
.
For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram, Anchor, and Facebook.