
A world in which “the true was the rational” would be one where “being right” and “being rational” would be synonyms, which, following Fondane, would be a prison in which we could go mad (there would be no real “possible”). But what if this was a world where the “being right/rational” was magnificent, beautiful, wonderful, and somehow never suffered a “Buridan’s Donkey” situation? Perhaps in that world nobody would care for “the possible,” and hence all would be well? Indeed, that might have been the hope and dream of the Enlightenment, and even if that might perhaps be the case in some alternative dimension, there is good reason to believe (given history) that such a possible world is not ours. Also, I struggle to imagine how this kind of world could even be possible in theory without eradicating diversity and difference, precisely because rationality must always be based on a notion of the truth that excludes other (takes on the) truth(s). For this reason, it seems dangerous to even try to realize “a rational/right world”: the effort seems destined to be a terror. And if it’s not, that might only be because we have sealed ourselves within an “internal consistency,” an A/A-entrapment, in which we cannot identify the terror we cause. Rationality is in the business of coherence, after all, and only memory seems poised to help us “remember outside” the entrapment—hence why “a rational/right world” would be one in which there was no “good reason” for memory. We’d have all we needed now anyway, wouldn’t we?
I
We were “haunted by Anselm,” and so we have perhaps subconsciously tried to hide from that disquietedness by leaping into “maps” and living according to them “rationally” and faithfully: Anselm bothers us because he unveils the truth of all “maps” as knowable only by entering, at which point we might never find “reason” to make it “epistemically responsible” to leave. And so we might discount Anselm as a theologian whose argument is strange: we do not let Anselm function as a revelation of how all epistemology is ultimately conditioned by a “nonrational ascent,” which is to say all epistemology is “ontoepistemology.” Following from this, how are we at this point responding wrongly to “the problem of internally consistent systems”? One way we are is by thinking we can solve the issue through “autonomous rationality,” which is to say we would save everyone from “maps” if only they were more rational. This is a terrible mistake which feeds the problem, because all “maps” are rational to themselves within themselves: for people to be more rational is for people to generate more maps. And yet it is also understandable why this mistake was made: to live alternatively is to be at risk of psychosis and worse.
We might consider Descartes alongside Anselm for a moment here as someone who might also help us understand why we “must think from our knees,” per se. Even if Descartes does not prove we exist, as Anselm might not prove God exists, Descartes at least shows that we are a closed system that must assume our existence in order to proceed, as a rationality must assume a truth to engage in rationality. Descartes only suggests we cannot not exist, for to think we don’t exist, something must exist to think we’re not around. (“I think, therefore there is reason to think.”) Alright, but do we exist? Perhaps we’re just a brain in a vat, as in The Matrix? Perhaps we’re a thought floating in God’s head? Even if we’re a brain in a vat or a divine thought though, there is still a sense in which “we exist,” but there’s also another sense in which we don’t exist like we think we do. Considering this, perhaps we can say Descartes is somewhat correct; to reword his famous phrase:
a. I think (I think), therefore I probably am.
b. I think, therefore there is “reason to think” (there is something thinking).
c. I must think to doubt I think, therefore I can be certain that I think (even if I can only be confident in the subject of my thinking).
And so on. Admittedly, none of these have the same elegance as cogito, ergo sum (the tradeoff between elegance and technicality can be a tough one). Regardless, Descartes seems to argue that thinking requires existence rather than causes existence, as Andreas Röyem points out. This is an understandable confusion, given the wording of cogito, ergo sum. In light of this possibility, Descartes argues:
a. Thought exists, for we cannot doubt this without thinking, therefore a source of (the) thought(s) must exist.
b. A source of thought is a thinker.
c. Therefore, a thinker must exist.
d. Therefore, I exist.
Descartes will then argue that it is impossible for God not to exist, because if God didn’t, the idea for God would have never entered our minds (in other words, only God could be the source of the idea for God). By extension, we must exist because God must have thought of us, for otherwise we wouldn’t exist, so Descartes grounds his argument for our existence on the existence of God, trying to avoid making a circular argument.¹ To put this another way, if Descartes can prove “something thinking must exist,” then God’s existence necessarily follows, while oddly it doesn’t follow that “God’s existence” means “something thinking must exist” (at least in the way we experience thinking). Descartes goes from thought to God more than God to thought, which might suggest God isn’t needed, but it’s more so the case that thought wouldn’t exist (as independent of God) without God. Descartes is considered “a birthplace of modernity,” and on this point we might see why: even if God exists, God’s arguably necessary existence is still contingent in its quality to us. Modernity radically introduces contingency, which also fascinated Ivan Illich as he associated it with Christianity, and with that is birthed a notion of human agency that we can change things for the better.
Still, the jump from c to d can be questioned and/or complexified. Perhaps “the thinker who exists” is the person who invented a brain my consciousness is plugged into in the future, so though “a thinker must exist,” that thinker doesn’t have to be me. So even if “I” must exist in some way to experience thoughts, it is possible that the proof of “a thinker” doesn’t necessarily mean I am that thinker (as a source, for the thoughts could be downloaded into me from somewhere else). I must be someone who can experience thoughts, but that doesn’t mean I must be the origin(al) thinker. Similarly, though the “act of thinking” must be so, “the content of thinking” could be an illusion.
Regardless, even if based on a misreading, Descartes is useful because he helps me approach my existence and thinking as requiring assumptions (even though paradoxically his whole project was to find something that couldn’t be doubted). Reality cannot be a hypothetical: I am a system that must assume itself to proceed being itself. I may not exist (in the way I think I do, at least) — my thoughts might be entirely wrong — but I must assume something to continue. I cannot doubt everything. By investigating and deconstructing himself “all the way down,” Descartes gives us reason to believe Wittgenstein was correct: ‘We just can’t investigate everything, and for that reason we are forced to rest content with assumption. If I want the door to turn, the hinges must stay put.’² It is wrong to believe Descartes’s project achieves certainty beyond perhaps the raw reality of thought itself (not that Descartes thought this — I’m not sure), for it does not follow from his meditation that we can be certain about what we think. However, the fact that we have “reason to think we are thinking” means we have “reason to think we are thinkers,” and even though we cannot verify that we are “thinking thinkers,” we would not be irrational to assume this, because otherwise proceeding in thought becomes impossible, and if there is no thinking, there is no subject to even doubt or be certain about, to be rational or irrational about: it is rational to assume something without which defining the rational becomes impossible. Again, this brings us back to Anselm, who showed it was justified and even necessary for reasoning to “kneel” in order to meditate upon its subject. Hence, “autonomous rationality” is impossible — not that thinking must think this way about itself (“kneeling” is a free choice, after all).
The fact we have “reason to think” means we have “reason to assume.” Based on these assumptions, we can then construct the rest of our thinking (rightly or wrongly). However, if we must assume, in my mind, that means thinking is more structured like a mathematical system or proof (after Gödel) than we tend to realize. Thinking entails axioms that thinking structures itself off of in hopes of achieving internal consistency, but even achieving this will only increase thinking’s plausibility: there is no necessary relationship between validity and plausibility. Descartes proves we cannot doubt everything by doubting everything, as rationality can encounter its limits once it is fully rational, as thought can realize it must “kneel” before an axiom when it tries to think God. We can never escape, except through problematic imagination (which is a false escape), the requirement of assuming something like a thinker exists to think at all; what does follow is the necessity of “thinking as/in a system.” Thought must be humbled, or thinking could prove a curse, as Fondane witnessed.³
As discussed with the wonderful Matthew Bartley, Anselm and Descartes can help us see that we need to “system-think before we systematically think” (if we work hard with the wrong tool, we won’t get much done). The way we think about something can be just as important as how hard we think about it. If I try to hammer in a nail with a wrench, it might work, but it might also mess up the job. Nails need hammers, and there are jobs that if I try to use a hammer when a screwdriver is needed, I might break whatever it is I’m working on. However, if we’re really good with a wrench, and we’ve been told our whole life that wrenches are all we need (or maybe we grew up in a world where the only tool we ever used was a wrench), it’s likely that we’ll end up in situations where we use wrenches when we really shouldn’t — this happens with “mental tools” and “mental models” all the time (leading to the world Fondane saw manifesting).
We are born into socially begotten systems of thought that we get so used to using that they become “invisible” to us (think Heidegger’s doorknob). We don’t even see ourselves as “system thinkers” but just “thinkers”: we’re “(system) thinkers,” per se. To escape this, we need to “make the invisible visible” and become “systems thinkers,” those for whom “thinking systematically” is meaningfully possible (which then also opens up a possible distinction between “problem-prevention” and “problem-solving”).⁴ But for doing this, we’ll likely be rejected, and furthermore really getting this requires us to “get on our knees,” at which point we might never find “rational justification” or “epistemically moral” reason to stand again…
When we’re stuck in a system, the system trains us not to think for ourselves, and so the system tends to preserve itself. To escape the single-minded thinking a system tends to incubate, we first have to realize the system is there, which entails understanding that the nature of a system is to gradually and slowly become “invisible” to us (like a doorknob gradually and slowly becomes “invisible” the more we use it and it becomes part of the house, to allude again to Heidegger). Belonging Again in mind, all social orders and “givens” have the potential in them to become systems (that self-hide), and with time, they almost always do — the allure of (apparent) efficiency is too great. Once a system emerges, so too tends to emerge “monotheorism” in accordance with that system — a singular way of thinking — that gradually makes the system “invisible.” Once this occurs, the system tends to “do our thinking for us,” and then we don’t have to worry about thinking systematically, because the system takes care of that on our behalf. But problematically, if that system has us constantly using a wrench, when we encounter problems that need a hammer, we’ll try to use a wrench anyway. Very systematically too…(and so we’ll systematically make the problem worse).
Concerns about thinking systematically (“rationally”) don’t bother people who are only “(system) thinkers,” so if we decided to be “systems thinkers” (“truthful”) we have extra work (otherwise, we won’t think for ourselves, but that might make us happier — at least until it suddenly doesn’t).⁵ Doesn’t sound like much fun, but if we don’t rise up to the challenge, it’s fairly inevitable that we will eventually face problems that our system and its corresponding thinking are not suited to solve. Especially without a “Liminal Web Infrastructure” (“the social coordination mechanism”), society generally creates “(system) thinkers” and resists “systems thinkers.” If we are someone who exists in the “space between” systems, we might be an outsider and viewed as crazy. We might not be taken seriously, because we are so “outside the box.” Thus, if we’re “systems thinkers,” it will likely be hard for “(system) thinkers” to listen to us to learn that they are in fact “(a) system thinker(s)” (making the hidden visible). Often it is disaster or “being a loser” in a system that gets people kicked out of a system, and then once they are outside of it (and perhaps become “systems thinkers”), they are viewed as crazy, evil, and so on (possibly scapegoats, following René Girard). Does that sound like fun? Not at all. Thus, there is incentive to use a wrench forever…
Everyone is naturally a “(system) thinker” (“rational”), and those who become “systems thinkers” can face rejection. And yet without “systems thinking,” systematic thinking is not meaningfully possible, and it’s likely the majority of people will be limited in their mental and intellectual options. When these people encounter a problem that requires a hammer, only having a wrench, they and the society at large will suffer. Arguably, societies fail without “systems thinkers,” and yet it seems in the nature of societies to reject them. Is this how we ended up in the world Benjamin Fondane wrote about? Yes, but the alternative is to risk “psychotic entrapment” and madness — exactly as we now suffer.
II
We learned from Missing Axioms by Samuel Barnes that “nihilism” would not address our “problem of internally consistent systems,” and a reason for that could be that nihilism itself is a totalizing strategy. “Everything is meaningless” or “everything is nothing” are phrases which would require an “autonomous rationality” to make and justify. Scientism, nihilism, “autonomous rationality” — all of these are “autonomous projects” which ultimately fail, though they are reasonable efforts and strategies to take on given the horrors and terrors of Religious Wars and “magical thinking” which has plagued humanity. “(Non)rational thinking” is also existentially demanding, and faced with it, we easily find “autonomous rationality” or say “autonomous empiricism” more appealing, especially if they and their results are more “vivid” to us (resisting “vividness” requires nonrationality, though this doesn’t mean there is no truth to “the vivid”).
What is rationality then? What are we doing when we think we’re being rational (if rationality is “in a truth” more than “getting at truth”)? Well, in a way, rationality seems mostly about making good bets, which sounds reductionistic and problematic, but in another way is “underwhelming” in a good way, for a humbled rationality might be more likely to accept its limits. To expand on this point, consider: if A is B, and B is C, is C equal to A? Yes, that would be a rational conclusion. Now, let’s consider: if A is B 20% of the time, and B is C 15% of the time, what percentage of the time is C equal to A? That’s a lot trickier (3%, perhaps, if the probabilities are independent, but that assumption is doing real work), which is why it’s a challenge that we live mostly in a world of probabilities (though by how rationality and logic are often discussed, it’s suggested we live in a world composed mostly of basic syllogisms). What it means to be rational can be talked about like it’s simple and clear-cut, but I think that causes trouble. To use a favorite example of mine: if I think it’s going to rain today and I bring an umbrella, but then it doesn’t rain, did I act irrationally? No, but I was wrong: turns out it’s possible to be both rational and wrong, even though the terms “rational” and “wrong” are often conflated. Similarly, if there is a 95% chance of event x happening and I bet against x and end up right, did I act rationally? Well, generally, if something is only 49% likely to happen, let alone 5%, I seem irrational to bet in favor of it, but that doesn’t mean the bet won’t end up in my favor, suggesting that the phrase “acting rational” isn’t so clear-cut in its application.
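To make the arithmetic concrete, here is a minimal sketch of the two examples above; the probabilities are the hypothetical ones from the text, and the independence assumption in the chained case is mine, not something the syllogism guarantees:

```python
import random

random.seed(0)

# Chained probabilities: P(A is B) = 0.20, P(B is C) = 0.15.
# Under an (assumed) independence, the naive chained estimate is the product,
# but nothing in the setup guarantees independence; this is a bet, not a syllogism.
p_a_is_c = 0.20 * 0.15
print(f"naive chained estimate: {p_a_is_c:.0%}")  # 3%

# A rational bet can still be wrong: bet on a 95%-likely event many times
# and you still lose roughly one time in twenty.
trials = 100_000
losses = sum(random.random() > 0.95 for _ in range(trials))
print(f"rational bets lost: {losses / trials:.1%}")  # roughly 5%
```

The simulation is only an illustration of the point that “rational” and “right” come apart once we leave the world of syllogisms for the world of probabilities.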
People can talk as if there are two camps — irrational people and rational people — and act like problems in life have solutions, and that if we can’t figure those solutions out, it’s because we’re not thinking hard enough. But often problems in life are trade-offs and bets for one outcome versus another: we don’t live in a world of syllogisms but probabilities. As Lorenzo Canonico notes, beliefs should be treated like an investment portfolio, and in his work “Is Investing the Best Model to Deal with Uncertainty?” Lorenzo writes:
‘I consider myself a fallibilist, because I believe truth exists even if we may never be certain of it. Specifically, I think most if not all of what I know will eventually be disproven or proven inadequate to explain major phenomena. I base this belief not just on the philosophical arguments of people like Karl Popper, Chuck Klosterman, and Annie Duke, but also on the observation of 2 trends: a) history is filled with paradigm shifts and b) our complex world is full of black swans.
‘Think about it this way: if you had to bet money on each aspect of your worldview, would your portfolio eventually go bust? Eventually, probably yea, since on a long enough time line paradigm shifts and black swan undermine almost every plan.
‘Under ‘financial epistemology,’ our worldview becomes a portfolio: a collection of bets on what will be disproven and when in order to achieve some material advantage. Even though we know that our investments can and eventually will fail, the key is to make our ‘trades’ at just the right time (which is impossible) to maximize our goals.’
Something similar could apply with rationality: the degree to which it is rational to do or think x is relative to probability and costs/benefits: it is not simply a matter of “if it’s actual,” because that’s not always determinable and many problems aren’t so straightforward. Now, if x can be determined to be true, then it should be believed regardless of the costs/benefits, risks/rewards: please note that we are approaching the question of rationality in this circumstance relative to uncertainty (this is not pure pragmatism). To the degree uncertainty isn’t present is to the degree we don’t have to approach “rationality as making a good bet,” but if we believe certainty is mostly impossible, then this role of rationality will almost always be present to some degree.
Rationality and probability seem mostly indivisible, as it’s similarly impossible for rational activity to occur outside a concert of other rational agents making decisions that can change what is rational for us (John Nash’s great insight). And yet we can continue to teach or discuss rationality as if it’s the ability to discern what’s right and wrong, true and false. In a universe of syllogisms (where “A is A” was the only metaphysics), this could follow, and certainly there are syllogistic dimensions of the universe, but mostly we live with probabilities. Considering this, to employ a thought from Kennan Grant, rationality encourages a bias towards the individual more than individualism encourages a bias toward rationality — this is often gotten backwards. We can make better bets regarding ourselves versus regarding things beyond us, for considering Friedrich Hayek, there are too many 2nd, 3rd, 4th, etc. order effects which are too complex to be well-analyzed and therefore too complicated to be compellingly articulated in an exclusively rationalist framework.⁶ If we want to make good bets, it’s rational to zoom in on the individual because the individual is more comprehensible (though not perfectly), but this means “rational rationality” favors atomism, reductionism, and other tendencies which could become problematic habits.⁷
It seems reductionist to suggest rationality is “mostly about making bets,” but critically the very recognizing of this quality of rationality is precisely a realization that can change the quality of rationality (say from Understanding to Reason in Hegel). If it doesn’t seem right to think of rationality as just a tool of probability, then we might start to realize that we’re suggesting rationality is never actually operating according only to itself, that it is always “pointing beyond itself” and operating according to something else. An “autonomous rationality” would be very probabilistic, but if it seems to us that this is wrong, then that might be because it doesn’t feel right to say that all we do when we think is engage in probability assessments. Ah, but maybe that’s because when we think it’s not that all we’re doing is using (autonomous) rationality — maybe thinking is more robust and multidimensional (A/B versus just A/A).
Alright, if rationality often seems practically to be about “making bets,” it would seem we are interestingly in an age when “autonomous rationality” is becoming more “vivid” as a “bad bet,” a notion which could possibly lead to a paradigm shift (from A/A to A/B). If it’s a “bad bet” to treat thinking as just rationality and by extension thinking as basically “just making bets,” then the very rational logic of bet-making can work back on rationality to change its quality and self-understanding. Also, if we understand Anselm as forcing us to “get on our knees,” that too can help us see that “betting thinking is just about making bets is a bad bet,” simply because we can’t treat “getting on our knees” as “taking a bet,” precisely because the bet might never end, seeing as we easily will never see good reason to get off our knees. In other words, “thinking as rationality as bet-making” cannot help us deal with “the problem of internally consistent systems” or Pynchon Risks. And yet doesn’t it seem to be the case that thinking and rationality are mainly about “making bets”? What we have seems inadequate, but it also seems to be all we have. Are we doomed? Well, without Otherness, “surprising encounter,” the Face — perhaps so.
III
Though certainly part of the picture, “thinking as just rationality as bet-making” seems inadequate to describe what it is we do when we think, as it is inadequate to address maps. Similarly, empirical methods cannot “take us all the way” in determining which case to take up: that last stretch of the process must be through “you,” per se, but that is again difficult. A world of “autonomous rationality” is a world that is likely to become “a probabilistic problem-solving world” versus “a human problem-preventing world,” which brings us back to the essay “Incentives to Problem-Solve,” which, if we focus on it for a moment, might help us grasp further why “autonomous rationality” and “totalitarianism” tend to mix exactly as Benjamin Fondane warned, keeping in mind that the main tool of self-legitimacy and totalization which “autonomous rationality” might use is an epistemology and culture of “problem-solving” at the expense of “problem-prevention.”
Empirical methods rely on data, for one — data is “the fuel of the car,” per se — and data cannot be originally preventative, only reactionary. Data can tell us what problems to solve, but not what problems to prevent, and data will not naturally show what problems should be prevented or what problems were prevented: preventive measures are at an existential and empirical disadvantage to reactionary measures. In an empirical society that is much less likely to take a case seriously unless it is backed by data, the nature of data will have that society full of problem-solvers but not problem-preventers, and if there is problem-prevention, it will easily be based on probability markets, which is better than nothing but also data-bound and so ultimately unable to negate/sublate “autonomous rationality.” Likewise, so goes the fate of a “rational society,” seeing as rationality requires data as well: it will not be skilled in “the nonrational truth” to which it thinks and accommodates itself. This isn’t to say that data we are collecting about Climate Change can’t help us keep Climate Change from getting any worse (for example), but it is to say that data couldn’t keep Climate Change from happening at all: it can help us change direction once we get going, but it can’t keep us from going in the wrong direction to start (and also can’t “prepare” us for the Causer of AI, discussed in II.1).
If I were to introduce legislation that it should be illegal to home-school children on the basis that some children outside the public school system would grow up to be radical, religious terrorists, what studies could you offer to prove that my idea wouldn’t reduce the number of terrorist attacks? Yes, you may claim it’s a violation of freedom to force people to go to public school, but would you be able to offer any proof that students won’t turn out better for being forced into public school? And considering the risk, that countless people may die in a terrorist attack, isn’t it better to be safe than sorry? Furthermore, I could argue that students who don’t attend public school don’t integrate into American society well, and that it is the “price of citizenship” for parents to send their children to public school, even if they don’t want to do so. Again, you may claim I am violating individual liberty, but I would argue that I believe individual liberty comes second to public duty. Why should you be right and not me? Frustrated, perhaps you would claim that abolishing home-school would have detrimental effects on students, but then I would simply ask for evidence and you would be unable to provide it (precisely because my suggested legislation hasn’t been passed yet). Perhaps you could cite studies about how those who attended home-school have performed better in college than those who went to public school, but I would simply point out that you can’t prove those students wouldn’t have done just as well had they instead gone to public school. Furthermore, I could argue that if home-school students did do better, home-schooling gave them an advantage that other children were denied — I could argue that home-schooling was unfair. Not all parents have the luxury to stay home and teach their children; many parents are forced by economic pressures to work and send children to public schools. 
Hence, letting children be home-schooled violates equality and is only an option available to the privileged — it is the duty of the State to get involved and assure an equal playing field for all. To this, you may repetitively and dogmatically claim I am violating individual liberty and that I had no studies to prove that forcing all children to attend public school wouldn’t have detrimental consequences, but to this I would again claim civil obligations are greater than individual rights, that you didn’t have proof that children wouldn’t do as well or better for being forced into public school, and that even though I didn’t have studies, justice and equality were on my side (in a problem-solving society, a lack of data can just defer to rights). To this, what could you say?
This thought-experiment needs elaboration, but we can at least hope to begin outlining why there is something potentially totalitarian about demanding studies, research, data, and the like.⁸ There is only data for what we do, and if we are not allowed to do x or y until we have data to justify x or y, then x or y will never be done, and there will hence never be data justifying x or y, and so we’ll be rational not to have done them. If Nazis rise to power and then we only feel justified to oppose them if we have data to back our claims, especially if the Nazis set the terms of how data is collected through institutional incentives, grant funding, etc., then Nazis will easily not be opposed. At the same time, if we feel justified to act without data, that also could be a recipe for psychosis, social collapse, and trouble. What is to be done? (A new social coordination mechanism?)
To be clear, I don’t think there is anyone who, if you asked directly, thinks we should use only one method of thinking to determine truth. Asked directly, most understand that empiricism can only get us so far. Nor do I think anyone uses empiricism exclusively all the time, just in spots or predominantly, as I don’t think there is anyone who is purely introverted or purely extroverted. But when we get into the heat of a debate or in the rush of a research project, I think we can easily forget what we know. We talk as if empiricism is the only method, and when faced with a debate opponent, we take on that mode of thinking to win, recognizing (at least subconsciously) how “the board is set in its favor” (especially where “the problem of internally consistent systems” is not recognized). It is in these moments when a person ceases to be aware that he or she is speaking as if empirical methods are to be used exclusively that the person can speak as if humans are guinea pigs, unintentionally, and so support a way of thinking that treats potential human suffering as necessary for truth.
To lessen the risk of unintentionally violating human dignity, and to refrain from over-relying on empirical methods that will not equip us properly to avoid what will increase human suffering, we have to learn to think not just in empirical terms but also abstractly and in terms of “cost-benefit analysis.” This is very difficult, and often vaguer and more uncertain than we would like it to be. Cost-benefit analysis weighs various options and the various possible outcomes of those options, and then makes a decision based on that forecast. If I am choosing to go to college or to go straight to work, a cost-benefit analysis considers the positives and negatives of going to college compared to the positives and negatives of going straight to work. Based on the best available data, I make the best guess that I can, but this guess would by no means satisfy a strictly empirical framework: I would lack scientific certainty, which, by the nature of the situation, is impossible to garner.⁹ Yes, for cost-benefit analysis, we could use studies and research, but there is a difference between using empirical analysis to inform a cost-benefit methodology and using empirical analysis to inform an empirical methodology. The problems come when empirical research “brings with it” an exclusively empirical methodology for decision-making, considering that the latter is inadequate. We must have clear in our minds which methodology is appropriate for making the final decision and which methodology is appropriate for informing that decision. We must use empiricism in concert with all ways of thought; we mustn’t use empiricism to inform conclusions alone.
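The college-versus-work weighing can be sketched in a few lines; every number below is invented purely for illustration (a real analysis would be far messier, and no such forecast could achieve scientific certainty):

```python
# Hypothetical cost-benefit sketch: college vs. going straight to work.
# All probabilities and dollar figures are invented for illustration only.

def expected_value(outcomes):
    """Each outcome is (probability, net_benefit); returns the weighted sum."""
    return sum(p * v for p, v in outcomes)

college = [
    (0.6, 400_000),   # degree pays off in lifetime earnings (net of debt)
    (0.4, -50_000),   # degree doesn't pay off; debt remains
]
straight_to_work = [
    (0.9, 150_000),   # steady earnings, no debt
    (0.1, 300_000),   # exceptional outcome without a degree
]

print(f"college: {expected_value(college):,.0f}")
print(f"work:    {expected_value(straight_to_work):,.0f}")
# The forecast guides the bet, but being rational here still doesn't
# guarantee being right: either branch can turn out badly in fact.
```

The point of the sketch is only that the final decision rests on a forecast, not on certainty: the empirical inputs inform the cost-benefit methodology without becoming the methodology itself.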
Also, where there is an awareness of “cost-benefit analysis,” there can be a better sense that we can never ultimately avoid risk and human suffering: even when we are prepared, we can face negativity and hardship. But when the emphasis is on systems, planning, empiricism, and the like, it can be suggested that we can figure out how to avoid suffering, whereas when the emphasis is on there ultimately having to be a “human element,” we might prove more prepared for things not to go our way (a point which brings to mind what Ivan Illich said about curable suffering being more intolerable than incurable suffering). There’s always a chance things could go wrong, but when that error occurs, it won’t necessarily be experienced as evidence that we did something wrong or that we need to do something different. The world is imperfect, and the space of imperfection is where “the human element” can step in and have a role — it’s simply the way of things if things are to have ways at all.
Wait, but isn’t “cost-benefit analysis” just making a bet? Didn’t we say it was inadequate for rationality to see itself as such? A great question, and first, a cost-benefit analysis will prove ineffective if nonrationality is not incorporated into it. Also, the very need to “bet” means we are brought by necessity to employ something that can feel inadequate to reduce thinking down to, and so we end up driven into facing a notion of rationality that seems wrong. This drives us to a place where we “encounter limit” and hence a place where a negation/sublation is possible (say from Understanding to Reason). What this suggests is that “rationality as betting” can be an opening for a negation/sublation of rationality, but not necessarily: it’s up to us to realize the limit. However, the point is that this limit and opening for rationality is found in probabilistic thinking more than in “hard empiricism” (even though it’s possible for probabilistic thought to end up serving empiricism and never realize its limit), and if we are moved by empiricism into “cost-benefit analysis,” we are moved a step closer to A/B from A/A (even if we never actually take that final step).
“Making bets” without realizing limitation, though, won’t necessarily help us avoid “incentives to problem-solve,” for it is completely possible that probabilistic thinking can forever operate according to data, proving to be just an extension of hard empiricism and in service of the problems which concerned Benjamin Fondane. We can’t make bets without data, sure, but bets don’t have to be “captured” by data, though they easily could be, meaning we stay stuck in A/A versus A/B. It’s also true that there’s a way in which all studies entail bet-making, but the bets made will likely align with data wherever humanity doesn’t acknowledge the limits of rationality. Probabilistic thinking is a step in the right direction but cannot treat itself as the finished journey lest it fall back into prior dysfunctions. (Beauty might be approached probabilistically, but it is not itself reducible to a probability.)
IV
It is humbling of rationality to recognize its limits, and yet the moment we recognize limits, they are experienced according to rationality, making it seem as if rationality isn’t limited at all (we are limited in experiencing limitation, a key notion in Hegel). As a result, it can seem crazy to treat rationality as limited, and yet we need to accept and Absolutely Choose that truth if we are to avoid self-effacement and what Fondane discusses. In addition to this “feeling of being crazy,” another reason it can be hard to avoid “autonomous rationality” and map-totalization is that mere fact is often handicapped, in itself, from being as compelling and “vivid” as rationality can be. This is another reason why “autonomous empiricism” isn’t really autonomous and hence why we should accept its limitations before we end up in problematic totalization: there’s more often than not (and perhaps always) an excess which slips in. This isn’t necessarily a bad thing, but it could be if we don’t acknowledge it, for that lack of acknowledgment could lead to us imagining autonomous possibilities which actually aren’t possible.
Actuality and facticity can be plausible but not really compelling. We know “2 + 2 = 4” will not inspire us to stand up and protest injustice; it tends to be a story or a video of someone suffering that gets us moving. “Ideas are not experiences,” as we discuss often, and experiences are more likely to motivate us to act than mere ideas or mere facts. Furthermore, we learn from the paper “Compelling” that no one has to necessarily be compelled by anything that they don’t want to be compelled by, for we all have control over our own standard of justification and can forever keep “moving the bar.” Thus, if facticity is to compel the majority to believe it and/or act, it must do more than just be “actual”: it must also be “compelling.” But there is no necessary relationship between “being actual” and “being compelling” — it seems like there is, but I can learn all the facts and ideas in the world and never leave my house or change my mind. How does data get us to change and act? Well, it must tap into emotion, art, rhetoric — a slew of tools, all of which could be “captured” by institutional incentives, notions of rationality, the zeitgeist, and so on. If data and facts are to reach us, they must travel a transformative road.
No, I’m not saying no one ever acts on anything factual, but I am saying we all act on the experience of facts more than just facts. When we open the freezer and find water collecting on the bottom, it seems like we are motivated by the fact that the drainage system is clogged to fix it (and in some sense, we are), but it’s mostly the experience (through which “truth” manifests) of the leaking freezer that compels us to fix it. Additionally, it is the potential consequence of having to buy a new freezer if it is broken that compels us to act: it is not the mere fact that “the drainage system is clogged” that makes us want to fix it. It’s the drainage system of our freezer that we use and experience every day — it’s not a general drainage system or a general freezer.¹⁰
Our kitchen situation is embedded and manifest in our experience: we are motivated by “a truth embedded in experience, everydayness, and consequence (to us),” not “a truth embedded only in its facticity (to no one in particular).” However, we can imagine facts “as if” they are “just facts,” which can train us to have more faith in our ability to be compelled by “just facts” than is warranted. As a result, when we encounter facts that simply don’t have a quality of being compelling, we can rationally conclude that these supposed “facts” are unimportant, debatable, and perhaps not even facts, while at the same time, by experiencing something as compelling, we can come to believe that facts compel in their facticity rather than through our experience of them.¹¹ Both mistakes can leave us with a disordered relationship with facts, confused about their roles in our lives. Facts that aren’t compelling can still be invaluable, while at the same time facts are never “autonomously facts,” only ever experienced. Experience is unavoidable: the question is only how we experience and the quality of that experience.
To speak of “actuality” is to speak of “truth(-experience)”; to speak is to speak experiencing (suggesting the primacy of experience, notably in “The Strike of Beauty,” as I’ve discussed with Thomas Jockin, and suggesting a reason why Beauty is so primary in O.G. Rose, though unfortunately this strike can be forgotten soon after, suggesting a central role for memory).¹² Fact is always experienced, and so it is never “just fact” which compels us, but if we imagine this to be the case, we can be more vulnerable to ending up in the world described by Fondane, especially if empiricism and “autonomous rationality” combine forces, as seems likely in Modernity. Ultimately, we need to associate “truth” not so much with “facts” (precisely because “facts” are never really autonomously themselves) and instead accept that “truth” always comes through an experience, because once we accept the inevitability of experience as a factor in truth, we can then focus on asking, “What kind of experience is truth best manifest in, as, and through?” We will argue for “the experience of surprising encounter,” as possible in and trained from a place of “faithful presence.” We have argued throughout O.G. Rose that there will always be a subjective element, hence why denying the subject is dangerous and could lead to the world described by Fondane. But the quality of that subjective element can vary: not all are equal. The subjective element possible in “surprising encounter” is very different from a normal, unsurprising experience of a cup or conversation, and if we want to “get at” correspondence more than not (however imperfectly), it is best done in the context of “preparedness for encountering surprise.” Truth seems best in surprise, rationality in acknowledging its limitation in “making good bets” (“toward” (non)rationality). If we fail to orientate ourselves accordingly, “the problem of internally consistent systems,” and the likelihood of bad responses, will only worsen.
V
We naturally experience “the true as the rational,” which is to say realizing that “the true isn’t the rational” requires us to think in ways which will seem unnatural and wrong. This is a hard obstacle to overcome, but we should expect that difficulty in a world where Gödel’s discovery was so revolutionary (otherwise, Gödel’s discovery would likely have been made long before). It is natural for us to totalize into “autonomous rationality,” Scientism, Nihilism, Theology, Capital, or the like, which means it is natural for us to self-efface. Totalization only works if it can maintain for itself that “the true is the rational,” which is to say that “the rational is the right” (A/A), which requires “fencing out” other views to maintain and protect its self-consistency. For this reason, “autonomous rationality” (for example) must not take worldviews, differences in thought, Pluralism, and the like very seriously, and yet the only reason “autonomous rationality” is even possible is precisely because it is a “possible worldview” among many. That is a move Totalization must make: it debases other options as “just options,” yet it is only possible because it is an option itself. What makes it possible is what it denies, which is why A/A self-effaces as an A/B, whether in un-realization or self-denial.
Gödel showed that logic and thinking were doomed or at least more complicated than we thought (and we could even say that thought works by ignoring itself as geometrical): traditional “formal systems” and/or “autonomous entities” will not suffice. If we try to make them, we will suffer consequences and risk self-effacement. Dialectical reasoning is necessary, but with that we suggest there is an “essential human element” in our thinking that cannot be removed (at the very least to observe and honor, say, Gödel’s Platonism). Yes, but doesn’t that mean we are at risk of “internally consistent systems?” Yes, notably in the form of conspiracies: if we are to reject the strategy of Totalization which was implemented to deal with, say, the trouble of Religious Wars and “The Ethical Real” (Lacan), then those troubles will return in some form or other. A better form? Maybe. But regardless, a world absent of “nonrationality” (a world that avoids the risk) is perhaps a world we can believe is computable and technologically manageable. If “nonrationality” not only exists but is necessary, though, then there are limits to what technology can accomplish. Perhaps this suggests why we might subconsciously “want” rationality to be all we need, suggesting wisdom in Deleuze’s admonishment that we need our “own line of flight” so we can avoid “capture” and being “computable.” The importance of neurodiversity is also suggested here, as discussed by Lorenzo Barberis Canonico, as is a mercy and grace (such as found in Flannery O’Connor’s “Greenleaf”) in this point by Fondane: ‘But the experience that will make an ‘exception’ of us and give us over to existential problems does not depend on us.’¹³ But time will tell, and if it doesn’t, we might be gone before we can tell it didn’t tell.
Rationality naturally captures and acts “as if” equivalent to truth, but rationality can realize it isn’t equivalent to truth, which is for rationality to realize it is limited, which means it realizes there is a limit to cross and something beyond those limits (Hegelian). With this realization, in which direction might rationality develop? Might it develop toward Beauty or Madness? The Absolute Choice is ours, and unfortunately Madness before the Causer of Artificial Intelligence is a real option (Land waits) — as II.1 discusses. But a world where we recognize “the problem of internally consistent systems” is a world where we might be less vulnerable to the “map-totalization” Fondane describes, the “deconstructing of common life” considered with Hume, or the “insanity” described by Korzybski, but what is the price of this advancement? We become more susceptible to map-proliferation. Where “the true isn’t the rational,” rationality can operate and search for its own coherence without worrying about the truth ultimately proving able to deconstruct it. Sure, rationality might construct itself around a notion which could be deconstructed, but there is always the possibility of realizing an indestructible map, and so rationality always has a chance and truth doesn’t necessarily get to have the final say. Couldn’t there be map-proliferation in a world where “the true is the rational?” Sure, but that was a world before Gödel, where the realization of “incompleteness” was more likely to be taken as evidence that a worldview or ideology was invalid, hence better setting a cap and limit on how extreme map-proliferation could become. Now, in realizing “essential incompleteness,” our challenge of maps is easily more extreme.
As Fondane understood, as long as there is always “possibility,” we are not fated or determined, and we are certainly not doomed to “the suboptimal result” of Nash Equilibria, “conflicts of mind,” or inescapable “internally consistent systems” (such as conspiracies). Unfortunately, where there is “always possibility,” there is likewise always existential anxiety, and by extension the temptation and possibility of “magical thinking” and conspiracy, a temptation profoundly magnified by the internet and new information technologies. Fondane’s “hope in the possible” is a hope found at the bottom of the “Big Other”-box of Pandora’s Rationality…Totalizations — all are “autonomous projects” which ultimately fail, though they are very responsible efforts and strategies to take on given the horrors and terrors of Religious Wars and “magical thinking” which have plagued humanity. If we reject totalization, we invite these troubles back — as we must, according to Fondane. And these troubles are indeed back, most notably in the proliferation of conspiracies, which makes sense, seeing as conspiracies are precisely indestructible masterpieces of rationality and coherence, internally consistent systems. “The multiplication of conspiracies” is a case-study that can help us understand map-proliferation, which is a challenge we must face if we take Fondane seriously. To that topic, we will now turn.
.
.
.
Notes
¹To lay out the argument: I think, and so must exist, and if I exist and can think the thought of God, since that thought couldn’t exist if God didn’t, God must exist. It could then be said that this is because it is not in God’s nature to think something that isn’t, hence I exist. Additionally, since God is good, anything God can think that can exist will exist, and since it’s better for a thing to be “actual” than merely “mental,” what God creates will be fully actualized (relative to Eternity). Hence, we actually exist. (Whether this all follows is a different question.)
²Wittgenstein, Ludwig. On Certainty. New York: First Harper Torchbook Edition, 1972: 44e.
³If we take Descartes and Anselm seriously, they seem to suggest there is a difference between “rationally constructed arguments” versus “truth suggesting arguments,” which would mean not all arguments are in the business of doing the same thing. As we discussed in “Conclusive Arguments Are Rare” in Thoughts, there really aren’t many if any arguments that, after hearing, you must change your view. Usually, arguments just “point” to something we might accept but not necessarily, especially not considering that rationality can seemingly deconstruct anything. And yet ever-deconstructing rationality is only possible thanks to a nonrational truth. Is there no way to avoid an act of courage we must face accepting or denying?
⁴The way we think about science is different from how we think about art, yet it can be common for us to think about art like science. The way we think about history is different from how we think about science, yet it’s easy if we’re historians to try to think about scientific problems like historical problems (as Hayden White knew) — and so on. We often don’t think about the tools we use to think about different subjects; instead, we just use whatever we find “at hand” (as often selected for us by the social system) and have at it. When we’ve been trained by a system to think a certain way, trying to unlearn those tendencies is as hard as trying to unlearn our native language. Once we’ve grown up with English, forgetting how to speak English is difficult if not impossible. In my view, the correct order for thinking about a problem is as follows:
Systems Thinking:
Determine which system or “mental model” is best for understanding x.
Systematically Thinking:
Then think logically within and according to the “mental model” or “frame” we have decided is best for x.
If we skip straight to “systematically thinking” without first “systems thinking,” we might think logically and brilliantly, but we might still think incorrectly or ineffectively (using the wrong tool) (for a list of mental models, I really like Farnam Street). So it is with “thinking truth” before we “think rationally.”
The logic of “mental models” can also be used on general “systems thinking”: it could be said that models are “micro-mental tools” while systems are “macro-mental framings,” but there is plenty of crossover. All of us think within at least one system, and if that’s the only system we think in, it probably does a lot of our thinking for us. If we’re a “system thinker” (versus “systems thinker”), there is basically no system: it’s “practically invisible” to us. Single systems only appear to those who are aware of systems: we must be pluralistic to understand singularities. So there’s a very real sense in which there is no such thing as “system thinking,” only “systems thinking,” because “system thinking” is really just “thinking” in experience (when we’re in a system and don’t realize it, which could be signified with the phrase “(system) thinking”). This is a problem.
⁵Keep in mind that if we are in system x and considering options 1 and 2 within x, we are not a “systems thinker,” because we are not considering 1 and 2 through x and then 1 and 2 through y. However, 1 and 2 can feel like moving between systems, so we need to be on the lookout for that trick.
⁶Considering this, perhaps the emphasis on individualism in Conservatism comes from its commitment to effective, more-likely-to-be-right rationalism, versus a commitment to individualism for the sake of “self-centered” individualism. And perhaps this suggests there is little status to be won by being a Hayekian Conservative: it is a worldview that suggests we rarely can make good bets beyond ourselves, which makes us out to be helpless and self-centered out of epistemological necessity. This seems regressive and the opposite of progress, which feels like it must go outward, away from where we are to somewhere else. This isn’t to say Conservatism is right, but it is to say that an understanding of rationality that is based on probability tends to lean toward smaller systems, while a rationality based on theory can favor larger systems. The world needs theory, certainly, but theory that fails to be probabilistic is not theory I would bet on.
⁷Does all this suggest we shouldn’t judge the quality of a decision by its consequences but by the information available to it at the time of the decision (and the smaller the system, the higher the likelihood the data is reliable)? If so, this is because we are dealing with bets, not guarantees, and luck can prevail when faced with impossible odds. Rationality is mostly probabilistic; syllogistic rationality leaves the wrong impression. Rationality is about making good bets, and if we’re always right, we’re probably not being rational.
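That distinction between a good bet and a good outcome can be put concretely in a rough sketch; the probabilities and payoffs here are hypothetical, chosen only to illustrate the principle:

```python
# A sketch of judging a decision by its ex ante expected value, not its
# outcome. All probabilities and payoffs are hypothetical.

def is_good_bet(p_win, payoff_win, payoff_lose):
    """A bet is 'good' ex ante if its expected value is positive."""
    return p_win * payoff_win + (1 - p_win) * payoff_lose > 0

# A 90% chance to win 10 (lose 10 otherwise) has expected value 8.
print(is_good_bet(0.9, 10, -10))  # True: a good bet, though it loses 1 in 10
print(is_good_bet(0.1, 10, -10))  # False: same payoffs, bad odds
# Losing the first bet would not make it a bad bet; winning the second
# would not make it a good one.
```

The quality of the bet and the quality of the outcome are simply different questions, which is why being rational can coexist with being wrong.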
⁸In this thought experiment, the person who is in favor of home-schooling is debased for not having “proof” that home-schooling is necessarily beneficial to children. But do note that the nature of studies makes this difficult to provide: it is not possible for a child to go through home-schooling, then go back in time and go through public school, and then compare the outcomes. In this sense, it is impossible to create “objective certainty” about which school system is best for a given child. Also, the only way to find out by empirical standards whether a society will benefit from making home-schooling illegal is by, in fact, making home-schooling illegal (and considering the nature of State growth, once the State made home-schooling illegal, it would be unlikely that it would ever reverse its position, at least not for a very long time). Once illegal, we would have to leave it illegal for, say, at least fifty years, if not more, in order to satisfy objective and empirical methods (but even then, who’s to say we don’t need more time?). In other words, the only way to find out if home-schooling is detrimental or beneficial is by acting as if it is detrimental — by siding with those in favor of abolishing it. To put it another way, if I demand evidence before I will stop myself from making home-schooling illegal, I ask for evidence that cannot possibly be given to me until home-schooling is illegal, and if your failure to provide me with this evidence will result in home-schooling being made illegal, then home-schooling will be made illegal. The board is set so that what I think goes.
This presents a paradox: if studies are our exclusive or overly-dominant standard for determining truth, then we must seemingly allow everything in order to determine if it is good or bad (and we must allow everything for a long time, to give it a fair and complete examination). We must have a period in which home-schooling is illegal, and then a period in which everyone is forced to home-school. If we are to determine what is the best environment for raising children, we must have a period in which children are only allowed to be raised by one mother, another period in which they are only allowed to be raised by one father, then a period in which children have three parents, then four, then five, etc. By empirical standards, we cannot determine “what is best” until we allow everything that is possible to occur so that we can observe and judge it. I believe most Social Scientists and professionals are aware of this paradox and adjust their research and consultation appropriately, but I don’t believe the average citizen is aware, as I think is made apparent by how most people debate and argue, and this is detrimental to democracy. Constantly they ask, “How do you know (insert)?” and “Where’s your hard data?” and there is most certainly a place for these inquiries, but we cannot appeal to studies all the time, especially when it comes to deciding what we should do next.
Determining “what is best” empirically can entail allowing everything that can be, to be, for an extended period, after which we can make our (approximate) judgment. Obviously, no one actually thinks we should do this, but this is the direction an empirical methodology leads us when we rely on it exclusively, whether we realize it or not (and when people are in the heat of a debate, this can be how they talk, using this standard to discredit the other side, consciously or unconsciously). If I want to end home-schooling, it cannot be proven that this will have detrimental effects on the country until what I desire is tried, which is the period in which I may be proven right or wrong. There may be studies that suggest I “might” be wrong or right, but to actually know, what I want must be tried. And this means humans must be treated like specimens in a lab (as Fondane might have understood). Where empirical methods are relied on exclusively, human dignity is threatened. If it is the case that making home-schooling illegal will have detrimental effects on society, discovering this truth empirically and according to studies will require people to suffer. If home-schooling lessens suffering, the person who says we cannot know if making home-schooling illegal will be good or bad until we try abolishing it (unintentionally) says that we have no scientific evidence for allowing home-schooling until people suffer: determining truth can necessitate demeaning people. Considering this, we must be careful before we demand studies, for we may unintentionally demand that humans suffer before we are willing to prevent and stop that which might cause human suffering. And if we cause unhappiness, we cannot undo that damage, and if we put our society in a situation it cannot come back from, that damage could be permanent.
Of course, perhaps making home-schooling illegal would be beneficial, but the point stands that we cannot find out through empirical methods without risking human suffering: the exclusivity of the empirical method requires gambling with human life and happiness. In essence, it requires treating humans like guinea pigs. This might seem like an extreme claim, and I most certainly don’t mean to imply that I think anyone means to treat humans as such, but when empiricism is the dominant methodology, this is what happens, unintentionally and indirectly. When animals are experimented on, it is not the hope or goal of the scientists to make the animals suffer; in fact, their hope is usually to discover ways to improve the quality of human life without causing any suffering to the animals. Scientists experiment on animals to find cures for diseases and to discover new surgical methods, and from these procedures, humans have benefitted greatly. Likewise, those who use empirical methods exclusively and who unintentionally treat humans like guinea pigs do so in hopes of improving the quality of human life and not with the intent of causing human suffering. But the point stands that this way of thinking treats humans like experimental subjects (and there might be inherent limits to what this effort can accomplish anyway if humans are fundamentally A/B, not just A/A). This isn’t to say empirical methods shouldn’t be used, but that we need to be aware that they entail a threat to human dignity. Whether this risk is worth taking or not depends — my point here is only that the risk is there.
⁹In this line of thought, rather than think about making home-schooling illegal in terms of just “What do the studies prove?” it is much more sensible to ask, “What are the costs and benefits?” To start, some of the costs would be that individual liberty is restrained, while some of the benefits might be reduction of terrorist attacks. Are these costs and benefits accurate and believable? And even if they are, is this trade-off worth it? The answer to that question depends on who you ask, but at least it can be said that a cost-benefit approach uses a method that is not “set from the start” to try and experiment with every possibility (as exclusively empirical methods are set “toward”), and that this method doesn’t threaten human dignity.
¹⁰Please note also that our experience with the freezer can help fill in some of the gaps of our knowledge, and also help us find a balance between “technical knowledge” about the fridge and the “practical knowledge” we actually need to use it — a much more difficult standard to strike when encountering political or scientific matters presented to us on the television, for example.
¹¹Why does this matter? For one, it is because what we learn on the news about what’s happening in Washington DC, the suffering occurring in China, climate issues, etc. might not be experienced like facts in our everyday life which can strike us more emotionally. Then, if we don’t feel compelled by this information on television, we may take it as evidence that the information is not true. The fact we don’t feel motivated to act then becomes evidence that we shouldn’t act, making topics covered in “Probable Cause” by O.G. Rose all the worse.
¹²See “What Do I Say When I Say, ‘That Is True?’ ” by O.G. Rose
¹³Fondane, Benjamin. Existential Monday. Translated by Bruce Baugh. New York, NY: The New York Review of Books, 2016: 31.
.
.
.
For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram and Facebook.