The Value Isn't the Utility
Inspired by an Other Life Discussion (2.24.23) on Aristotle's "Nicomachean Ethics"
It was wonderful to speak with the Other Life community this week on Aristotle’s Nicomachean Ethics, hosted by Justin Murphy. Everyone involved was impressive and made the text come alive, which I think is important today, for I believe the stakes are very real when it comes to deciding the framework in which we situate the question, “How should we live?” It is important to answer this question, but if we don’t first ask whether our worldview is more Utilitarian or “Virtue Ethics”-based, we may fail to realize how this important question could be halfway answered before we even begin considering it. The framework we bring to a question is as important as the question itself.
It is very useful and important to consider utility when making a decision, but “utility” as the prime metric (or as “autonomous utility”) could prove problematic, for what is “useful” and/or “value-adding” is relative to our metric and worldview, and how do we decide our metric and/or worldview? What constitutes “utility” is relative to a framework, and we cannot decide this framework through only considering utility: in other words, we cannot move from just Utilitarianism to deciding that Utilitarianism is the most “fitting” (or even the most “useful”) framework by which to understand and live life. We must make a “Non-Utilitarian Choice” before we can be “Utilitarians,” which doesn’t mean we are entirely wrong or mistaken in Utilitarianism, but it is to say that “Autonomous Utilitarianism” is impossible (like “autonomous rationality”). “Truth organizes values,” as discussed in The Conflict of Mind, and so Utilitarianism cannot “ground” itself (even if it happened to be the best ethical framework overall).
Moving forward, please note that I will not be making hard distinctions in this paper between “Pragmaticism,” “Utilitarianism,” “Effective Altruism,” “Longtermism,” or the like—I will only discuss Utilitarianism, but I believe my argument applies just as well to all these schools of thought, though I don’t deny there are differences which can be brought out in other works.
Furthermore, in my view, if “Utilitarianism” doesn’t generally mean “the worldview in which ‘productivity’ and ‘the increase of the greatest good for the most people’ are the highest values,” then I don’t think we are really talking about Utilitarianism but rather just action. We shouldn’t conflate “all actions and their results” with “utility,” because if we do we’ll end up thinking “Utilitarianism is inevitable” and thus become Utilitarians and likely assume the Utilitarian metaphysic and worldview without realizing it. I think this is common: people believe all decisions must be made according to “what generates the best outcome” and thus conclude that “Utilitarianism is true.” This is a leap which makes us vulnerable to absorbing the whole worldview of Utilitarianism, but “making wise decisions” and “being Utilitarian” are not identical. Yes, Utilitarianism is about “making wise decisions” (according to its worldview, metaphysics, and ontology), but not all “wise decisions” fall under Utilitarianism. Problematically, if we’re Utilitarian (perhaps without knowing it), that means those “wise decisions” beyond the frame of Utilitarianism might be ones we don’t even realize we need to make: they are in a darkness beyond the reach of our Utilitarian flashlight. Are we missing out? Well, I mean, is there something we should be seeing?
I
To be clear, I am not saying that outcomes, productivity, utility, or “increasing good for the most people” are necessarily bad considerations: my point is that they should only constitute “mental models,” that they shouldn’t constitute “a whole worldview.” Arguably “productivity” and “utility” are uniquely powerful mental models, just like science—which is precisely why it is so tempting to “overfit” them and act as if we can arrive at ontology and “truth” from them. But this is not the case: the framework by which we define “utility” itself cannot be judged by purely Utilitarian metrics; that will require a definition of “the good,” which will require a definition of “the human person.” And thus we fall into Aristotle…
“Virtue Ethics” conceives the good in terms of character, values, and means, with far less emphasis on “outcome” and “result,” but of course outcome matters. “Virtue Ethics” can entail Utilitarianism, but Utilitarianism doesn’t consciously entail “Virtue Ethics”—and “consciously” is a key term here, because “utility” must be organized and defined within a set of values (otherwise, it wouldn’t exist). Critically, I think a reason we have failed to appreciate the inescapability of “Virtue Ethics” is that we have called it “Virtue Ethics” versus “Value Ethics,” when for Aristotle “virtues” and “values” are profoundly connected. “Virtues” sound like “morals,” and there is something about “morals” which is very relative, and so we conclude that “virtues” cannot be a moral foundation due to Relativism. None of this follows, but that is what we can subconsciously conclude and hear with the phrase “Virtue Ethics.” We also associate “virtue” with religion, and in believing religion is dead or waning, it can seem to us that “virtue” is dead as well. As a result, it makes sense to then focus on outcomes, productivity, and utility. After all, what else is left? What else do people value? Lastly, I like the phrase “Value Ethics,” because economics often discusses “value,” and the term sits between “virtue” and “utility” nicely, which is ultimately where I myself want to fall. (We here could also discuss Maurice Blondel’s “action” and how it is distinct from Utilitarianism and Pragmaticism, better aligning with Aristotle, but I will leave that for my discussions with “High Root,” Episodes #73 and #80.)
We naturally give our thinking over to our words more than we give words to our thinking. As “the medium is the message” for McLuhan, we must not fail to realize that our words often end up being our statements. We must be careful when we speak, yet silence is not always the answer. For Aristotle, what constitutes “the virtues” is relative to our “values,” which is to say our virtues are relative to our “ends” and telos. “Means” and “ends” in this way are indivisible, and a fair critique of Aristotle is that he might have assumed “the ends” of human beings, which would mean that “values” don’t need to be discussed nearly as much. As a result, the emphasis in Aristotle is on “virtues,” which follow from “values.” However, today, I think we might need to place the emphasis on “values,” hence why I think we might need to discuss “Value Ethics” over “Virtue Ethics.” It is likely more evident to moderns that we cannot avoid “values” than it is that we cannot avoid “virtues” (even though one follows from the other), and so we might be able to get people to care about “Value Ethics” while they quickly dismiss “Virtue Ethics” in favor of Utilitarianism—but that is easily just my take.
“Value” and “utility” are distinct categories which can overlap, and if by “utility” we mean something akin to “action,” then ultimately “value” is meaningless unless it leads to “utility.” Action is required if our values really are our values, but we also err if we judge the ethics of our actions only and always according to outcomes which are judged by metrics which we absorb from the zeitgeist. To do good in Capitalism might be to increase corporate productivity, but it cannot be assumed that “increasing corporate productivity” is a sign of character. It might be, but we would require the skill of deliberation to tell (a skill the system might train us out of, I fear).
II
Critically, Utilitarianism often smuggles in “a Value Ethics” while at the same time disowning “Value Ethics.” “Value Ethics” is the ability to ground ethics in a determination that, “This is valuable,” which is to say, “This is fitting according to what the subject ‘is.’ ” Now, of course, this isn’t easy to determine, and it makes sense that we have turned to science and other “objective” fields to answer questions of identification for us, but unfortunately science cannot tell us what our values should be. We must bring those to science (as elaborated on in “A Critique of Pure Observation” by O.G. Rose), as well as the market — and yet there is something about both that tends to deny the role of external values in their functionality and effectiveness. I would have to defend the claim, but following the Scottish Enlightenment, I would say the wealth of Capitalism is a product of “Value Ethics,” and in its success Capitalism became Utilitarian: the success of Capitalism gives it reason to believe it does not need what made its success possible. Children seem prone to conclude they do not need their bloodline.
Science can tell us that we are to some degree a product of “natural selection,” but it cannot tell us if we should still employ “natural selection” today. Science might tell us that humans are happiest when they have regular encounters with nature, but we cannot conclude from this that we should therefore pass a law that requires all cities to construct small parks every ten blocks for citizens to enjoy. Scientific considerations and considerations of utility are extremely useful and should be taken into account as data, but precisely because they are so effective it will be tempting to “overfit” them and try to turn them into “monotheories” and sources of “autonomous reasoning.” This is dangerous and can lead us into “Game Theory”-dynamics we will prove unable to escape — a point I try to argue in “The Most Rational and Suboptimal of All Possible Worlds,” which focuses on Benjamin Fondane. Basically “autonomous rationality” cannot help us escape “Nash Equilibria,” and since Utilitarianism is often a hard rationalist position, Utilitarianism seems doomed to fall into this mistake as well. Not necessarily, no, depending on how “utility” is defined, but this tends to be the case — we must be attentive so that we do not become “monotheorists” without realizing it.
Please do not mistake me as saying that rationality has nothing to do with ethics, for indeed while ‘virtue makes us reach the end in our action […] intelligence makes us reach what promotes the end.’¹ Rather, my point is that we need a dialectic between “nonrationality” (or “truth”) and “rationality” (as discussed throughout The True Isn’t the Rational), which means we must be aware that “a rational choice” is not necessarily the same as “the choice of the standard according to which we define ‘rationality,’ ” which cannot be purely rational because what constitutes “rationality” comes afterward. We must select the “frame” in which we operate, or else we’ll end up “enframed” by the socioeconomic and technological order without realizing it (a point Justin Murphy noted and one Deleuze also warned against). “Choosing a frame” or “being enframed” — those seem to be our choices, and yet Utilitarianism may contribute to us not even realizing we need to make a choice between these two — a severe problem.
Aristotle notes how ‘Plato was […] puzzled’ in trying to determine ‘the difference between arguments from principles and arguments toward principles,’ which suggests that arguments are always “closed circles” which assume their principles and/or axioms from the start (like Heidegger’s “hermeneutical circle”).² Again, this calls to mind “The True Isn’t the Rational” essay, where it is argued that if we act “rationally,” we are “always already” acting according to what we believe is “true” — a “nonrational truth” must be in “the rational” (“always already”). Failing to realize this, we can come to conclude that we just need rationality (leading to problematic “autonomous rationality”), as similarly we can come to conclude that we just need outcomes in ethics to determine “what is right” (leading to problematic ethics which discount the need for ontology and metaphysics, which likely leads ethics to being “captured” by the socioeconomic order). Yes, all ethical decisions lead to results and outcomes, making it seem like only considerations of outcomes were in play, when really this outcome could only follow from an assumption of “values” (which seems to vanish just as soon as we start acting, leading to misjudgment). Similarly, the consideration of “nonrational truth” automatically translates the consideration into rational terms “as if” it were always just rationality at play, but the translation is paramount to note. If we don’t, we will think “just being rational” is all we need, and so we will focus on “rationality” instead of “truth,” and so likely end up absorbing our “truth” from the zeitgeist. And thus we will be “captured,” just as Deleuze feared. From this “capture,” we will develop a corresponding and “fitting” ethics, an ethic which (“just by chance”) will prove accommodating to the system, even in how the ethic “ought” to rebel. Freedom will prove useful.
To close this section, the paper “The True Isn’t the Rational” discusses how the moment we think about a truth, we experience it “as if” a product of rationality, thus making it seem like all we ever used was rationality (and thus it seems as if “the true is the rational,” when really “the true isn’t the rational” — they are distinct categories). A paragraph from that work:
Rationality must “appear” true (otherwise, it wouldn’t strike us as rational), and thus it must always “appear” to be the case that “the true is the rational.” By extension, the “true appearance” of rationality is projected back onto our premises, making them “appear” rational, and thus that it is the case that “the true is the rational” [when really this is not the case].
Similarly, the moment we choose to “e-valuate” something “as valuable” and “virtuous to live out,” it will thus be “good” to act according to that value: motivation to act manifests instantly with the e-valuation, “as if” the drive to act is the e-valuation. The moment we identify, realize, and/or create a value, we are motivated to act according to that value, and thus the “e-valuation” is immediately hidden behind a consideration of action (as if “the value is the utility”), which is easily taken as a consideration of utility. In the same way that the experience of truth translates truth into a rationality to make it seem as if “only” rationality was employed, so the creation, realization, and/or experience of a value instantly translates it into “a call to action,” which can afterward be judged in terms of its results. Thus, “Value Ethics” is instantly hidden behind something like Utilitarianism, making it seem “as if” Utilitarian considerations are all that are ever in play. As a result, as we always experience “the true as the rational” even though “the true isn’t the rational,” so we always experience “the value as the utility” even though “the value isn’t the utility” — we cannot avoid pulling “a sleight of hand” on ourselves, causing confusion.
III
Why is it so easy to believe Utilitarianism is all we need? Well, for most of human history, it “practically” was all we needed, because most of human history entailed reducing poverty, ending sickness, increasing wealth, and the like, and when faced with problems which “facticity” presents us with and which are most “self-evidently” in need of correction, an ethic which emphasizes “increasing the most good for the most people” (which is basically “negative” in “reducing harm, pain, and the like”) is often enough to get the job done. Utilitarianism is “fitting” and works for such circumstances, and since it works so well it is very easy to then make the mistake of “overfitting” Utilitarianism and applying it to any and all circumstances. And this is where we fall into error.
In his poetry collection Saedah, the great Dr. Filip Niklas writes that ‘[u]p until now we have survived so as to live well, but now may be the time that we must live well if we want to survive.’ I completely agree, and problematically Utilitarianism cannot guide us to “positively” determine what we should do to “live well” so that we might survive. “Living well” requires a discussion on values, and we should keep in mind that Aristotle is probably mostly teaching members of “upper classes” (or at least not slaves). Hence, the ethics of Aristotle are arguably more so for “First World Nations,” but something similar applies to the psychoanalysis of Lacan and Freud (and arguably trying to apply both anywhere else would be “too early” and “unfitting”). “Value Ethics” is the ethics needed once Utilitarianism has succeeded, per se, and yet the very success of Utilitarianism will “give us reason to think” that all we ever need is Utilitarianism. Likewise, the very success of rationality and science makes it easy to believe that all we need is rationality and science, perhaps thus leading to the nihilism and “Meaning Crisis” of the world today. Autonomous anything, disregarding dialectics, tends to be problematic.
In a First World Nation, I can certainly know there are people suffering in Third World Nations, and relative to them, Utilitarianism is a very useful guide. If they are short on food, I know I can donate to help supply food; if families are falling apart, I can adopt; and so on. It is not the case that in a First World Nation an ethic of Utilitarianism cannot help me act ethically at all, and indeed there will be poverty and people suffering in my community whom I could help (not everyone in a First World Nation is equally well off). Again, Utilitarianism is very useful when dealing with problems that are more so “given” by my very facticity and environment, and furthermore when dealing with a Third World Nation, it is safe to say that my Utilitarian calculation of “needing to provide more agriculture” is not an ethical notion which will unintentionally objectify the people of the Third World Nation or fail to take into account nuance: when a thousand people are starving because they are suffering a drought, then the way “to do the most good for the most people” is to provide water. Now, the particular way I go about providing water might not be effective, but I run little risk of objectifying people or misidentifying “the good” in my effort to provide water, seeing as the facticity of the situation itself makes it clear that “water is needed.” In this situation, yes, Utilitarianism is blunt and broad, but a blunt tool will work and is arguably “most fitting.”
When dealing with areas suffering drought or the like, what constitutes “the greatest good for the most people” requires little abstract reasoning, only more so “observation” and empiricism. Furthermore, the situation itself makes people similar in their need, and so it’s far easier and safer to make judgments on what constitutes “the greater good” without unintentionally objectifying people or causing trouble. And in this situation, again, Utilitarianism is a useful moral tool that can help us get the job done. To stress, it is not a problem to have Utilitarianism as part of our “mental toolbox”: the issue is trying to use it “autonomously” and “overfitting” it in situations which require (abstract) “e-valuation” (as we will discuss).
However, the more “abstract” the ethical problem becomes and the less it is backed by or manifest in “facticity” itself, the more broad solutions will fail to work and likely unintentionally objectify people, and the movement from Third World Nations into First World Nations can be seen as a movement from “general problems ‘given’ by the facticity” to “abstract problems ‘bound up in’ particularity.” To explain, consider the question: “How can I do good for my spouse?” This is an important question which lovers should ask, but problematically what one person finds “good” another might find “bad,” and vice-versa. Gary Chapman’s The Five Love Languages comes to mind, as well as personality tests like the Enneagram, but the point is that individuals are deeply different and value different things; thus, “doing good” to Sarah might not be the same as “doing good” to Elizabeth, and so “a broad and wide” Utilitarian calculation regarding “how I could do good to my spouse” will not only fail but easily prove immoral, because though to Sarah I might do something that helps her, for Elizabeth I might do something which makes her situation far worse. A more precise tool is needed, though if Sarah lacks food, then providing food will be for me “to do good for my spouse.” It might not bother Sarah so much that I don’t take trips with her according to her “love language,” because her main focus at the moment is food. However, if food becomes more abundant, I will need to realize that providing Sarah something to eat may no longer prove adequate to “address” her.
The more a nation becomes First World, the more people individualize and are not bound by “common circumstances”: they may all live in the same neighborhood, but one person is a writer while another works for an international corporation, and one person has children while another does not. Yes, if there were a food shortage, everyone would suddenly be bound by “a common circumstance,” and thus Utilitarianism would prove effective as a “moral guide,” but a First World Nation is where such a circumstance is rarer than not. As a result, situations in which Utilitarianism is inadequate are more common (and trying to force it to be adequate would cause “overfitting”), suggesting that applying Utilitarianism to First World problems in a First World Nation would prove problematic (and could easily make situations worse). However, for a First World Nation to use Utilitarianism toward Third or Second World nations, or regarding its own poor and lower classes, could prove perfectly fine and optimal — the issue is if we miss the switch and pivot. Though there is more of an understanding of a need for “mental models” in epistemology, I fear these are not so widely understood as being needed in ethics.
To increase prosperity, perhaps thanks to Utilitarianism, is precisely to decrease the probability that Utilitarianism will prove adequate. Now, rich nations can collapse, so Utilitarianism should always be a tool we have ready to employ, but having disregarded “Value Ethics,” we today may actually prove less prepared to decide what we should do if a nation doesn’t collapse. To be speculative, perhaps there can be a “death drive” and subconscious desire for trouble just so that we have clarity regarding how we should live? Hard to say, though it does seem like “self-destructive behaviors” and/or “self-sabotage” correlate with prosperity (following Lacan and Durkheim), strangely, as it also sometimes feels like people are just waiting for the next stock market crash, war, or natural disaster, as if they lack a system or theory of “e-valuation” by which to direct their lives until something bad happens in their facticity (or unless they can at least anticipate something bad happening). This makes me think of Lancelot by Walker Percy and how everyone is finally happy once a hurricane is about to strike, a point he also discusses at the start of “The Delta Factor,” asking questions like: ‘[W]hy is a man apt to feel bad in a good environment, say suburban Short Hills, New Jersey, on an ordinary Wednesday afternoon [and] why is the same man apt to feel good in a very bad environment, say an old hotel on Key Largo during a hurricane’?³ Percy has a lot to say on this, but here we might wonder if part of the problem is that in First World Nations we need to create and/or realize values for ourselves which can then direct us in how we should live (Nietzsche comes to mind here), but we have generally not been trained to create these values, perhaps because “addressing facticity” (poverty, problems of transportation, etc.) has been our main “director” for most of our history, and up until now Utilitarianism has not only worked but helped bring about incredible prosperity and advancement (the work of David Deutsch comes to mind). And Utilitarianism indeed still has a role to play, but we also need the capacity to “e-valuate,” which might mean we have to “create our own values,” following Nietzsche, which will require incredible work on ourselves as “subjects” (as discussed throughout O.G. Rose). Otherwise, we might fall into a “Direction Crisis,” which is perhaps what “The Meaning Crisis” ultimately suggests.
Utilitarianism cannot tell us how we “ought” to live in a First World Nation, and if we turn to Utilitarianism to be our guide, we seem destined to end up in the “Personal Optimization Culture” that defines Silicon Valley. I’m not entirely against this, please note, as I’m not entirely against Utilitarianism (just “Autonomous Utilitarianism”), but the problem is that “personal optimization” often ends up meaning “how to be more productive according to Capitalism,” which means it assumes a Capitalistic ontology and metaphysic. If by “personal optimization” we mean “optimizing the art of being human,” I have no problem with that, and ultimately thinking must be “concrete” (as Hegel discusses) to be real thinking, but that is different from what is often meant by Utilitarianism. In Hegel, the concrete comes after the negation of abstraction: it does not replace “speculative reasoning.” This brings up the topic of what I call “Phenomenological Pragmaticism,” but I will wait to elaborate on that until (Re)constructing “A Is A.”
“Value Ethics” entails far more precision and particularity but is also more limited, while Utilitarianism is more blunt and general but also wider. Because of this, it can seem “more moral” to be a Utilitarian than to worry about “Value Ethics,” for isn’t it better to help more people than fewer? Indeed, we should always keep Utilitarianism as a “tool in our toolbelt” (a useful “mental model”), and Utilitarianism can help us think at scale about how to help many people at once who are united by a problematic “common circumstance.” The more shared the circumstance, which by definition means it is more facticity-based and empirical, the more Utilitarianism will prove adequate, but the more unique and particular the circumstance, because it is based on personality, individual circumstance, and particularity, the more we will have to shift into “Value Ethics.” We can safely generalize the moral principle that it is “good to reduce poverty,” for this reduces the experience itself of pain and unnecessary suffering (though please note we cannot from this principle assume the effectiveness of a given program meant to reduce poverty), but we cannot generalize the moral principle that it is “good to give people time to be alone” except perhaps to artists who seek solitude for their work, and yet it is not self-evident who is an artist and who isn’t, and some artists might like working around others. “E-valuations” of particular circumstances and people will prove necessary, as will “e-valuations” of ourselves.
Prosperity requires metaphysics, and yet prosperity precisely suggests metaphysics is dead. Prosperity seems inherently pathological, and if we are to avoid that pathology, we will require the ability to choose values for ourselves—which suggests the need for us to be “The Children of Nietzsche.” Indeed, this is discussed throughout Belonging Again, and regarding that book there is a sense in which an emphasis on Utilitarianism is sufficient for ethics and “givens” before a nation prospers and becomes a First World Nation. But once it prospers, which tends to generate the leisure in which “givens” are questioned and discovered, then Utilitarianism will prove insufficient and “Value Ethics” will be required, which for me means we end up in Aristotle, Hume, Nietzsche, and Hegel, as discussed throughout O.G. Rose.
IV
Aristotle emphasizes the role of deliberation and judgment for ethics, and indeed we can see in Nietzsche a need for us to create values for ourselves that we then “commit to” (Matthew Stanley noted the role of “promise” in Nietzsche in “O.G. Rose Conversation #105,” a point which fascinated me). For Nietzsche, moving “beyond good and evil” is for us to move beyond Utilitarianism (which Nietzsche disliked, considering Book V of The Gay Science) in favor of something akin to “Value Ethics,” though these values are created by Children versus primarily realized in a metaphysic or ontology. This would be grounds for a debate, perhaps between “The Value Ethics of Nietzsche” and “The Value Ethics of Aristotle,” but at least here we are debating ethics as we need to in First World Nations so that we can understand humans as characters versus creatures which seemingly ought to be replaced by AI because AI is better at determining utility than we are. And at the end of the day, in addition to not ending up in “Nash Equilibria,” those seem to be the stakes: “Autonomous Utilitarianism” is too weak to protect us from AI and leaves us ill-prepared to face “The Singularity,” as I have discussed with Cadell Last (see Episodes #60 and #97).
For too long, I fear we have thought of ethics primarily as “right action” in a situation, when really ethics is about life (“the real test”). Aristotle discusses happiness, which is ‘an activity of the soul expressing complete virtue.’⁴ “Complete virtue” is a phrase which doesn’t so much mean “being entirely moral” or something like that, but instead can be understood according to the following note:
‘Complete virtue needs a complete life (which need not, however, be a whole lifetime […]) because virtuous activities need time to develop and to express themselves fully.’⁵
Ethics is a matter of a lifetime, and yet thought experiments like “The Trolley Problem” imply that “being ethical” is about making “right decisions” in strange situations. This is an “ethics of being,” not an “ethics of becoming,” and so we find ourselves unable to “become” anyone (and “effaced” versus “negated/sublated”): we are stuck without character, unable to become Children as Nietzsche described. Nihilism then sets in, and how can we be Utilitarian if we don’t care about anything? Sure, it’s good to “extend good to the most people,” but what if we cease seeing value in doing good?
Antinatalism (the belief that birth is immoral) might be the logical end of Utilitarianism in First World Nations, where facticity no longer readily guides us in how we should act. It is one thing to choose not to have children, but something else to claim childbirth is immoral, and this shift suggests something. In First World Nations, it seems like “we just shouldn’t act,” for there is nothing in our facticity which we should address, a sentiment which seems to color the discussion about Global Warming, which (in my view) often talks as if the world would be better off if humans stopped trying to grow their economies and produce. And certainly there is truth to this, but I find it interesting how First World Nations seem to emphasize stopping human action (with many discussing how agriculture was a big mistake, that we should have stayed “hunters and gatherers,” etc.). Where we only have Utilitarianism and nothing “given” to act on, we seem to become very skeptical of acting at all…
What better way to stop human action than to never be born? Indeed, if there are no values we can choose for ourselves and act according to, and facticity doesn’t give us guides by which to determine action, there seem to be no standards according to which to guide action, and so little reason to think “value” emerges from our existence. The last ethical act left seems to be self-sacrifice, but if there are no wars in which to fight and die nobly, nor causes worth fighting for which could cost us our lives (which there often aren’t in First World Nations), then it seems the only form of self-sacrifice left is to wish we were never born and/or to never give birth ourselves. And so Utilitarianism may lead to collapsing birthrates, for having children is selfish if there is no problem in our facticity which the birth of children could help address. In fact, children could devour resources which could be spread to others, yes? Birth brings sin.
Why act if there is no moral action? When our situation and facticity give us direction on what constitutes “good action,” it is possible for us to act morally, but otherwise, if all we have is Utilitarianism, right action cannot be determined. We can act, yes, but why? Considering this, in First World Nations, Utilitarianism may lead to an ethics which favors the removal of humanity, either through Antinatalism, Environmentalism, or AI—I don’t think it is by chance that these sentiments have grown so powerful in the West (and being “toward” these seems part of our “enframement,” to allude to Heidegger). To stop this, for one, we must think of the human brain as a garden we cultivate over a lifetime more than as a computer (a metaphor which makes us replaceable by AI): indeed, worldviews and technologies entail “enframements,” but so do metaphors (as described in “Meaningful and Metaphoric Tendencies” and “Transitional Metaphors Are Not Mixed Metaphors,” both by O.G. Rose): to forgo the art of language is to let others decide what house we dwell in, and some houses are prisons, even diving bells. To avoid such imprisonment, we must see value in life for life’s sake, and we must be capable of “creating values” for which we are willing to die. We need a ‘special function of [the] human being’ which will be forever lost if humanity is replaced, and resources for this can be found in Aristotle.⁶
V
Generally, I am against using “thought experiments” to make ethical claims (for they are unlimited, as discussed in “(Im)morality” by O.G. Rose), but I would argue that the following is not purely hypothetical, for situations like it are happening. Furthermore, my intention is not to deconstruct ethics and show that “all ethics fail,” but instead to suggest the need for a negation/sublation of Utilitarianism into something more. Anyway, the scene:
The young man wanted to kill people, but he understood that he would likely be caught after a single crime, so he found a website that encouraged people to commit suicide and became a regular and tireless contributor. The website was called “Just Do It” and marketed itself as providing a service to help people find the courage to end their suffering. When the man who wanted to kill read a comment in which a young woman said she wanted to commit suicide but wouldn’t because it would cause her parents so much suffering, the man reminded her that it was immoral to have children according to Antinatalism, and that the suffering she caused her parents would be justified and in fact moral. The young woman thanked the man who wanted to be a killer for his wisdom, and the man said he didn’t need to be thanked. He was doing what he wanted to do, and in fact worried he was selfish, but fortunately he was doing what would increase the most good for the most sentient life.
The man never told anyone, but the reason he wanted to kill more than a single person was that he believed overpopulation was destroying the world and would lead to an environmental collapse. It was wrong for humans to destroy the biosphere, and the man wanted to do the most good for the most life on earth, and there were billions more insects than humans, and the man was a Utilitarian but not a bigot who believed human life was more important than other sentient life. All sentient life was equally valuable and good, and removing humans would do the most to benefit the greatest amount of sentient life. Yes, humans would suffer, but they would suffer because they were born, which was immoral. Had humans never been born, there would be no more suffering — the second best thing was for humans to no longer exist. The world was already fallen, the man understood as he searched the “Just Do It” forum to help others do what they wanted to do, but the world could be redeemed.
Could Utilitarianism stop the above situation? Should the above situation be stopped? Why or why not? Please note that the above story was inspired by a video from Tantacrul, “Encouraging the Young to Die — The Most Toxic Site I’ve Ever Seen,” while also integrating the thinking of Peter Singer on Animal Liberation. Considering this, the above is not a thought experiment with no basis in reality (also, “The Mental Health Crisis” is very real). Situations like the one described are happening, and they are happening in a world that is far more in favor of Utilitarianism than “Value Ethics.” Is this good? Is this acceptable? Why or why not?
I have no doubt that there are Utilitarians who could make arguments against suicide as described above, against Antinatalism, and/or against making the basis of ethics “general sentience,” and after these debates we could then judge who we believed was right and who wasn’t. Such assessments are beyond the scope of this work, but what I find very interesting is that judging those assessments will inevitably require values which “Autonomous Utilitarianism” cannot arrive at on its own. Second, I find it interesting that situations like the one described have arisen at all and are even imaginable, a reality which to me suggests Utilitarianism lacks the resources needed to stop, if not suicide, then a kind of thinking that can come to view human life as fundamentally immoral, or at least as a matter of indifference. Yes, Christianity framed humans as sinners, but humans were also made by God to dwell with Him for eternity. The skepticism of human life was tempered with a view of humanity’s uniqueness in the cosmic order; now, without the resources we need to “value” human life as such, it seems we still get the skepticism of human life without the redemption. Is this where Utilitarianism tends to lead us once we gain the successes of Utilitarianism as a First World Nation? After Utilitarianism compels us to do what we can to help as many mothers survive childbirth as possible, does Utilitarianism then lead us to conclude childbirth is wrong?
In terms of what does “the most good for the most sentient life,” Antinatalism seems to follow and suicide is not so clearly wrong—unless, that is, we have reason to “e-valuate” humans as distinct from other animals and human life as uniquely important. But from where do we derive the basis for such a thought? Don’t we need a metaphysics and/or an ontology? If we say yes, then we must move beyond Utilitarianism and consider values, perhaps to land upon a form of “Value Designtarianism” (to allude to Thomas Jockin), but regardless the point is that “Autonomous Utilitarianism” dies.
However, we could answer “no,” and perhaps that is a perfectly reasonable position. If so, then there is no ethical foundation for considering Belonging Again (Part II) by O.G. Rose, and in fact the book might be immoral in encouraging us to become Nietzschean Children, thus prolonging humanity and the destruction of the biosphere. That could easily prove to be a consistent position, and “The Mental Health Crisis” a way for humanity to redeem itself in terms of Sentient Utilitarianism, to prove useful for once…
Before moving on, please note that in Longtermism (notably the work of William MacAskill, author of What We Owe the Future), we treat the future like a foreign country, and indeed Utilitarianism can help me act today in a manner that makes the world a better place for future generations. Of all the more Utilitarian philosophies, Longtermism is most appealing to me, but I’m not sure if Longtermism has the resources to combat Antinatalism without slipping into “Value Ethics,” and Antinatalism seems to me to be its most important opponent (though please note MacAskill himself might have no trouble with incorporating “Value Ethics”—I’m not sure). If birth is inherently immoral, then any philosophy which promotes birth for the future is problematic. Longtermism could reply that if we make the world a better place, then it will be good that people are born, but the Antinatalist may reply that “mental health problems” are highest in the West, which would suggest that the wealth we think will make life better may only make it worse. Why in the world should we think that any metric according to which we “make the world or future better” is a metric that won’t just be a mechanism of self-deception? For Antinatalists, Longtermism cannot assume that it knows how to make the world a better place at all, for where food is widely available, depression seems more common.
All this isn’t to say Longtermism has no reply to Antinatalists (I don’t know), but I personally would wager that this reply will likely venture into “Value Ethics” perhaps without realizing it. Perhaps the argument is that “some suffering is good,” but if so that would require an “e-valuation” of suffering, which would mean the Utilitarian defines “good” according to an abstract principle which isn’t given in empirical facticity, a move which would suggest that “Autonomous Utilitarianism” is impossible. And indeed, I see no way to counter Antinatalists but by shifting into “Value Ethics.” Or perhaps we shouldn’t counter them? Would that be wrong?
VI
In theology and legal reasoning, there can often be a problem of having to “honor something old in the new,” which is to say we cannot simply do “what we think is best” now but must also take into account the tradition of Christianity (for example) and the Bible, while judges have to honor previous rulings (stare decisis) and embed their decisions in “the internal consistency” of the law. If not, there could be chaos both in religion and in law, for new decisions would not have to be grounded in the pre-existing structure. As a result, everything could become fragmented and unstable, making “shared intelligibility” impossible.
Similarly, I wonder if a problem with Utilitarianism is that it must justify itself only according to a result, and that the result is assumed to “honor the old” in being good for the most people. There is far less emphasis on questioning how the result might fit into the order of everything that came before (and if the result really “honors” the society and the preexisting). Rather, the emphasis is just on “doing the most good for the most people,” and what does it matter if we “honor the old” in the process? This can seem a fair point, but it seems to lead to fragmentation and the tearing apart of the social order (in which we are thinking about how we might help other social orders). Furthermore, focusing mostly on the result can lead us to develop a habit of thinking that doesn’t keep track of nuance, particularity, and difference. When dealing with a “shared circumstance of poverty,” there is not much need for nuance, but if we spend our life dealing with such circumstances, we may lack a mental habit useful for determining how to do good for Sarah in particular, a habit which always keeps in mind her particularity and particular values as we try to do “the good.” We must think of what we will do next while always keeping in mind who Sarah was from experiences in the past, and if the future doesn’t honor the past, Sarah will be dishonored. It will feel as if we never knew her.
“Value Ethics” seems to require keeping many dimensions in mind all at once, many of which are abstract, nuanced, and particular, while in “a shared circumstance” like a drought many of these dimensions don’t need to be worried about, for they are addressed by the facticity itself. The habits of thinking found in Utilitarianism are not what are found in “Value Ethics,” and learning both habits is paramount.
VII
To review, where a people only have Utilitarianism, they will easily fall into a “Direction Crisis” if they become prosperous, because it will no longer be “given” by their facticity how they should act and what they should do. What constitutes “right action” will be more particular and nuanced, and if we claim Utilitarianism can still work, that “doing good for the most people” simply requires us to address every particular good, then at this point we will have to integrate “Value Ethics” into our Utilitarianism, thus making it dialectical, and that’s perfectly fine. Frankly, that is basically what I am arguing for, and those who call themselves “Utilitarians” who actually do this are people I have no trouble with at all. However, I do fear that not directly acknowledging “Value Ethics” can create a problematic impression that we don’t need to worry about values, setting us up for trouble.
To address what is “good for Sarah,” we will have to know Sarah, and what is good for her will not necessarily be what is good for Elizabeth, and ultimately we will have to ask questions about “Who is Sarah?” “Is what Sarah wants actually ‘fitting’ for Sarah to want?” “Is Sarah’s understanding of ‘the good’ self-destructive?” And so on — a host of questions come flooding in once we begin particularizing our ethical considerations relative to the character of the person we are considering — considerations like those systematically thought through by Aristotle. If we ignore the Nicomachean Ethics, prosperity could leave us aimless, and without direction, we may find ourselves wandering into trouble just so that we have a sense of what we should do. That, or we might conclude it would be better if we weren’t around to wander at all.
More could be said on ethics according to Aristotle, but hopefully these points are covered well enough in “Dialectical Ethics,” “(Im)morality,” “Absolute Moral Conditionality,” “The ‘Such/Lack Solution’ to the ‘Is/Ought Problem,’ ” and “A Valuable Life,” all by O.G. Rose. Every situation is one-of-one, and so Aristotle would have us gain the knowledge and wisdom we need (according to “experienced patterns”) so that we might discern justly and virtuously what we should do in every “one-of-one” we encounter. Aristotle can seem to emphasize knowledge at the expense of action, when really Aristotle believes that our goal should be knowledge so that we don’t just focus on action which applies to a single situation and then proves “unfitting”: to have knowledge is to have an ability to discern what is fitting across situations. Critically, this is not Relativism, but what I call “Conditionalism,” a notion which I think is desperately needed today. Believing our only choices are between “relativism” and “objectivism,” we are unable to reason morally and situationally as Aristotle would have us, and ultimately, since we associate “objective” with “right” and Utilitarianism seems more “objective” in being more “facticity-based,” Utilitarianism comes to be seen as all we need. And so the world turns…
For Hume, moral theory is dangerous because we can “outsource” ethical action to our theories and not take on the “existential burden” of rightly discerning in our situations and circumstances what we should and should not do. This is why Hume argues that we cannot arrive at “ought” from “is” in favor of “suchness,” because Hume wants us to determine ethics from “the one-of-ones” of “common life.” He will not let us outsource our ethical responsibilities to theories, but I fear AI will give us a similar temptation soon (as discussed at “The Net (34)”) — but more on that at another time.
It could not have been known ahead of time that “the right thing to do” in the Cuban Missile Crisis was to pretend a letter from Russia wasn’t received, just as it cannot be known except by being there how many straws a McDonald’s needs to order this week (alluding to Friedrich Hayek). Calculation is impoverished compared to involvement, and in Aristotle we find an ethic of involvement which forces us to think about values, just like Nietzsche does. And the great trouble with Utilitarianism is that it works well until it does not, at which point it might be too late. The stakes are high.
To close, Aristotle says something that should be noted:
‘Wisdom produces happiness not in the way that medical science produces health, but in the way that health produces <health>. For since wisdom is a part of virtue as a whole, it makes us happy because it is a state that we possess and activate.’⁷
Is it true that the key to wisdom is not so much “adding” something to our lives but “subtracting” something that’s getting in the way of the expression and realization of something preexisting? This is a point Thomas Jockin suggested in our discussion at Other Life, and it is something we will have to consider moving forward. If ethics is more a matter of “design” and “art” than “utility” and “outcome” (if “Designitarianism” is better than “Utilitarianism”), a matter of cultivating a garden over a lifetime rather than arriving at the best decision for a linear situation today, then it would follow that “Value Ethics” is about cultivating seeds all of us already possess, right now. The tending of our garden can start immediately: we do not have to wait until we stumble into a “Trolley Problem” to do what’s right.
What is it that we all contain that makes virtue and values possible in us all? Well, I believe part of it is that we all can be the conditions for beauty in the world, that the fate of beauty is the fate of us.
.
.
.
Notes
¹Aristotle. Selections. Translated by Terence Irwin and Gail Fine. Indianapolis/Cambridge: Hackett Publishing Company, Inc., 1995: 410 (1144b).
²Aristotle. Selections. Translated by Terence Irwin and Gail Fine. Indianapolis/Cambridge: Hackett Publishing Company, Inc., 1995: 350 (1095a).
³Percy, Walker. “The Delta Factor”. The Message in the Bottle. New York, NY: First Picador USA Edition, 2000: 3.
⁴Aristotle. Selections. Translated by Terence Irwin and Gail Fine. Indianapolis/Cambridge: Hackett Publishing Company, Inc., 1995: 363 (1100b).
⁵Aristotle. Selections. Translated by Terence Irwin and Gail Fine. Indianapolis/Cambridge: Hackett Publishing Company, Inc., 1995: 357 (1098b).
⁶Aristotle. Selections. Translated by Terence Irwin and Gail Fine. Indianapolis/Cambridge: Hackett Publishing Company, Inc., 1995: 356 (1097b).
⁷Aristotle. Selections. Translated by Terence Irwin and Gail Fine. Indianapolis/Cambridge: Hackett Publishing Company, Inc., 1995: 408 (1144a).
.
.
.
For more about Other Life, please see the website and consider joining the community today! Also, check out more by O.G. Rose here, and pick up our new book today: