Evaluating Unclear Limits, Stakes, and the Meta-Structure of the Dialogue About AI
Reviewing "The Net (33)"
“The Net (33)” started with a consideration of AI and “The Frame Problem,” as recently discussed by Stephen E. Robbins (Video 78), a problem which Thomas Jockin suggested can also be found in Aristotle. Basically, the problem is that it requires “common sense” to know (without thinking about it) what the next frame of time “could” contain. It is not naturally the case that “anything is possible” even if “many things are possible,” and being able to tell the difference between “anything” and “many things” is a product of “common sense.” I simply know that it is “possible but not probable” that a lion will storm into my office a minute from now, and the question is whether an AI could have that “common sense” as I do (regarding what the next “frame” of time realistically could contain). Robbins notes that this is a profoundly difficult problem, and basically suggests that humans would have to program into AI the difference between “possible” and “probable,” but how could a human think up all the possible scenarios which have to be bracketed out? Likewise, how could the human think up every “probable scenario” which every new frame of spacetime contains? AI would seemingly have to program itself regarding this, but how? Perhaps it’s possible, but it’s hard to say, suggesting “the problem of evaluation” which this discussion orbited.
We then considered whether AI could totally replace human intelligence, and Chetan made the point that AI doesn’t have to do so perfectly, only prove “similar enough” that we are unable to evaluate (without great effort) the difference between something “human-made” and something “AI-made.” This alone would prove enough to change the world, and so the debate over whether AI can “technically” replace human intelligence might be beside the point. Furthermore, if AI “changes everything,” there is no clear standard according to which we could determine whether there is a difference, so even if AI is that radically different, we might not be able to realize it is such.
Tom Lyons suggested that a question of AI is whether there is a point at which “quantity becomes quality” (which is an ontological question), and Javier noted that we should never underestimate the human capacity to relate to things (noting Buber), which suggested that regardless of what becomes of AI, it will never prove unrelatable to humans (which might be a unique power of ours). Javier also made the point that AI cannot die, so is death the limit according to which we should evaluate AI and our “otherness” to it? If so, Heidegger might be relevant to revisit, as might be Lacan, for if AI can “do anything” and we can use AI, we will have no excuse for not getting what we want, which suggests we will have to confront “The Real” or endless fantasy. Hard to say.
Alex Ebert made the great point that the topic of AI is so tricky because there is no “obvious limit” to what AI can and cannot do, which is to say it is not clear if AI will be able to do everything humans can do, leaving humans with nothing by which they can “protect their otherness.” Javier Rivera made the point that in Martin Buber “empathy” and “mystical experience” can be problematic because they erase “difference” into a “unified sameness,” meaning the relationship itself is lost, and yet Buber’s prime focus is defending and protecting the relationship. Relationships are primary, essential, deep, and ontological, and whatever might erase or replace them must be avoided, and that includes both empathy and mysticism. We eventually discussed how the trouble with AI is that it can potentially erase and replace “otherness,” and if this were to occur we could lose relationship.
When the wheel was invented, it was clear that it might replace horses, but it was also clear that it would not replace music. Until AI, every invention has brought with it a sense of what it might replace and what it might not, but with AI it is not clear. We are “limited from experiencing the limit,” per se, which automatically brings to mind Hegel and “The Absolute” (but more on that later). For now, if we cannot be sure of what AI can or cannot do, we cannot say AI won’t impact realms like sexuality, relationships, creativity, or the military-industrial complex, and those areas seem of particular importance to consider.
If we cannot determine the limits of AI, we cannot say for sure that it can replace us, and if it does, we might not be able to tell that it has. Furthermore, if AI can replace us, it is not “other,” and that means it will prove “practically equivalent” to a problematic “mystical experience” in which we are consumed and lost (A/A vs. A/B), a point highlighted again by Javier. Also, is there something about us that wants to imagine AI destroying us all? As what Girard called “the scapegoat mechanism” fails, is the mechanism of masochism returning? Indeed, the question of how we might relate to AI brings in questions of sexuality and love, which raises a strange question: Could we have sex with AI and still be human (or gain sexual gratification by being humiliated by it)? Sex between people does not efface their distinction or humanness, but might that level of intimacy with AI cost us our humanity? Or could something “emerge” which otherwise would not? A child? Could AI and humans generate a child? What does that even mean? I’m not sure, but AI and the question of Libidinal Economy seem paramount to consider.
Javier pointed out how perhaps our concern with AI is tied to how we have seen people avoid human relationships through technology, citing the Tamagotchi craze of the 90s, and also the popularity of anime and CGI girls in some areas of the world. Indeed, the movie Her highlights how we can fall in love with AI, and it is perhaps that possible relationship that confuses and even disgusts us. To this point, I’m led to consider “The Left Hand Path” of some mystical and religious movements, which is considered a quick way to Enlightenment, but also very dangerous. It is considered vile and sinful to eat beef in Hinduism, and according to “The Left Hand Path,” this is exactly what the Hindu can do to deconstruct and destroy himself, as necessary for Enlightenment. More could be said, but might AI force humanity as a whole down “A Left Hand Path,” which means we will suffer the utmost depths of disgust and horror, and then come out Enlightened quicker than we could have any other way? Perhaps. And perhaps the only way we might withstand this difficult challenge is precisely through “A Final Absolute Choice” where we ultimately believe there is something apophatic about ourselves which AI cannot replace? Perhaps it is only through such a “leap of faith” that we might prove able to sustain ourselves through the challenge? Hard to say.
There are easily more, but “military,” “creativity,” and “relationships” seem to be the areas which most disturb us regarding AI possibilities, and yet all three of these seem most obviously libidinal. Shouldn’t these be safe from AI? It would seem that way, and yet what horrifies us is precisely that we don’t know what AI will do once we give it military power, as we don’t know what it will do to human creativity (will it help it or replace it?), as we don’t know what it might do to community (will we atomize like Japan?). Might AI do everything which humans can do, thus leaving us in the hell of boredom? We simply don’t know: we cannot get a clear sense of “the limit” of AI. Perhaps everything will be fine. Perhaps it won’t be.
We seem “limited from identifying the limit” of AI, which again brings to mind Hegel’s thinking on Kant’s noumenon, where we are “limited from experiencing that limit,” which means we must experience our limit “as limitlessness.” AI seems limitless, so we have to decide if that is because it is limitless, or if we are only “limited from experiencing its limit.” Perhaps there is indeed something AI will never be able to do, and though perhaps it is something humans also cannot do (meaning it cannot be a source for defining “human otherness,” as needed for a Buberian “relationship”), it is also possible that this “something” is something humans can do which AI cannot. And perhaps we must choose and decide if this “something” expresses “the apophatic essence” which is the subject of “The Final Absolute Choice.” Is AI our ultimate projection and manifestation of the noumenon? Will AI force us to choose between Kant and Hegel? Time will tell (or not).
.
.
.
For more, please visit O.G. Rose.com. Also, please subscribe to our YouTube channel and follow us on Instagram, Anchor, and Facebook.