
Unintentional Faith in Technology: Outsourcing Meaning to AI

Ted Chiang’s (2024) reflections on AI and creativity highlight a profound concern: the troubling reduction of art and creative expression to a mere series of mechanical choices, facilitated by tools like ChatGPT. In Chiang’s vivid analogy, AI operates much like a forklift in a weight room — an ostensibly useful device that, rather than strengthening our cognitive and creative faculties, risks atrophying them. He cautions that by automating elements of creativity, we may diminish our role as creators and interpreters of meaning, turning the rich, subjective experience of art into a hollow, automated production line.

At the core of Chiang’s unease is the tension between AI’s burgeoning capabilities and the philosophical question of what constitutes “true intelligence.” While AI can emulate certain aspects of human cognition, it fundamentally lacks the profound, subjective intentionality that defines genuine creative thought. By lowering our expectations — both of the art we consume and of our own creative potential — AI threatens to redefine not only art and intelligence but the very essence of what it means to be human.

Chiang’s critique extends beyond the quality of AI-generated art to the broader societal implications of embracing AI as a creative partner. He argues that perceiving AI as merely a tool to “improve and assist” creativity is a profound misconception. This perspective clings to the illusion that AI is just an extension of human ability, failing to acknowledge its potential to fundamentally alter our relationship with creativity. His critique thereby unsettles the optimistic belief that AI can democratize creativity or enhance human potential.

Art is not simply a series of deliberate choices but a deeply intricate, often unconscious process of decision-making — one that algorithms and text prompts cannot fully encapsulate. This notion echoes philosopher Alain Badiou’s (2004) concept of “thought” as something that emerges through ruptures in existing knowledge frameworks, rather than something easily systematized or mechanically reproduced.

Similarly, philosopher Jacques Lacan sees the subject as a void within knowledge, continually redefined through acts of creation or insight that disrupt established understanding. From this perspective, the creative act transcends mere tool usage; it navigates and manifests the subjective void within existing knowledge constraints — something AI, as a learning machine, cannot replicate. Thought is irreducible to objective laws or fixed structures, and its universal medium lies in its engagement with the complexities arising from its societal integration.

The true danger lies not merely in the potential blandness of AI-generated content, as Chiang apprehends, but in the broader cultural shift towards diminishing the value of human intentionality and subjectivity in the creative process. Just as capitalism reduces labor to a mere function within a larger system, AI reduces creativity to a mechanical process, stripping it of its subjective and intentional dimensions. The so-called “equality” between AI-generated texts and human creativity is less about the intrinsic quality of the output and more about its role within a system that prioritizes efficiency and productivity over genuine artistic expression.

The prevailing logic of AI, deeply intertwined with the symbolic and imaginary aspects of human subjectivity, reveals a profound disjunction between desire and language, extending well beyond the confines of large language models. Words, despite their extensive use, inherently fail to encapsulate meaning in a complete or satisfactory manner. This gap underscores a pressing need to authenticate our experiences—an endeavor that language, with all its capabilities, struggles to achieve fully.

Am I who I say I am?

Language is often viewed as a neutral or benign medium, shaped by human intentionality. This functionalist perspective assumes that language serves as a transparent conduit for representing reality, faithfully mirroring the world as if it were a peaceful and accurate medium of depiction (Zizek 2008).

However, this assumption overlooks a crucial aspect: even when we share the same empirical reality, our understanding of it is mediated through symbolic representation. This process creates a spectrum of ‘realities’ influenced by symbolic interpretation rather than an unfiltered, immediate experience (Zizek 2009). In essence, we do not engage directly with the immediate reality of objects or events. Instead, we interact with ‘false’ realities—distortions or constructions shaped by symbolic images. These images, while flexible, ultimately frame and confine our perceived reality.

Generative AI, in this context, exacerbates this issue. It not only fosters a dependency that diminishes our cognitive engagement but also serves as a reflection of what scholars identify as our unconscious repression. Slavoj Zizek (2023) suggests that AI reveals the repressed aspects of our consciousness, while Duane Rousselle (2023) argues that it mirrors the unconscious itself. Avantika Tewari (2023) further contends that AI embodies a ‘stain’—a manifestation of the constitutive repression that shapes our reality from artificiality, sustaining engagement through this distortion. In this light, AI’s influence is not merely about simplifying or automating tasks but about reinforcing a specific mode of engagement that perpetuates a constructed reality. It highlights the inherent limitations and distortions in our symbolic systems, shaping how we perceive and interact with the world.

Chiang contends that while AI may find a niche within the broader creative economy, it falls short of replicating the intentionality and meaning-making intrinsic to genuine artistic labor. In line with this, efforts by “design justice” (Henry et al. 2024) activists to embed intentionality into technology’s design aim to enhance user choices. However, this expanded range of options merely reflects AI’s encroachment into the creative sphere, echoing capitalism’s reduction of labor to a commodity that is, paradoxically, devalued more than ever. In this context, the unique, subjective aspects of human creativity are subordinated to the system’s relentless demands for immediate functionality.

Further, the correlationism between intentionality and meaning suggests that our understanding of reality is always mediated through human cognitive frameworks, emphasizing how our choices of words and actions shape our perception. Yet this view often overlooks a critical disconnection between language and our pursuit of an ideal of “intelligence and creativity” that remains perpetually elusive. In the realm of AI systems like ChatGPT, this disconnection becomes apparent: these systems generate responses based on intricate patterns and correlations from vast datasets but inevitably encounter fundamental gaps or inconsistencies that their algorithms cannot fully resolve (Gewirtz 2023).

The structural limits or inherent “flaws” of AI systems undermine the illusion of coherence and completeness they project. Rather than merely reflecting the machine’s inherent imperfections, these flaws highlight a deeper, more complex dynamic—one that reveals the tension between the AI’s limited capabilities and the inherently incomplete, inconsistent nature of human subjectivity. Just as no thing in the universe can claim an independent existence from its microcosmic emergence, AI’s potential and capabilities are bound by its foundational algorithms, data inputs, and developmental constraints.

Thus, AI’s promise of rationality and precision falls short when confronted with the unpredictable elements of human interaction, which challenge its claims to total understanding and control. This issue becomes particularly evident in attempts to integrate symbolic reasoning (Platzer 2024), such as sentiment analysis on social media platforms, which engenders a distracted self-perpetuation on attention-harvesting platforms (Read 2014) while still leaving room for boredom and disenchantment. While AI systems strive to maintain coherence and efficiency, then, they remain subject to inherent limitations and a continual process of adaptation and adjustment.

Enjoyment at the Heart of the Paradox of Intentionality

The accelerated pace of technological production within digital platform capitalism has profoundly altered traditional rhythms of labor and creativity, morphing them into a mechanized, commodified process that prioritizes market demands over the depth and intentionality of human expression.

AI does more than streamline production; it transforms creativity into a cog in a system that prioritizes efficiency and marketability over genuine artistic expression. AI is not just a tool but a component of a broader system that redefines creativity as a mechanism for generating surplus enjoyment within the digital economy (Zizek 1999). This shift reflects the underlying capitalist logic driving AI’s role in creativity, where the focus moves from the subjective act of creation to the production of consumable content valorised in its data-form.

This interaction embodies a complex interplay of pleasure and suffering, extending beyond mere satisfaction. AI systems, such as GPT, foster a form of enjoyment that is both expansive and superficial. The rapid increase in digital stimuli facilitated by AI leads to a surge of fleeting, often shallow pleasures. This enjoyment is not just about consuming content but involves deeper engagement with the digital system’s demands for constant interaction and participation.

The relentless churn of AI-generated content can be seen as a method of “managing” the subjective void through superficial engagement. By continuously providing new content and interactions, AI systems sustain a cycle of engagement that keeps users occupied. However, this may ultimately fail to address the more profound, constitutive void. This reflects Lacan’s notion that the pursuit of enjoyment often compensates for the inherent lack within the subject, manifesting as a drive for digital dissatisfactions. Users, in turn, engage with the system in a way that contributes to a surplus of data, which is used to further train AI—a form of undead engagement where users find a paradoxical enjoyment in their own destitution.

While ChatGPT functions as a rationalization tool, aligning individuals’ egos with the techno-economic system, it also serves as an over-identification machine, reflecting and amplifying the contradictions inherent in our collective desires (Fidaner 2024). Fidaner argues that having a robot perform such functions, enabling a stable cyclicality, should be viewed as a chance for humans to pause and rediscover their own lost voices.

This echoes the Lacanian notion of subjective destitution, which implies reduction to an object rather than de-subjectivization, and results in a form of dehumanization (Zizek 2023). After experiencing subjective destitution, a subject loses their “human” depth—characterized by a rich inner life—becoming a pure void. This reduction allows the subject to experience interactions in a radically new way, revealing a revolutionary potential by creating a void in the fabric of history itself.

References

Badiou, Alain. “Eight Theses on the Universal,” 2004.

Chiang, Ted. “Why A.I. Isn’t Going to Make Art,” The New Yorker, 2024.

Fidaner, Isik Baris. “Humanity Can Create Realistic Utopias with the Help of Robots,” Zizekian Analysis, 2024.

Gewirtz, David, and Alyson Windsor. “6 things ChatGPT can’t do (and another 20 it refuses to do),” ZDNet, 16 February 2023, accessed 2 September 2024.

Henry, Nicola, et al. “A ‘design justice’ approach to developing digital tools for addressing gender-based violence: exploring the possibilities and limits of feminist chatbots.” Information, Communication & Society, 2024, pp. 1-24.

Platzer, Andre. “Intersymbolic AI: Interlinking Symbolic AI and Subsymbolic AI,” 2024.

Read, Jason. “Distracted by Attention,” The New Inquiry, 18 December 2014, accessed 2 September 2024.

Rousselle, Duane. “ChatGPT, From a Window in Baltimore,” Sublation Magazine, 2023.

Tewari, Avantika. “I Chat, Therefore I am!” Sublation Magazine, 2023.

Zizek, Slavoj. “ChatGPT Says What Our Unconscious Radically Represses,” Sublation Magazine, 2023.

Zizek, Slavoj. “Subjective destitution in art and politics: From being-towards-death to undeadness,” Enrahonar. An International Journal of Theoretical and Practical Reason, vol. 70, 2023, pp. 69-81.

Zizek, Slavoj. Violence. Big Ideas/Small Books. Pan Macmillan, 2008.