When must we lobotomize ourselves?
Rhetoragnosia, superhumanly persuasive AI, and truth.
Rhetor, from Greek “rhētōr,” meaning “speaker” or “orator”.
Agnosia, from Greek “agnōsia,” meaning “ignorance” or “not knowing”.
In his 2002 short story “Liking What You See: A Documentary,” Ted Chiang explores calliagnosia, a technology that eliminates your ability to see physical beauty. I strongly recommend reading the story - major spoilers ahead.
Liking What You See centers on a debate at a college campus over making calliagnosia, or “calli”, mandatory for students. Proponents argue that calli addresses “lookism” - prejudice against unattractive people - and lets users see the true inner beauty of others rather than their outward appearance. Opponents claim that calli stunts users’ ability to appreciate natural beauty and prevents them from developing, on their own, the maturity to look past physical appearance.
Calli is an incredibly interesting concept which I’d like to explore more deeply at some point. But this post isn’t about calli.
The anti-calli faction is supported by a marketing firm, Wyatt/Hayes, which releases a supernaturally persuasive video against calli. The video is revealed to have been edited to enhance the speaker’s “vocal intonation, facial expressions, and body language”. Calli proponents struggle to respond to these new developments, with some considering newer technologies that block out facial expressions or intonation entirely in a bid to defend themselves.
Learning to speak on the speech team
I wonder a lot about persuasion and rhetoric. In high school, I competed in extemporaneous speaking, an activity which involves delivering a persuasive speech with minimal preparation. While topics were usually limited to current events, it was still quite difficult to stay on top of everything going on in the world at a deep enough level to write and practice a 5-7 minute speech from scratch with only 30 minutes of prep time.
One strategy we used was modular speeches - basic outlines that could be adapted by swapping in topic-specific theses and evidence. These outlines came with stock openers (hooks), argument frameworks, and other rhetorical flourishes designed to sound reasonable even if their relevance to the subject matter was tenuous. Good speakers would tie the generic framework into their topic, which was much faster and more reliable than composing the entire speech from scratch. I remember getting to the point where I could deliver semi-decent performances with almost no knowledge of the underlying topic - an incredible reward-to-effort ratio from the perspective of a lazy kid.
But something about it left me feeling unsettled - learning that the packaging of an argument can be almost entirely divorced from its content.
It’s difficult to separate our perceptions of a person from our first, visual impressions of them. It’s similarly difficult to separate the factual or logical content of a persuasive argument from the fashion in which it is delivered. Chiang writes about superhuman persuasion over video through enhanced nonverbal cues, but the same applies to voice and text, though perhaps to a lesser extent. Just as an absent-minded student might gloss over a hidden division by zero in an otherwise compelling proof that 1 = 2, an absent-minded reader can all too easily be pulled in by writing that seems well-reasoned but on further examination lacks substance.
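(For the curious, the classic version of that proof runs: assume a = b; then a² = ab, so a² − b² = ab − b², which factors to (a − b)(a + b) = b(a − b); cancel the common factor and you get a + b = b, i.e. 2b = b, so 2 = 1. The cancellation is the hidden division by zero, since a − b = 0. Every step looks routine, which is exactly the point.)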
(Despite my resistance, I still managed to learn some things about rhetoric and the world from extemp. I think that would be less true if I had competed today, though I may have learned some things about prompting LLMs instead.)
Some theories of rhetoric from an evolutionary perspective
A minor digression - what does it mean for an argument to seem well-reasoned? And why do rhetorical techniques work in general? I think there are several plausible evolutionary psychology-based perspectives on rhetoric.
First, ideas which are delivered persuasively may be more likely to be true. A person who puts a lot of effort into making their position sound good probably put a lot of effort into researching the argument as a whole. Story-telling lets the speaker transfer the feelings of truthiness and tradition associated with familiar stories onto their position. Confidence and appeals to emotion fall under this category as well: a speaker who argues passionately signals a deeper confidence in the idea and raises the stakes by putting their social reputation on the line. If they’re wrong, their credibility suffers more severely, so they’re more likely to have put greater effort into ensuring they’re right.
Second, effective rhetoric makes you believe that the speaker is part of your ingroup and thus that their ideas are aligned with (and will result in the greater adoption of) your values, regardless of their truth value. From this lens, story-telling demonstrates that the speaker is the kind of person who knows your stories deeply enough to find the underlying patterns and frameworks (such as the monomyth) and is therefore more likely to align with your values. Many other rhetorical techniques, such as social proof and reciprocity, fall into this category. I think charisma in general - humor, relatability, and likeability - falls here as well, since understanding the audience intimately is a prerequisite to deploying these techniques effectively.
In small tribes where social reputation matters and deception is costly, these heuristics likely served us well. The most persuasive person was often the most knowledgeable, most invested, and most aligned with group interests.
But, as they are wont to do, our monkey brains become liabilities in our modern information environment.
The Great Decoupling
Conversations around rhetoric sometimes paint it as a bad thing - a distraction from the pure truth-seeking that leads to better outcomes. I think there are good reasons why rhetoric is effective and that it in fact helps us find better ideas quickly. In short, the persuasiveness of an argument is often a reasonably good indicator of its truth value.
It would be easy and convenient to draw the line there, but unfortunately it’s not a stable equilibrium.
We’ve seen time and time again that there are bad actors in the system, those who deploy charisma and rhetoric with the end goal of seeming right and convincing others rather than finding or spreading the truth. And so, perhaps since the invention of language, the persuasiveness of an argument has become increasingly decoupled from its truth value.
The advent of mass communication - whether through the printing press, radio, television, or the internet - has only accelerated this decoupling. By making it easier to reach larger audiences, mass communication has increased the returns on rhetoric, making it more and more profitable to invest solely in the delivery of a message without regard for its factual content. Demagogues, advertisers, and influencers ceaselessly push forward the frontiers of this decoupling. Radio allowed charismatic dictators to reach millions simultaneously. Television greatly increased the importance of image and appearance in political discourse. Social media algorithms reward content that generates engagement over content that promotes understanding. Each technological leap has made it easier to optimize for persuasion over truth.
AI will be the final nail in the coffin.
LLMs completely decouple persuasiveness and truth value
The more I engage with AI, the more I worry about its effect on our ability to distinguish between the truth and true-sounding nonsense, an ability that is already being overwhelmed in the algorithmic age. Some researchers are already suggesting that LLMs can be as persuasive as humans, or more so, in certain contexts. Both OpenAI and Anthropic have set explicit policies against the use of their LLMs for political campaigning and the generation of misinformation. But the problem runs far deeper than political manipulation.
LLMs democratize access to superhuman rhetorical ability and its deployment at scale. This will have structural consequences for our ability to communicate at the most fundamental levels.
An LLM can generate text that sounds like it was written by a domain expert, complete with appropriate jargon, confident assertions, and seemingly sophisticated reasoning - all while being fundamentally wrong about key facts. Unlike human charlatans, who might slip up or show inconsistencies, LLMs can maintain a facade of expertise with superhuman consistency.
Where human deception requires individual effort and carries reputational risk, AI-generated content can be produced at massive scale with limited cost and consequence for the generator. We’re already seeing this with AI-generated academic papers, fake reviews, and synthetic social media personas. If you thought bots were bad in the early 21st century, you’re in for an incredibly rude awakening over the next few years.
Social media and search algorithms are already creating echo chambers. Future AI systems will be able to amplify and leverage these dynamics, tailoring their rhetorical approach to individual psychological profiles and optimizing their persuasive techniques for maximum effectiveness on each specific person. This is Chiang’s enhanced video taken to its logical extreme - not just improving delivery but customizing the entire argument structure to exploit individual cognitive biases.
As AI becomes better at mimicking the signals we’ve traditionally used to assess credibility - institutional affiliation, writing quality, internal consistency - these heuristics become unreliable.
The result is a kind of rhetorical inflation: as everyone gains access to superhuman persuasive ability, the baseline level of sophisticated-sounding argumentation rises dramatically, but without any corresponding increase in the actual truth value of the claims being made.
We’re destroying our ability to communicate and connect at an unprecedented scale.
When must we lobotomize ourselves?
Calli represents a rebellion against enhanced beauty, giving individuals control over how they want to engage with aesthetic manipulation.
As superhuman persuasion becomes more common, we will need a way to protect ourselves - to opt out of rhetoric. Naively, this could look like browser extensions that strip emotional language from posts and present only the factual claims; AI assistants trained to identify and flag rhetorical techniques, in the hope that they’re slightly less insidious once out of the shadows; social norms around clearly labeling opinion versus fact, emotional appeal versus logical argument.
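To make the naivety concrete, here’s a minimal sketch of the “strip emotional language” idea - the word list and the flagging logic are invented purely for illustration, and a real tool would need something far richer than keyword matching:

```python
import re

# A tiny, hand-built lexicon of emotionally loaded terms. Purely
# illustrative - a real filter would need a model, not a word list.
LOADED_WORDS = {
    "outrageous", "disastrous", "shocking", "devastating",
    "amazing", "terrifying", "unbelievable", "betrayal",
}

def flag_rhetoric(text: str) -> list[tuple[str, list[str]]]:
    """Split text into sentences and report the loaded words in each."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    report = []
    for sentence in sentences:
        words = re.findall(r"[a-z']+", sentence.lower())
        hits = [w for w in words if w in LOADED_WORDS]
        report.append((sentence, hits))
    return report

if __name__ == "__main__":
    sample = ("The new policy is a shocking betrayal of voters. "
              "It raises the tax rate from 20% to 22%.")
    for sentence, hits in flag_rhetoric(sample):
        label = "FLAGGED " + ", ".join(hits) if hits else "ok"
        print(f"[{label}] {sentence}")
```

Even this toy version exposes the core weakness: it catches “shocking betrayal” but would wave a calmly worded lie straight through, and a motivated writer can route around any fixed lexicon.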
In practice, I anticipate it will be much more difficult to separate rhetoric from facts. Chiang explains calli as working by blocking certain neural pathways associated with the recognition of beauty, as exemplified by clear skin, symmetry, and facial proportions. A precise definition of beauty can be hard to pin down; unfortunately, defining truth is even harder - philosophers have spent millennia trying to establish the natures of knowledge and truth. In offloading the separation of rhetoric from fact to a third party, we inherently agree to hand over the duty of defining what the facts are.
Will our ability to define and understand the truth outpace a bad actor’s ability to muddy the waters?
Even if we can develop such cognitive defenses, will it be worth it?
There’s real value in rhetoric and persuasion that goes beyond manipulation. Good rhetoric can make important truths more accessible and memorable. The civil rights movement succeeded not just because it was morally right, but because great orators like Martin Luther King Jr. could communicate that moral truth in ways that moved people to action. Appeals to emotion are appeals to part of what makes us human. If we completely deafen ourselves to rhetoric - lobotomizing ourselves, in a sense - we must accept the immeasurable tragedy of losing the ability to appreciate so much of the human experience, from great texts to everyday passion.
But as AI makes rhetorical manipulation increasingly powerful and accessible, we may not have a choice. Just as Chiang’s story considers blocking facial expressions and vocal intonation to defend against enhanced videos, we may need to develop tools to help us separate the logical content of arguments from their emotional packaging.
The need is becoming increasingly urgent. In a world where anyone can sound like an expert and any argument can be made to seem compelling, our heuristics for detecting truth have crossed from unreliable to flat-out dangerous.
The lesson I take from Chiang isn’t about beauty or rhetoric specifically. At a deeper level, it’s about the importance of maintaining agency over our own perceptions. As AI makes it easier than ever to manipulate how we think and feel, the ability to choose what influences us - and how - may become one of our most precious freedoms.
The technology to enhance rhetoric already exists. The question is whether we’ll develop the wisdom to sometimes turn it off.
Do you like what I’m saying, or just how I’m saying it?