Rewritten to the Mean
The Illusion of Understanding: The Risks of Relying on AI Summaries
One of the final tasks in writing the book was creating the back cover. It is an opportunity to summarize, engage, and be authentic. But how would generative AIs summarize the same content? What would they change, and what would survive the cut?
One of AI’s most touted features is its ability to summarize. However, generative AI, designed to predict the next word or phrase based on probabilities, often strips away the unique nuances of language and the authenticity of the author’s voice and perspective.
AIs can instantly generate automated summaries or key points from documents up to 100 pages or 100MB. In doing so, these tools attempt to replicate human executive function and higher-level thinking. That practice poses significant risks, especially in compliance and legal contexts, where precise language and nuanced meaning are paramount.
While generative AI excels as an editing and proofreading tool, offering semantic and pragmatic improvements beyond mere grammar and spell-checking, its summaries capture only a glimpse of the strategic knowledge, critical thinking, and insight in the original document. For example, AI might struggle to accurately capture the subtle nuances of a complex legal document or the impact of a supplier’s adherence to processes, potentially leading to misinterpretation and non-compliance.
An AI cannot be allowed to act as an intermediary between author and reader when it lacks the perspective, purpose, and intent of both parties. Compliance and legal documentation, in particular, is carefully drafted and vetted with the best information, expertise, and experience the document creator can bring, and it must remain the source of truth. Moving forward, it's crucial to recognize that audiences may treat AI as a tool to aid and speed comprehension. Authors must communicate from the outset why the reader should engage with the document and why it should be referred to in the future.
A generative AI prioritizes the most statistically probable wording, often neglecting the deliberate, purposeful language choices of human authors. As a result, AI-generated summaries may lack the depth, nuance, and authenticity that human-written summaries can provide. As AI becomes more prevalent, it's crucial to remember that it is a tool, not a replacement for human judgment.
What Would Elmo Think of Your Documentation?
Perhaps it’s time to shift the paradigm. Instead of relying on readers to synthesize complex documentation, provide clear summaries, engaging narratives, and diverse access points. For too long, corporate documentation has been a bureaucratic endeavor, prioritizing exhaustive coverage over compelling content. As a source of truth, documentation should not only inform but also engage.
It’s crucial to understand how AI might interpret your content. If an AI misses a nuance, so might your audience. By feeding your content to an AI, you can gain valuable insight into how it is perceived and identify areas for improvement. You can also generate creative and unexpected outputs. For instance, we’ve used AI to produce satirical reviews of our book, penned in the styles of famous authors and characters. Below are a few examples:
Assisting Intelligence launches December 9th.