ChatGPT as Revealed Text

By Michael Corn. We have all seen and been amazed by the ChatGPT hype cycle. From worship to cynicism, and everywhere in between, it has inspired endless articles and market adjustments. After reading the recent University of California article, “Is ChatGPT a threat to education?” I wanted to offer a slightly different perspective on the technology and raise what I feel is a more fundamental question: what role will it and its siblings play in how we digest digital information streams?

My concerns about ChatGPT et al. aren’t that it’s replacing us, or even that it will radically change how we work, instruct, or learn any time soon. It’s fundamentally a language model, not an artificial intelligence. While it is capable of some fascinating output (ask it to summarize Tom Sawyer in the voice of different authors), it remains exactly what its name says: a “Generative Pre-trained Transformer.” It generates new language by transforming the material it was trained on using subtle statistical models. It awes us with the sophistication and naturalness of its language, which we mistakenly use as a proxy for intention or even thought. Thus, in many ways it is merely an updated Eugene Goostman.
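To make that point concrete, here is a deliberately crude sketch of statistical text generation: a bigram model that learns which word tends to follow which, then strings likely words together. This toy is vastly simpler than the transformer architecture ChatGPT actually uses (the corpus, function names, and approach here are illustrative assumptions, not anything from OpenAI), but it illustrates the core idea that fluent-seeming output can come from statistics over training text, with no understanding behind it.

```python
import random
from collections import defaultdict

# A tiny "training corpus"; a real model is trained on vastly more text.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Record which words follow which: a crude statistical model of the text.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length, seed=0):
    """Produce up to `length` words by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:  # dead end: the last word never had a successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Every pair of adjacent words in the output is a pair that occurred in the training text, so the result reads as grammatical English, yet the program has no notion of cats, mats, or meaning.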

Since we’re technologists, it’s not surprising we’re reacting to ChatGPT the way we are (i.e., here’s a new hammer; go find some nails). Essentially, we’re wishing it were the truly intelligent agent we can insert into any interface or process to gain an infinitely scalable and flexible intermediary. Free staffing, to put it crudely. Over time, I have no doubt this will begin to pay dividends: the future lies in the hands of our robot overlords, and we will race to install them.

But the more interesting question is what ChatGPT and its ilk mean for the ecosystem of information and disinformation, and for how people discover things and facts.

We know people are terrible at discerning fact from fiction – indeed, while a million librarians dancing on the heads of a million pins have been talking about this since students started using the Internet instead of libraries and librarians, the problem has only gotten worse. Witness our current political environment. No one has learned to distinguish responsible sources from trolls. We don’t evaluate what we read against some rigorous rubric; rather, we think, “sounds like something I already believe,” and thus confirmation bias rules our lives. (I believe this is true regardless of who you are, what your positions are, or how educated you are. The flood of information is simply too great for us to be as discerning as we like to believe we are.)

Enter the chatbots: by providing elegantly phrased search results, now with the imprimatur of “AI” and the illusory facility of natural language, their output carries even more of a ‘voice of authority’ than traditional searches. It’s tempting to frame this in the historical context of how the public receives information. As the tools for sifting through online information have become increasingly sophisticated, they have also become more subtle. The Internet may have liberated all knowledge, but managing that flood of data has laid bare the mechanisms that control and shape what we see within the deluge.

While the arc of the search engine is growing longer, it bends towards obscurity. From the early days of keyword-based web searches to the highly honed algorithms of Facebook, Twitter, and Google, not only has it become more difficult to understand how these systems curate data, but that knowledge has become a closely guarded trade secret. ChatGPT takes this opacity to an even greater level. Indeed, our inability to truly understand how neural network-based models operate is an epistemological challenge to their scientific rigor.

But it is the active voice in which our questions are answered that is both seductive and troubling. The linguistic fluency of ChatGPT changes our engagement posture from ‘reviewing search results’ to ‘being lectured’, from active to passive. No longer are you asked to select and judge from a variety of sources; instead you have a highly fluent and polished equivalent of “people are saying.” The distillation of data into a voice is what makes ChatGPT so powerful. ChatGPT offers a conclusion for which you must then seek the evidence: revealed knowledge in its most biblical form.

I also wonder how this voice of authority problem (lack of attribution, summarization without understanding) is coupled to broader questions of DEI and gender relationships (1). If mansplaining disproportionately disadvantages women in their careers, does surrendering to ChatGPT’s “Well, actually…” presentation of facts, accurate or not, reinforce counterproductive stereotypes of behavior? As allies of DEI, should we not wrestle with this question before we sprinkle ChatGPT throughout our ecosystem?

There is, however, an irony to all the hand-wringing about how technology like this erodes the skills of the mind. At times, the Internet has been accused of ruining our memory, intelligence, ability to focus, and possibly even our relationships. But everything from the mundane (spelling and grammar) to the surprising (the impact of Google Maps) to knowledge publishing and discovery (e.g., Google Scholar) is better thanks to modern information management tools. I have no doubt that as machine learning advances, the utility of programs like ChatGPT will be significant, and like all technology its use will reflect both our best and our worst impulses.

In sum, my question to our educators is not, ‘how will ChatGPT impact teaching and learning’, but rather, ‘what can we do to better equip our students to recognize ChatGPT for what it is?’ I suspect the long tail of ChatGPT’s impact will not be the mechanics of classwork and interacting with courses, but in the pedagogy of critical thinking. The challenge we as technologists should take up is to ask ourselves how we, as purveyors of this tech, can do so responsibly, sensitive to the forces of disinformation, surveillance, and manipulation running amok today.

(1) With thanks to my colleagues Carolyn Ellis and David Hutches for valued conversation and insight on these issues.

About the author

Michael Corn
Chief Information Security Officer
UC San Diego