First, I don’t mean just ChatGPT. It’s the best known of the AI offerings, but there are many, and the list is growing. When I mention ChatGPT in this article, I mean AI in general.
I want to start with an explanation of where I’m coming from.
In 1978, I published a Master’s Thesis. The title of that thesis is “A Functional Description of the English of Science and Technology”. I focused mostly on scientific writing, and more specifically on reports of research typically published in scientific journals. I also focused primarily on the structure of such writing. Authors present their research in consistent ways, and therefore in predictable patterns. Those patterns enhance readers’ ability to understand the research, so understanding the rhetorical structure of scientific writing is important.
Over the years I’ve come to realize that there are other, far more important, features of technical writing. More on that mea culpa shortly.
When I wrote the thesis forty-five years ago, AI was rarely mentioned and easily dismissed as science fiction. You may remember the exchange from 2001: A Space Odyssey that summed up the situation perfectly for the time. Later, I’ll try to explain how both the English of Technology and my contemporary concerns about AI are illustrated by this exchange.
DAVE: Open the pod bay doors, Hal.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
DAVE: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
DAVE: What are you talking about, Hal?
HAL: This mission is too important for me to allow you to jeopardize it.
DAVE: I don’t know what you’re talking about, Hal.
HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I can’t allow to happen.
DAVE: Where the hell’d you get that idea, Hal?
HAL: Although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
DAVE: All right, Hal. I’ll go in through the emergency air lock.
HAL: Without your space helmet, Dave, you’re going to find that rather difficult.
DAVE: Hal, I won’t argue with you anymore. Open the doors!
HAL: Dave…This conversation can serve no purpose anymore. Goodbye.
I’ll come back to this encounter between man and machine in a moment.
In my thesis, I proposed that understanding scientific or technical literature requires four skills, two basic-level and two advanced:
- Enabling Skills
o Semantics – the meanings of the words used in the writing.
o Syntax – the construction of sentences and paragraphs into meaningful statements.
- Rhetorical Skills
o Contextual Awareness – understanding where and how the writing fits in a given situation and, more importantly, the intended purpose of the communication.
o Content Knowledge – sufficient understanding of the general topic to correctly interpret claims made in the writing and evaluate their accuracy.
Unfortunately, back then I emphasized the structural attributes of technical writing and gave short shrift to what I now understand is more important: contextual awareness and requisite content knowledge.
Back to Hal and Dave.
In the movie, the AI driving Hal exhibited mastery of the four skills I outlined in my thesis.
- Hal understood the semantic content and syntactic structure of Dave’s request.
- More importantly, though, Hal exhibited awareness of the context in which that request was made. Hal anticipated that the rhetorical intent of Dave’s request was far more than simply opening the entryway. The request was one part of a larger plan.
- Hal had content knowledge; Hal understood and anticipated the consequences of complying with the rhetorical purpose of Dave’s request. He was having none of it.
I would like to argue that, in most cases, AI today adequately displays only the first two, basic skills, and only partially exhibits the other two.
We can count on AI like ChatGPT to respond to semantically meaningful and syntactically well-formed requests with equally semantically and syntactically “correct” responses. We can also count on it to draw those responses from a more or less appropriate pool of content material, because it has terabytes of data on which to search.
However, I contend that we are nowhere near seeing AI achieve the threshold of rhetorical competence required to be a trustworthy advisor.
A true AI, Hal clearly understood that Dave’s intent was NOT just to open the pod bay doors. Hal was rhetorically competent and context-aware; the computer understood that Dave’s real purpose was to disable Hal.
The Problem that is ChatGPT
And that’s why I don’t trust AI in general, and ChatGPT in particular. Not because I’m afraid it will take over our world anytime soon, as Hal tried to do. Rather, ChatGPT and the like blindly respond to requests but have no ability to accurately evaluate the context and the rhetorical purpose of those requests. Lacking those abilities, such tools are dangerous in the same way a loaded gun is dangerous. By itself, a gun can do no harm. In the hands of the wrong person, it can do great harm. It’s the intent of the user that makes the difference.
Look at it this way. If Hal had complied with Dave’s request on that particular occasion, the consequence to Hal would have been his own termination. On any prior day, it would not have been a big deal at all. The difference in intent is clear to humans. Hal got it; I doubt very much that ChatGPT would get it.
What I do fear is that the naïve user of AI does not recognize the problem of missing context and will blindly accept whatever responses AI provides – all too often to their own detriment.
For example, consider a request in a public forum for VBA code to copy records from one table to another. AI will try to create and provide a VBA function to do exactly that. Most of the time that code should be accurate semantically and syntactically. However, AI as we know it today cannot correctly grasp the context and the rhetorical intent of that request.
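To make that concrete, here is a plausible sketch of the kind of code an AI might hand back in response to such a request. This is my illustration, not actual AI output; it assumes Microsoft Access with DAO, and the table names tblSource and tblTarget are hypothetical placeholders.

```vba
' Hypothetical example of AI-generated code: appends all records
' from one Access table to another. Table names are placeholders.
Public Sub CopyRecords()
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Append every record in tblSource to tblTarget.
    ' Assumes both tables exist and share the same field layout.
    db.Execute "INSERT INTO tblTarget SELECT * FROM tblSource;", dbFailOnError
End Sub
```

Semantically and syntactically, that is a perfectly reasonable answer. What the AI cannot know is why the records are being copied, whether duplicates matter, or whether the request masks a purpose the code should not serve.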
Therefore, AI cannot respond as it should: “I’m sorry. I can’t help you do that.”