One of the things that can drive you crazy about the continuous improvement beast is the definition (and re-definition) of terms, which can create a jabberwocky.
Many experienced professionals know this well:
Definitions Abound, Meaning Eludes Us
“…definitions don’t define, names don’t identify, examples aren’t exemplary, and an organization’s processes are essentially unknowns (but, thank goodness, not unknowable).” – Fred Nickols, 2016
As we apply multiple practice frameworks, whether that be ITIL for IT service management, COBIT for governance, or any of the many other frameworks tailored to different parts of the enterprise and different use cases, the meaning of a word can shift and change with the perspective.
Even for seasoned professionals, these debates about terms can be maddening, and it is worth coming to agreement on definitions. More importantly, it should not be about one definition 'winning' over another; it is far more important to accept a definition for the purposes of discussion and understanding.
This often involves unlearning what we've traditionally understood a definition to be. The developers of the USM method, for example, have invested considerable effort in defining their terms, which are collected in a glossary/dictionary on the USM Wiki.
To understand someone requires us to know what they mean when they use a word, and that may (or may not) align with what we've traditionally understood that word to mean.
Reaching an understanding can be a painstaking process, but it’s a road worth traveling.

The redefinition of terms can indeed contribute to the proliferation of what some might call “bullshit,” as discussed in Michael Townsend Hicks, James Humphries, and Joe Slater’s paper “ChatGPT is bullshit.” This phenomenon occurs when language becomes increasingly divorced from its original meaning, leading to confusion and miscommunication.
This blog post, "The Terminology Jabberwocky", highlights how the misuse and constant evolution of terminology can create a sort of linguistic chaos. This redefinition can obscure the true nature of concepts, making it easier to manipulate discussions and harder to hold meaningful, fact-based conversations. The critique underscores the idea that when terms are constantly redefined or used inaccurately, they lose their power to convey precise, clear information, increasing the potential for "bullshit" to flourish.
A glossary or dictionary is indispensable when analyzing any text to ensure a clear and accurate understanding of the author’s intended meaning. As terms evolve or are redefined, having a reliable reference can help decode the specific context in which they are used. This is particularly crucial in fields where precise terminology is essential for effective communication.
For instance, when dealing with the critiques presented in Hicks, Humphries, and Slater’s paper “ChatGPT is bullshit,” having a glossary of terms they redefine or use uniquely can help readers grasp the nuances of their arguments. Similarly, the blog post “The Terminology Jabberwocky” underscores the chaos that arises from shifting definitions, further emphasizing the need for consistent terminology.
Large Language Models (LLMs) like ChatGPT can reference these glossaries or dictionaries in their responses to provide clarity and context, effectively explaining terms that might otherwise be misinterpreted as “bullshit.” By doing so, LLMs can help mitigate misunderstandings and enhance the quality of discourse, ensuring that conversations remain anchored in precise and mutually understood language.
…just more bullshit? 🙂