What Does it Mean to be in Dialog With a Silicon Conversation Partner?
Dialog and stories are core components of human organization and customer value...
Dialog: a conversation between two or more people. I believe this definition needs to be updated to, Dialog: a conversation among two or more conversation partners, human or otherwise. In saying this, I don't believe that machines are conscious, nor do they "think" or understand the way that people do. In the extreme, my friend David Reed points out they are simply a "stochastic parrot." However, at the current level of complexity and multi-dimensional semantics, and even at this early stage of Generative AI (GenAI), they are conversation partners, observers, summarizers, and useful co-pilots for many things, including but not limited to customer service, sales support, image design, and exploring potential new materials.
Stephen Wolfram has pointed out that Socrates helped discover logic by looking at the patterns of thought and reason in dialog. Wolfram also suggests that the brute-force algorithms of the large language models may be discovering a set of patterns in the microstructure of language for which we may, over time, have a more theoretical basis. (Check out Lex Fridman's podcasts with Wolfram for more.)
As a frequent user of LLMs, I am struck by how different my interface with the machine is as I "discuss" what I'm after. Strangely, I find myself using "please" before my requests — not something I use when I'm "talking" with Excel or Google. I spend time thinking about how it works so I can create a prompt to get the best out of the conversation partner. Also, I find interacting with these LLMs to be motivating when I'm doing creative work, because it is easier to edit and critique than it is to start with a blank sheet of paper.
I think that as we welcome these new silicon-based conversation partners, it will enable a gradual, but eventually radical, rethinking of organizational structure, processes, and policies. Almost thirty years ago I wrote a paper with my dear friend Benn Konsynski on "Cognitive Reapportionment," which explores the implications of thinking of organizations as a set of human and machine cognitors with the ability to dynamically allocate decision rights based on context and competence. But for this screed, I'm going to stick with the idea of adding "conversation" to products and services.
The Dog Can Talk…
The $1,600 (closer to $3,000+ fully delivered and configured) amazingly agile robot dog can now converse with you using ChatGPT. If you haven't seen this video, it's worth a view. With aging populations all over the world, the adoption of robot dogs is increasing — a sign of things to come. In time, many, many things and experiences will have a conversation robot in the mix. Today's Alexa, Siri, and Hey Google are clumsy and not really conversation partners. Those devices will graduate to superior interactions.
I believe there will be a new set of design principles that we discover for conversation. Today’s voice robots and voice trees are to aural interaction what teletypes were to computer interface. There’s a whole world of new designs to make these conversations more productive, less frustrating, more functional and even emotionally satisfying.
Let me posit at least three design principles: Context, Continuity, and Conversation. By context I mean that the conversation partner gathers and keeps as much context as possible — just as a human being does. Who among us has not had to repeat and repeat basic information to get access to functions on the phone when dealing with today's rudimentary voice recognition/generation? ChatGPT and the other LLMs are very good at "listening" to complex contexts, as prompts can be long and intricate, and then they "remember" the context across the chat. This is a huge deal and our first design principle.
The second dialog design principle is continuity. One of the things that is beautiful about human interaction is that it is continuous across contexts: in the car, in the restaurant, over the phone, at work the next day. Continuity of context is one of the most important parts of dialog. When it is missing or people “forget” too easily it can indicate lack of caring or even a neurological problem. Continuity is essential.
Conversation is defined as an exchange of news and/or ideas between two or more people. In terms of Generative AI, the more it can add relevant news or ideas into the mix, the more helpful it can be in providing value to the client. Alexa is trying this to a minor extent by providing me with suggestions on books I might like. But it fails my tests for context and continuity: it does not have a way to recognize the context that I don't want suggestions, and it does not have the continuity to remember that preference over time.
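The first two principles can be made concrete in a few lines of code. Below is a minimal, hypothetical Python sketch — not any vendor's actual API — showing context as an accumulating message history and continuity as that history persisting across sessions. The `ConversationPartner` class and its stubbed reply are my own illustration; a real system would hand the whole history to an LLM on every turn.

```python
import json
from pathlib import Path


class ConversationPartner:
    """Illustrative sketch of the Context and Continuity principles.

    Context: every turn is appended to a running history, so each
    reply can be shaped by everything said so far in the session.
    Continuity: the history is persisted to disk, so a new session
    picks up exactly where the last one left off.
    """

    def __init__(self, store: Path):
        self.store = store
        # Continuity: reload any prior conversation from disk.
        if store.exists():
            self.history = json.loads(store.read_text())
        else:
            self.history = []

    def say(self, utterance: str) -> str:
        # Context: the full history, not just this turn, informs the reply.
        self.history.append({"role": "user", "content": utterance})
        # Stub reply; a real partner would call an LLM with self.history.
        reply = f"(reply informed by {len(self.history)} turns of context)"
        self.history.append({"role": "assistant", "content": reply})
        self.store.write_text(json.dumps(self.history))
        return reply
```

Note the design choice: because the stored history is the single source of truth, "remembering" across the car, the phone, and the next workday reduces to reading the same file (or database record) from each device.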
Will it be Practical?
On July 18, 2023, Qualcomm announced it is working with Meta to bring Llama 2-based AI implementations to smartphone chips starting in 2024. Such an efficient, on-device implementation of a powerful GenAI model will enable further decentralization and the addition of conversational capabilities to all types of devices, from cars to forklifts to your garage.
The Question for Business
To find the potential value of conversational improvement, one can look at one's business from the odd angle of "Where do conversations happen in my business processes?" Marketing and sales are clear places. Customer service. Hiring. Many compliance tasks.
Have you looked at your products and services and asked, "Where would a conversation enhance the value of my product to my end customer?" Clearly any product with a complex instruction manual would benefit. Value-added content may help. Bundling recipes with grills will be easy. A self-aware hot tub can provide advice on improving its water. All current functions of the simple voice robots like Siri/Apple CarPlay will be vastly enhanced. How many industrial products, from forklifts to artificial knee replacements, will benefit from a conversation robot enhancing their value?
In short, we are just beginning the dialog with our silicon friends, and there’s a whole new design capability to be created for this new domain.
John, we ask "please" in a chat with ChatGPT because some research suggests that the models respond more effectively that way. Their training on language suggests that "please" generates better responses. Other researchers disagree. ChatGPT, when asked (once), said "please" makes no difference. On the other hand, if you write, "Go slowly and do your best work," it seems to have an effect. One query: does the system, trained on data, get consistently better when you treat it nicely, or does that make it less likely to hit on the best response?