Jeff Dodge is the General Manager of BlueMetal, an interactive design and technology architecture firm that matches the most experienced consultants in the industry to the most challenging business and technical problems. This is an interview I did with Jeff about chatbots.
Lee: Chatbots have the potential to be a frictionless kind of customer engagement, but will they ever be smart enough to satisfy us?
Jeff: I suppose you can look at that as a way of setting expectations. I prefer to call them narrow artificial intelligence, and in fact, even our branding of that type of experience has evolved into “conversational agent” now. I think “artificial intelligence” almost sets the expectation too high, but since Mark Zuckerberg featured chatbots at the F8 conference last year, there’s been a ton of buzz and attention around this. And to me, the term “chatbot” has taken on a sort of commercial or gimmicky persona. So I prefer more of an enterprise approach, which I think “narrow artificial intelligence” resonates with.
Lee: How do you think brands could be or should be using chatbots? Are there some examples of chatbot successes?
Jeff: Yes, absolutely. There’s a great deal of them. “Chatbot” is a term that basically means communicating with a computer via your own natural language, right? Whether it be English or Spanish or Chinese or whatever language you speak. Think about Alexa. Think about Siri. These are actually chatbots. Nearly every one of us is interacting with chatbots on a daily basis.
Lee: Are we correct in thinking of chatbots as something that we type and talk to, or something that we talk and interact with? Or both?
Jeff: I think if you had three different knowledgeable guests on here, you’d get three different answers. My answer would be both. I use “chatbot” to mean a computer communicating with a human via natural language. In fact, we’ve built a number of these, so I like to ground conversations around chatbots, or narrow artificial intelligence, pick your poison, by helping people identify with an experience they’ve already had with Siri or Alexa. That being said, there are a great deal of good use cases that my company and other companies have already brought to market in the realm of enterprise chatbots or artificial intelligence systems. Those are basically brand ambassadors, if you will. For one example, we worked with a medical device manufacturer that wanted to bring a chatbot to market to allow chronic disease sufferers to interact with a conversational agent to manage their chronic disease and get timely advice, but also to have a continuum of conversation with their medical practitioners, their clinical team, and with the brand, the medical device manufacturer itself.
So, while you think about a large commercial application, being the Siris and Alexas of the world, which everyone can identify with and understand conceptually, there are really great examples of narrower, more purpose-filled chatbots that are in the market now that my company helps bring to market.
Lee: I wanted to dig in a little bit to the medical application you just mentioned, because there’s probably a really big future in the emotional side, the emotional support that narrow artificial intelligence can offer, maybe with people who are ill or elderly or alone. How convincing is it? Do people really buy into the idea? Do they share, do they become emotionally engaged enough to render the narrow artificial intelligence useful?
Jeff: I would caution you, and those who want to bring chatbots to market, not to try to convince or fool users into thinking they’re passing the Turing test, that they’re actually speaking to a human. That is not the goal. Certainly not in the next decade or so. The goal is to provide good, useful information and support, as you put it, in a very easily accessible, universally interactive way. There’s a lot of research out there, in Japan in particular and in a lot of other markets, with robots, particularly with those needing special-needs services. Just the presence of those robots has been shown to actually help with patient outcomes.
And, frankly, it’s something we can all identify with, the vast majority of us. I talk to my dog, Lee. I mean, my dog looks at me, and perhaps it understands what I’m saying. Most likely it doesn’t. But having something there, right, when there might not be something there, there’s undeniably some value in that, I would say.
Lee: Animals have maybe a 150-word vocabulary, and there’s, of course, a soul there. So this is a very interesting new area, because the machine doesn’t have a soul that we identify with yet, but it might. Let’s dig into the history of this kind of narrow artificial intelligence for a moment. Chatbots came to the fore, as you wrote to me in an email, with Zuckerberg’s mention of them at the dev conference in 2016. But are they really that new? Can we really say that they’ve just been around a few years?
Jeff: You know, I think that’s part of why I’m somewhat trying to distance myself from the term “chatbot.” I think it sounds, again, a little bit cheap, frankly. And to answer your question directly, no, chatbots have been around for quite a long time. The reality is chatbots are just now getting to the point where the vast majority of users would find them acceptable, right, actually…[inaudible 00:06:05] enjoyable, from the perspective of being able to self-serve within, again, those narrow confines of context. Earlier attempts to implement a chatbot were very narrow. Frankly, if you look at a blinking cursor, that, in essence, is a chatbot, right, back to the DOS days. The computer is prompting you to write some text to it, to chat with it. However, the breadth of what it can understand boils down to whether you’re a developer or not, to some extent. This is democratizing the ability of all humans to interact with computers on their own terms.
Lee: Right. This notion of interaction and democratizing, that’s really interesting, which brings me to this whole natural language processing idea, NLP. Could you explain a little bit what that is for people who might not know, and explain how natural language processing might help this kind of artificial intelligence do a better job?
Jeff: Absolutely. So, look at the spectrum and the evolution of the way that humans interact with business machines, right, the genesis of IBM, International Business Machines. You went from vacuum tubes, to transistors, to machine code, to object-oriented languages. And now, and it sounds a little bit grandiose, so I say this with a grain of salt, this is that next paradigm shift, where people can interact through, as you said, NLP, an acronym for natural language processing. In essence, it means a computer is processing the language that you speak to it in your own tongue, whatever language that might be, and communicating back with you in your own tongue. That’s where I come up with the idea of the democratization of that interaction with computers. Natural language processing has been a science for quite some time, and there have been really great strides in the areas of voice dictation and translation. It’s now getting to the point where computers can begin to actually understand the intents and the entities within the comments and statements that humans make to them.
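To make the intents-and-entities idea concrete, here is a deliberately toy, rule-based sketch of what an NLP service does: map a free-form utterance to an “intent” plus any “entities” it mentions. The intent names, keywords, and date pattern are my own illustrative assumptions; real cognitive services use trained statistical models rather than keyword rules.

```python
import re

# Hypothetical intents and trigger keywords (illustrative only).
INTENT_KEYWORDS = {
    "check_order": ["order", "shipment", "tracking"],
    "book_appointment": ["appointment", "schedule", "book"],
    "greet": ["hello", "hi there", "good morning"],
}

# A single toy entity type: relative or weekday dates.
DATE_PATTERN = re.compile(
    r"\b(today|tomorrow|monday|tuesday|wednesday|thursday|friday)\b"
)

def parse_utterance(text: str) -> dict:
    """Return the first matching intent and any date entities found."""
    lowered = text.lower()
    intent = "unknown"
    for name, keywords in INTENT_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            intent = name
            break
    return {"intent": intent, "entities": DATE_PATTERN.findall(lowered)}

print(parse_utterance("Can you book me an appointment for tomorrow?"))
# → {'intent': 'book_appointment', 'entities': ['tomorrow']}
```

The point of the sketch is the output shape, intent plus entities, which is roughly what a production NLP service hands back to the bot logic that decides how to respond.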
Lee: It comes down to this notion that the way we’ve been teaching computers thus far has been giving them a series of rules or instructions that they then carry out. But if you put NLP into the equation, you might get computers that can draw conclusions, if I can go that far. It’s still a logic gate, of course, but a much more complex one, one that allows the computer to draw a conclusion we might not have assumed it could get to. Is that making sense? Am I on track there?
Jeff: I think you’re on the right track. The reality is that it takes a toolbox, right? NLP is absolutely one of the tools, but it does require a toolbox, which is generally referred to in tech circles as cognitive services. We at BlueMetal use a large array of different tools in different contexts to find the right tool or toolbox for the job. You have CEOs like Satya Nadella at Microsoft saying that AI is the future of the company, Mark Zuckerberg doing the same, and you’ve heard the same from Larry Page at Google and, obviously, Jeff Bezos at Amazon with Alexa and the AWS services. All of the major technology companies are moving in this direction.
Lee: Does this…the progress of natural language processing, the progress of narrow artificial intelligence, it really has a lot to do with the way we are teaching the computers, right, the way we teach the machines and how the machines learn. Is NLP an accelerant in that, or is it just a way that we’re gonna interface with machines?
Jeff: So, I would absolutely argue that it is an accelerant, with folks like Satya Nadella, the CEO of Microsoft, saying that AI is the future of the company, and Larry Page, Jeff Bezos, and Mark Zuckerberg, all the major technology companies, saying that artificial intelligence is the future. NLP is the piece that, quite frankly, allows these large tech firms to gather from all of us the data that teaches their AIs as they deploy these platforms, or, as they refer to them, cognitive services. These cognitive services, which obviously speak of cognition or intelligence, are being deployed in very useful ways, and all of us are interacting with them all the time. Every single interaction, through NLP and otherwise, is informing the machine, or the cloud behind that cognitive service, making it better with every interaction, every minute, every single user.
Lee: Right, and a lot faster than people, maybe. I mean, there’s a strange argument to be made that evolution for humans has been pretty slow stuff until now. When you put machine learning into the equation, whatever evolution people, and indeed machines, are going through seems to be happening a lot faster than the way it used to.
Jeff: Yeah, it’s a very good point, and I’m glad you brought in machine learning. Again, as I said earlier in the show, you bring on three different guests, you get three different answers, and none of them are wrong. When you think about narrow AI, or the grand idea of AI, general AI, all of these pieces factor into some manifestation of artificial intelligence. You’ve got the idea of NLP, which is the way to communicate with a computer. But machine learning is a separate, incredibly important piece that allows the computer to learn from those interactions. One of the biggest developments recently was Google’s DeepMind program, AlphaGo, which actually beat the greatest Go champions in the world. And that, you know, has been pretty well publicized. There are literally more possible moves in Go than, I believe it’s atoms in the universe, or something ridiculous like that. And what that shows is that the computer did not out-compute the player. The computer did not figure out every single possible move and how to get to the best result and win. It actually reasoned about what the player was doing and what the board looked like, very situational awareness, and that’s how it won.
Lee: Yeah, that’s pretty awesome stuff. It’s getting away from that sort of brute force approach where you just hammer away at it into something more cognitive. We talked about medicine and applications in medicine, and you mentioned briefly brand ambassador, this whole notion of brand ambassador. So how could this kind of intelligence be a brand ambassador?
Jeff: Well, it’s funny. Think about customer service. I imagine you’ve called customer service more than once in the last month. When you call customer service, you get a human, and you get all the good and all the bad that come with that human. That human might have had a fight with their significant other that morning. That human might not have had their cup of coffee yet. You get a very non-uniform experience, a unique experience every time. A chatbot, now, can be designed to give you a slightly variable experience, if that’s what the business analysts and user experience folks who designed that bot decide is best from an experience perspective. But for better or worse, there is a plan behind that conversation. Whether you call it a conversational agent or a chatbot, whatever the branding might be, that chatbot is gonna deliver the brand how you want it delivered, every time.
Now, I do wanna be cognizant of what I said a moment ago about machine learning allowing the bot to learn and converse in a different manner. But that’s a controllable thing. You can either allow it to do that or not, and you can put confines on it. I like to refer back to my good friend Isaac Asimov and his Three Laws of Robotics, articulated in “I, Robot” and many of the other books he wrote. There can be laws with which we constrain these artificial intelligences.
Lee: Let’s talk about some forward-looking companies that are probably gonna need some kind of strategy around all of these. One very simple, dirt simple strategy really is being honest with people and letting them know that you’re dealing with some kind of AI. Now, I’ve tried that. I used to have an AI setting appointments for me, and I tried not being honest, and people got really pissed off with me. “What do you mean this is a…I’m talking to a computer? How dare you?” And then I tried being honest, and people said, “Oh, I hate this. This is a computer.” And some people loved it. So, what I’m getting to is a question, a simple question is, are forward-looking companies going to need a strategy around all of these, and when, and what might that be?
Jeff: If you look retrospectively at the arc of technology implementation and adoption, it goes from “why would I record a show when I can watch it on TV,” to “why would I bother to use voicemail? They’ll call me back later,” or call waiting, all the way up to, more recently, mobile: “I have a website. Why would I need a mobile app? What would it do?” And now almost every enterprise, every business, and many individuals have apps out there. Well, as has been publicized lately, the adoption of apps is dramatically dropping off. People typically download few if any new apps in any given month, and they typically use only two to three apps, with Facebook accounting for the vast majority of that.
Well, guess what? In Facebook Messenger, in Skype, in Slack, in all these other channels, your users are already there. You can manifest a conversational agent or chatbot within one of these channels, and all of the big providers support this: you build once in the cloud and deploy through all of these social channels. You don’t need to get past that huge friction of getting a user to download a new app. You just meet users where they already are, where they want to be, on their terms.
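The “build once, deploy to every channel” idea can be sketched in a few lines: the conversational logic lives in one function, and thin adapters translate each channel’s message format in and out. The payload shapes and the store-hours logic below are hypothetical, not any vendor’s actual API; frameworks like the Microsoft Bot Framework do this translation for you.

```python
def bot_logic(user_text: str) -> str:
    """Single source of truth for the conversation, shared by every channel."""
    if "hours" in user_text.lower():
        return "We are open 9am-5pm, Monday through Friday."
    return "Sorry, I can only help with store hours right now."

def handle_messenger(event: dict) -> dict:
    # Messenger-style payload (illustrative shape only).
    reply = bot_logic(event["message"]["text"])
    return {"recipient": event["sender"], "message": {"text": reply}}

def handle_slack(event: dict) -> dict:
    # Slack-style payload (illustrative shape only).
    reply = bot_logic(event["text"])
    return {"channel": event["channel"], "text": reply}

msg = handle_slack({"channel": "#support", "text": "What are your hours?"})
print(msg["text"])  # → "We are open 9am-5pm, Monday through Friday."
```

Adding a new channel means adding one small adapter; the bot’s behavior, and the brand voice behind it, stays defined in exactly one place.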
So I would argue that as an enterprise, you absolutely need to have an NLP or chatbot strategy now. Like, literally, right now. The companies that are doing it now are getting way out in front. One quick tangent I wanna take you on, Lee: there are some large firms out there doing a great job implementing some of these in an enterprise manner. Some of them are doing it through very bespoke, legacy-type implementations, with IP, intellectual property, built over many years, and they are best-of-breed right now. But big tech firms like Microsoft, Facebook, Google, and Amazon are pumping so much money into cognitive services and NLP that their R&D spending dwarfs the revenue of these other companies trying to go it on their own. So if you just look at the billions being spent on maturing these services, the enterprise needs to figure out a way to capitalize and build on top of that investment to bring its own brand to life, interact with its users, and connect where the users already are.
Lee: Yeah, that makes a lot of sense. I can readily see that we, as users, won’t even really be aware. It’ll kind of creep in. It’s a bit like the technology march that you were describing: why use an answering machine when you can use voicemail? Why use voicemail when you can use FaceTime? Why use FaceTime? It goes on and on. And there is a sense that we won’t be aware of this advance. But it is also strange to think that a conversation might not need a human on the other end of it to be satisfying, and to result in whatever we want, right, result in a sale or a good interaction. Are we really close to that? Is that kind of satisfaction here now, or are we still in the novelty stage?
Jeff: I think we are at the satisfaction stage. And again, that’s within narrow use cases. So, you mentioned, do you reveal at the time that the interaction is being handled by a chatbot? I think the answer to that is unequivocally yes, you do reveal that. And there’s literally an art to user experience, one that has been evolving ever since the advent of web and mobile, and it applies to NLP interactions as well. You need to be very clear about what can be done with the bot and what can’t. Then you do end up having a very pleasant and positive experience, as long as you set expectations correctly.
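That expectation-setting can be baked into the bot’s opening turn: disclose up front that it is automated, and state exactly what it can do. The capability list and wording here are hypothetical, and the optional variability echoes the earlier point that any variation in the bot’s voice is a deliberate, controllable design choice.

```python
import random

# Hypothetical capability list for an illustrative support bot.
CAPABILITIES = ["check an order status", "reset your password"]

def greeting(allow_variation: bool = False) -> str:
    """Opening turn: disclose the bot and set clear expectations."""
    openers = ["Hi!", "Hello!"]
    # Variation is opt-in: by default the bot greets identically every time.
    opener = random.choice(openers) if allow_variation else openers[0]
    return (
        f"{opener} I'm an automated assistant, not a human. "
        f"I can help you {' or '.join(CAPABILITIES)}. "
        "For anything else, I'll connect you with a person."
    )

print(greeting())
# → Hi! I'm an automated assistant, not a human. I can help you check an
#   order status or reset your password. For anything else, I'll connect
#   you with a person.
```

Only the opener ever varies; the disclosure and the stated limits are delivered the same way every time, which is exactly the planned, on-brand consistency Jeff describes.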
In fact, look at China with WeChat. There are something on the order of 850-plus million users on WeChat in China right now. And McKinsey just released a report in April saying that of those 850-plus million users, 31% have initiated purchases on WeChat, and that is 100% growth from the year prior. I mean, this is absolutely coming, and it’s already happening in many markets, including the largest market in the world, China.
Lee: Jeff, thanks so much for speaking with me today.
Jeff: Thank you, Lee. It was a blast, and I hope this is informative.