The truth about AI, NLP and ML – human involvement is mandatory
This is a viewpoint from Mike Slone, chief experience officer for Travelaer.
The technology revolution today – led by artificial intelligence (AI), natural language processing (NLP) and machine learning (ML) – is like an awkward adolescent trying to make its way in the world. Many of today’s leading-edge technologies will mature and become part of the business mainstream, while others will struggle to cope.
The analogy with adolescence works, to an extent, but one related area of concern is the historical tendency to apply generic, traditional human terms to very specific machine advances. This can confuse matters and skew expectations.
For example, a chatbot is not really able to “chat”, except in a very limited sense, while “deep learning” only means that an artificial neural network has several hidden layers – nothing nearly as profound as the name suggests. It is a trend that is coming back to bite us.
In addition, people read the AI headlines in the tech press, or browse some fancy machine learning brochure they picked up at a trade show, and walk away thinking that the technology industry is pushing these solutions so hard and so fast that we actually need to be concerned about robots exterminating the human race. Give me a break. If bots with NLP and machine learning were really that good, why aren’t we all using them on a daily basis already? Why isn’t one writing this story for me?
Keeping perspective
First, let’s address language and history. The words “intelligence”, “language” and “learning” are what the late AI pioneer Marvin Minsky called “suitcase-like jumbles of different ideas”. These jumbles mean different things to different people, and often carry deep historical context that muddles matters further. By mixing traditional and technical meanings, these terms have become a source of confusion when applied to modern machines.
For example, this is an actual conversation within a demonstration of IBM’s Watson Assistant (formerly Watson Conversation), which is “trained on a specific set of car capabilities”:
Watson: Hi. It looks like a nice drive today. What would you like me to do?
Me: Where is the nearest restaurant?
W: Of course. Do you have a specific cuisine in mind?
M: No
W: I see a few restaurants nearby which you can select from, serving tacos, burgers, seafood and pasta. Which cuisine would you like?
M: Turn on the lights
W: [no reply]
M: Pasta
W: Super! I’ve found 5 locations for you. Which one would you like to drive to?
M: The first one
W: Sure thing! The first restaurant gets great reviews. What day, or date, would you like to go to the restaurant?
M: Any time
W: What day, or date, would you like to go to the restaurant?
“Turn on the lights” is one of the sample commands in the demo and, in a perfect world, should have returned an obedient “I’ll turn on the lights for you” response. Instead, the request was ignored. This is not a flaw in IBM Watson Assistant; it is simply the way most AI assistants work today. The end product doesn’t come close to mirroring the hype.
After the conversation blocker, Watson and I resumed our “happy conversational path” until we found ourselves at another critical point – the timing of my visit to the restaurant. Watson would fully understand the term “now”, which translates to “this exact second”. However, “any time” is not a recognized term; it takes Watson off-script, and as a result the chatbot tirelessly tries to extract an actual time-based answer from me.
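To make that behaviour concrete, below is a minimal sketch of the slot-filling pattern most dialogue engines follow. It is an illustrative toy, not IBM Watson Assistant’s actual logic; the function name, vocabulary and phrases are hypothetical.

```python
# Toy slot-filling loop illustrating the two failure modes described above.
# Hypothetical example only -- not IBM Watson Assistant's implementation.

KNOWN_TIMES = {"now", "today", "tonight", "tomorrow", "monday"}       # terms the time slot accepts
OTHER_INTENT_PHRASES = {"turn on the lights", "turn off the lights"}  # handled only outside this branch


def ask_for_time(get_user_input):
    """Keep asking until the user supplies a recognised time expression."""
    prompt = "What day, or date, would you like to go to the restaurant?"
    while True:
        answer = get_user_input(prompt).strip().lower()
        if answer in KNOWN_TIMES:
            return answer  # happy path: slot filled, dialogue moves on
        if answer in OTHER_INTENT_PHRASES:
            # An out-of-context command arrives while a slot is pending;
            # this branch has no handler for it, so the request is silently
            # dropped -- the "[no reply]" moment in the demo.
            continue
        # "any time" lands here: it is not in the slot's vocabulary, so the
        # loop simply repeats the same question, apparently tirelessly.


# Scripted run: "any time" is rejected, then "tonight" fills the slot.
scripted = iter(["any time", "tonight"])
print(ask_for_time(lambda prompt: next(scripted)))  # prints: tonight
```

Both failure modes fall out of the same design: anything outside the slot’s small vocabulary is either dropped or met with the same question again.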
How much artificial intelligence, natural language processing or machine learning is actually involved in this conversational process? It is difficult to determine from such a short dialogue, but consider that ELIZA, a chat program created in 1964 at MIT’s Artificial Intelligence Laboratory by computer scientist Joseph Weizenbaum, could have produced a similar outcome.
The difference between ELIZA and Watson is that the huge amount of hand-written coding in ELIZA’s case has been replaced today by a few training sentences used as inputs to train a neural network. In the end, you get the same ‘intelligence’ level, but the work required to reach it is orders of magnitude simpler. That’s a positive trend.
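As a rough illustration of that shift, the sketch below contrasts an ELIZA-style hand-written rule with a classifier trained on a handful of example sentences. It assumes scikit-learn as a stand-in toolkit, uses a simple linear model in place of a neural network for brevity, and all intents and training phrases are hypothetical.

```python
# Contrast sketch: hand-coded ELIZA-style rules vs. a classifier trained on
# a few example sentences. Hypothetical intents and phrases; a linear model
# stands in for the neural network for brevity.

import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# ELIZA-style: every behaviour is a hand-written pattern/response rule.
ELIZA_RULES = [
    (re.compile(r"where is the nearest restaurant", re.I), "Do you have a specific cuisine in mind?"),
    (re.compile(r"turn on the lights", re.I), "I'll turn on the lights for you."),
]


def eliza_reply(utterance):
    """Return the first hand-coded response whose pattern matches."""
    for pattern, response in ELIZA_RULES:
        if pattern.search(utterance):
            return response
    return "Please go on."  # generic fallback when no rule matches


# Modern approach: a handful of labelled sentences trains a statistical model.
TRAINING = [
    ("where is the nearest restaurant", "find_restaurant"),
    ("find me somewhere to eat", "find_restaurant"),
    ("i am hungry, any restaurants nearby", "find_restaurant"),
    ("turn on the lights", "lights_on"),
    ("switch the lights on please", "lights_on"),
    ("it is too dark in here", "lights_on"),
]

texts, labels = zip(*TRAINING)
intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(texts, labels)

print(eliza_reply("Where is the nearest restaurant?"))           # hand-coded match
print(intent_model.predict(["is there a restaurant close by"]))  # generalises to unseen phrasing
```

In both cases the resulting ‘intelligence’ is shallow, but the second route replaces pages of hand-written rules with a short list of labelled sentences – the positive trend described above.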
Where do we go from here?
Now apply this take on emerging tech to the travel industry, where the adolescence analogy fits particularly well. Travellers expect chatbots that actually chat (but don’t get them), while travel company CTOs sit on their thumbs, unsure what to do about Facebook Messenger bots, Alexa skills or anything else that pushes the status quo.
Human Augmented Intelligence – the co-operation between human and machine – is the solution.
Whether the human is assisting the machine or vice versa, useful artificial intelligence will give airline executives and their customers an improved user experience. AI will thus support IA – Intelligence Augmentation.
Intelligence augmentation will then produce better purchasing and booking experiences and improved customer support. Specialized vendors will, as a matter of course, incorporate the necessary background knowledge into their products to allow for continuous improvement, either through formal procedures or statistical models. Carefully designed user interfaces, which can now include text and voice, will allow for better human-machine cooperation, enriching each travel company’s functional knowledge.
These predictions might not be science-fiction sexy, but they will solve concrete business problems.
The importance of taking a long-term view
Keep in mind that a smart chatbot will always be a work in progress. No matter who is providing the training tools, it is important that the project team is able to access and integrate all sources of relevant information, from simple question/answer classifications to formal descriptions of requests and processes. That information will become part of the knowledge assets of the company.
The chatbot is the natural storage depot for business information, and it will work as an intermediary between the end customer and the corporation. Moreover, any party involved in creating or curating such information should work with open standards, in order to avoid vendor lock-in.
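To ground that point, here is a minimal sketch of keeping question/answer classifications in a plain, vendor-neutral format such as JSON; the file name, field names and sample entries are hypothetical.

```python
# Hypothetical example of storing chatbot training knowledge as plain JSON
# so it remains a portable company asset rather than being locked inside
# one vendor's platform.

import json

knowledge_asset = {
    "intents": [
        {
            "name": "baggage_allowance",
            "examples": ["how many bags can I check in",
                         "what is the baggage allowance"],
            "answer": "Economy fares include one checked bag up to 23kg.",
        },
        {
            "name": "change_booking",
            "examples": ["I need to change my flight",
                         "can I move my booking to another date"],
            "answer": "Bookings can be changed online up to three hours before departure.",
        },
    ]
}

# Exported as open, human-readable JSON, the same data can be curated by any
# team and re-imported into whichever NLP/ML platform the company uses next.
with open("chatbot_knowledge.json", "w") as f:
    json.dump(knowledge_asset, f, indent=2)
```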
The presence of a bot within a business is also added motivation for sound knowledge management practices: if the bot can use some information, even if only for internal purposes, it means that information is part of the company repository.
Implementing NLP and machine learning in the bot does not signal the end of the work needed to manage a successful chatbot – as I stated above, human involvement, or human augmented intelligence, is needed in conjunction with NLP/ML in order for the bot to succeed. Airlines can’t just ‘launch’ a bot and expect it to work in the wild on its own. No, they need to spend weeks, months and years building up the data repositories and libraries for the bot to access.
The other option is to work with a company, such as Travelaer, that has been building data repositories for travel companies that can be used to jumpstart or add years of intelligence to a chatbot.
Don’t get left behind
Those airlines that haven’t built or launched bots should start right now. With each day that passes they fall further behind when it comes to building these libraries and giving the bots actual experience. Building and launching a bot is like having a child – you need to teach it to become a functioning member of society. The sooner you start the lessons, the better for the child and those around it. Otherwise, you are going to be stuck with a perpetual teenager – anti-social, unable to integrate, awkward and dependent on the parent.
This is a viewpoint from Mike Slone, chief experience officer for Travelaer. It appears as part of the tnooz sponsored content initiative.