3 min read
In the last few years, AI and chatbots have become buzzwords synonymous with digital transformation. The press often focuses on pilots and prototypes, framing them as overnight wonders. Yet many of these projects were only made possible by years of work: the resulting prototype is just the tip of the iceberg.
Here are five of the biggest misconceptions about this technology in the industry.
Right now, there are three categories of chatbot in the market.
Yes - you can develop bots quickly. But in our experience, they fail fast.
It is all too easy to create a 'bad' bot. For instance, with many commercial bot frameworks, such as Microsoft's, you can upload a set of frequently asked questions and answers and have an FAQ chatbot running within an hour. This proves the concept and makes a nice demo, but it is a nice-to-have: at least in the insurance context, it often adds little or no value to your bottom line.
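To make that concrete, a 'one-hour FAQ bot' often boils down to little more than fuzzy string matching against a hand-written question list. Here is a minimal Python sketch; the Q&A pairs are invented for illustration:

```python
import difflib

# Invented insurance Q&A pairs, the kind you might upload to a bot framework.
FAQ = {
    "how do i file a claim": "You can file a claim online or by calling our claims line.",
    "what does my policy cover": "Your cover is listed in your policy schedule.",
    "how do i renew my policy": "We email a renewal quote 30 days before expiry.",
}

def answer(user_question: str) -> str:
    # Fuzzy-match the user's question against the known questions.
    matches = difflib.get_close_matches(user_question.lower(), FAQ.keys(), n=1, cutoff=0.5)
    return FAQ[matches[0]] if matches else "Sorry, I don't know that one."

print(answer("How do I file a claim?"))  # -> the canned claims answer
```

It works, but it can only parrot canned answers back; it cannot quote a premium, amend a policy, or transact anything.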
Jeff McMillan, Chief Analytics and Data Officer at Morgan Stanley, discusses this here. To make your chatbot implementation successful, it is key to align it with your wider operational strategy, objectives and vision - particularly if you are looking to transform a mammoth of an organisation like an insurance company.
Some things in life fix everything; Mum's chicken soup is one of them. Unfortunately, deep learning is not.
Deep learning is just one of the many algorithms and methodologies within the wider family of machine learning. In production systems, however, simpler, traditional techniques can often achieve your business goals. For instance, logistic regression over TF-IDF (Term Frequency - Inverse Document Frequency) features works well for many text classification tasks.
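As a rough illustration of how far the simple approach goes, here is a minimal scikit-learn sketch of an intent classifier built exactly this way; the training utterances and labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training set: user utterances labelled with an intent.
texts = [
    "I want to report a car accident",
    "my house was flooded last night",
    "how much is home insurance",
    "give me a quote for my car",
]
labels = ["claim", "claim", "quote", "quote"]

# TF-IDF turns each utterance into a weighted word-frequency vector;
# logistic regression learns a linear decision boundary over those vectors.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I was in a car accident"]))  # most likely ['claim'] on this toy data
```

No neural network in sight, and for a well-scoped task a model like this is often accurate enough, cheap to train and easy to explain.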
If you are working with a legacy system and vast amounts of disorganised data, AI is not a magic wand. The most important thing is to collect the right data; doing so also helps you avoid baking historical biases into your models.
Not all data is suitable for the problem at hand. For instance, if you are training an NLP model for the insurance context, you can find libraries full of standard, pre-written answers. But for your specific insurance product or customer, the right answers may be far from 'standard'. It is key to build a strong data acquisition strategy, with a data pipeline and a compliant cleaning process.
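As a sketch of what one step of that cleaning process might look like, here is a minimal Python function; the masking rules and placeholder tokens are assumptions for illustration, not a compliance recipe:

```python
import re

def clean_utterance(text: str) -> str:
    """Normalise case and whitespace and mask obvious personal data
    before the text enters a training set."""
    text = text.strip().lower()
    text = re.sub(r"\S+@\S+", "<email>", text)             # mask email addresses
    text = re.sub(r"\b\d{6,}\b", "<policy_number>", text)  # mask long digit runs
    text = re.sub(r"\s+", " ", text)                       # collapse whitespace
    return text

print(clean_utterance("My  policy 12345678, email JOHN@EXAMPLE.COM"))
# -> 'my policy <policy_number>, email <email>'
```

Real pipelines go much further - deduplication, consent checks, retention rules - and this work, rather than the model itself, is where most of the effort goes.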
A common problem in AI development is context. Expert understanding of the context a solution will operate in, and of its impact there, coupled with the wider context of strategic and operational business objectives, is essential.
Explore the ethical implications of this further here.
AI boils down to a set of machine learning and automation algorithms, which are not necessarily statistically driven.
In very simplistic terms, algorithms work this way:
No machine can think up by itself how to write code - or at least, not without being programmed by a human first.
No machine writes equations out of nothing. Again, not without a human telling it what to write.
No machine decides for itself how to automatically update those equations' parameters. This is done by aliens. Wait. Humans.
Ultimately, the 'how' always comes from a human. The 'what', such as an equation or a parameter, can change thanks to the machine, but only because that machine is optimising metrics designed and chosen by people.
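To make that division of labour concrete, here is a toy gradient-descent sketch: every 'how' in it (the equation's form, the metric, the update rule) is written by a human, and the machine only moves the numbers.

```python
# Humans choose the equation: y = w * x + b
# Humans choose the metric to optimise: mean squared error
# Humans choose the update rule: gradient descent
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # toy data where y = 2x

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    # The machine's only job: nudge w and b to shrink the human-chosen metric.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges towards 2.0 and 0.0
```

Change the metric and the machine will happily optimise something else entirely.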
AI takes a great deal of work, and it is our responsibility as humans to design it.
In short, we can define AI as an intelligent agent capable of rationally achieving a given goal. The quadrant below illustrates the four schools of thought; here at Spixii, we fall into the 'Acting Rationally' camp.
Does it take the glamour away from chatbots and AI?
Maybe.
But does it mean they are any less powerful?
Not at all.
The four schools of thought (http://aima.cs.berkeley.edu/)