Insurance companies are seeing the ground-breaking potential of 'the humble chatbot'. It's now not just a messaging interface. It's a powerful tool for data insight.
As with any new technology, the tech alone won't solve a challenge. Yet when designed with emotional intelligence, a chatbot can transform processes, making you (and your company) wildly successful. For insurance executives, customer care teams and data scientists, here are three pillars to consider when designing a successful chatbot.
Value:
Before building your chatbot online, map out your digital customer journey on paper first. Our user experience (UX) and user interface (UI) designers start by writing the conversation on post-it notes and presenting it visually on a table.
Once you’ve mapped out your conversation, consider what value you're bringing to your customer. For instance, are you giving more information about their insurance policy?
Mind-map ways you can deliver this value most effectively. Build this into your chatbot design.
Meaning:
Is the conversation meaningful? This is one of the most important, yet hardest to measure, metrics. For example, when using a chatbot for life insurance, you'll want to give customers support as quickly as possible with a human touch. The last thing you want is your customer desperately searching your website for a number to call, feeling lost at their time of need.
If a chatbot gives this number and passes them straight to an attentive customer service team, this quick and smooth interaction will be memorable. It helps restore trust to the relationship between insurance companies and their customers.
Review how your use-cases differ in performance. If you use the chatbot to give support after an emotional event, the positive impact may far outweigh the occasional failure to give customers a quote estimate quickly. Prioritise the use-case you give most attention to, and think critically about its potential impact.
Engagement:
Was the chatbot fun to use? How did it make your customer feel? Were they delighted, excited or reassured?
Not all customers will use the chatbot. Some might prefer talking over the phone. Yet, investing in an omni-channel service tailored to an individual's preferences enhances customer loyalty by giving them a choice.
For the customers who enjoy using the chatbot, or even feel more comfortable using it to share personal information, it's important to be careful with gamification. Gamification is often used in technology to make it addictive, building a habit through reward psychology.
Using gamification to help people get better protected? Great! Use it to sell more insurance products? Less so. Strike the right balance.
Once you've mapped out your conversation on paper, it's time to optimise your workflow.
Imagine you've designed a chatbot to give customers a quote estimate for their car insurance. Your current workflow includes 10 questions. After user testing, you notice the majority of users drop out on the sixth question.
What now?
You could do two things here. You could:
a) Reconsider the phrasing. Is the language clear enough? Are you asking for specific paperwork or highly sensitive information?
b) Consider reducing the total number of questions. Perhaps the workflow is too long, and people start disengaging after the fifth or sixth question.
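A quick way to find the drop-out point is to log, for each session, the last question a user answered and compute the share of sessions exiting at each step. A minimal sketch, assuming a hypothetical logging format where each session records the number of the last question answered:

```python
from collections import Counter

def drop_off_rates(last_answered, total_questions):
    """For each question, the share of all sessions that stopped there.

    `last_answered` has one entry per session: the number of the last
    question that session answered (hypothetical log format). Sessions
    whose entry equals `total_questions` completed the workflow.
    """
    exits = Counter(last_answered)
    n = len(last_answered)
    return {q: exits.get(q, 0) / n for q in range(1, total_questions + 1)}

# Example: in a 10-question workflow, most users stop after question 6
sessions = [10, 6, 6, 10, 6, 3, 6, 10, 6, 6]
rates = drop_off_rates(sessions, 10)
worst = max((q for q in rates if q < 10), key=lambda q: rates[q])
print(worst, rates[worst])  # 6 0.6
```

Once the worst question is identified, apply options a) and b) above to it first.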
Think back to the last conversation you had with a friend or family member. Did you need to consciously manage the conversation? The chances are, no. Day-to-day conversations have a natural flow, which usually happens without much thought. However, when you're delivering a service, it's important you're in control. This benefits not only you but your customer too.
To correctly optimise your workflow, guide your customer through the process. Signpost regularly. Where are they in the workflow? How long will it take? How many questions will you ask? Having this kind of information is reassuring, and helps make the experience as easy and hassle-free as possible.
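Signposting can be as simple as prefixing each question with its position in the workflow. A minimal sketch (the function name is hypothetical):

```python
def signpost(question_number, total_questions, question_text):
    """Prefix a question with its position in the workflow, so the
    customer always knows how far along they are."""
    return f"Question {question_number} of {total_questions}: {question_text}"

print(signpost(3, 10, "What is your postcode?"))
# Question 3 of 10: What is your postcode?
```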
To do all of this, invest in a management platform tool that'll help you visualise your workflow clearly and see every possible outcome.
So you’ve optimised your workflow, translated it into a management tool and started user testing. What now? Now's the time to review whether you need natural language processing that allows your customer to type freely, just like messaging a friend. This has brilliant use-cases. It's great for giving detailed feedback. It adds a personal touch to the interaction. Yet, for insurance and finance, you may not need it.
If you want to get a quick quote estimate for car insurance, you might need to give only limited information. In this situation, you might write:
"I'm 27, live in London and own a Fiat 500"
NLP can handle this fairly easily. You can train it to look for 3 things:
A number, relating to age
Place, city or postcode
Car model
Easy!
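A rule-based sketch of this extraction, assuming illustrative city and car-model lists (a production system would use a trained named-entity recogniser rather than hand-written patterns):

```python
import re

# Illustrative lookup lists; a real deployment would need far more
# coverage, or a trained NLP model instead of keyword matching.
CITIES = {"london", "manchester", "birmingham"}
CAR_MODELS = {"fiat 500", "ford fiesta", "vw golf"}

def extract_quote_details(message):
    """Pull the three quote inputs (age, place, car model) out of a
    free-text message using simple pattern and keyword matching."""
    text = message.lower()
    age_match = re.search(r"\b(\d{2})\b", text)  # a two-digit number, read as age
    city = next((c for c in CITIES if c in text), None)
    car = next((m for m in CAR_MODELS if m in text), None)
    return {
        "age": int(age_match.group(1)) if age_match else None,
        "city": city,
        "car": car,
    }

print(extract_quote_details("I'm 27, live in London and own a Fiat 500"))
# {'age': 27, 'city': 'london', 'car': 'fiat 500'}
```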
However, in the majority of cases, you'll need to ask a lot more questions.
What's more, Natural Language Processing makes the chatbot difficult to audit for compliance or marketing purposes. Essentially, you're using an algorithm to understand your customer. You need to be able to map out every possible outcome.
Let's take the example of life insurance. Imagine your customer writes: "I need insurance for my family". The chatbot asks, "Ok. How many people are in your family?" The customer replies, "My wife and two kids."
If Natural Language Processing makes a mistake and reads this as a large family, the chatbot will switch to a workflow for selling life insurance for a larger family. The difference would be subtle. The customer might think they’d been fully understood, which means they may have been misled into buying an ill-suited product.
In this case, who's liable? The nature of machine learning means a model can never successfully predict 100% of situations; one that seemed to would point to overfitting. Overfitting is where a model has been trained on a sample of data and can accurately predict outcomes within that sample, but cannot predict with the same level of accuracy when presented with new data.
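A toy illustration of that gap between training data and new data, using a "model" that simply memorises its training examples (illustrative only, not a real NLP model):

```python
# Training data: phrases mapped to family size. A model that memorises
# these scores perfectly on them, but cannot generalise to new phrasing.
train = {"my wife and two kids": 4, "just me": 1, "me and my partner": 2}

def memorising_model(message):
    return train.get(message)  # perfect recall on training data only

train_accuracy = sum(memorising_model(m) == size for m, size in train.items()) / len(train)
print(train_accuracy)  # 1.0 on the data it was trained on

print(memorising_model("my husband and three children"))  # None: unseen phrasing fails
```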
If a 1% error rate is acceptable for your specific use-case, then Natural Language Processing is great to use. If not, approach slowly and with caution. It's also worth remembering that achieving only a 1% error rate is exceptional. If the regulator's happy, then it may well be worth going ahead.
For now, for immediate, auditable impact, one-click buttons are fully compliant.
Ultimately, it circles back to one of the key lessons of data science: collect data in a robust, strict manner. We've designed SPIXII to do this, using behavioural data to continually optimise the conversation design and workflow. The analytics provide insights your customer service team may already have from gut instinct; chatbots make those insights measurable and scalable. Chatbots cannot replace the customer service team. Rather, they take pressure off the team and make its workload more manageable.
For more details on how to design good chatbots read the blog: How to avoid chatbots' failure with Conversational Process Automation.