Some 40 percent of large U.S. companies—those with more than 500 employees—will have incorporated chatbots or virtual assistants by the end of 2019, according to a Spiceworks survey. This study, as well as other data, leads us to conclude that using intelligent agents to handle the growing number of conversations between brands and audiences has gone from a mere trend to an irrefutable reality.
The mass adoption of smartphones has made instant messaging the default communication channel in our personal lives. Multiple recent studies confirm that consumers want immediate answers in their brand interactions. This is further supported by the main benefits that consumers associate with chatbots, which include:
- 24-hour service (64 percent)
- Immediate response (55 percent)
- Answers to simple questions (55 percent)
The pressure these expectations have put on companies has led to an environment that favors the use of company chatbots.
Chatbots and their potential impact on reputation
To work properly, a chatbot system should go through a “training process” in which a human team provides it with information about the typical questions, answers and conversation flows it will be involved in. Not paying enough attention to a chatbot’s training process can have consequences beyond what one might initially think.
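The training process described above can be pictured as supplying the system with intents, sample questions and canned answers. The sketch below is a deliberately minimal illustration of that idea, with hypothetical intents and a naive word-overlap matcher; production chatbots use trained NLU models rather than keyword matching.

```python
# Minimal illustration of chatbot "training": a human team supplies typical
# questions (utterances) grouped by intent, plus the answer for each intent.
# Intents, utterances and answers below are invented for illustration.

TRAINING_DATA = {
    "opening_hours": {
        "utterances": ["what are your opening hours",
                       "when are you open",
                       "what time do you close"],
        "answer": "We are open 24 hours a day, 7 days a week.",
    },
    "order_status": {
        "utterances": ["where is my order",
                       "track my order",
                       "has my package shipped"],
        "answer": "You can track your order from the 'My orders' page.",
    },
}

# Fallback when no intent matches -- hand the conversation to a human.
FALLBACK = "Sorry, I didn't understand. Let me connect you to a human agent."

def respond(message: str) -> str:
    """Pick the intent whose sample utterance shares the most words
    with the user's message; fall back if nothing overlaps."""
    words = set(message.lower().split())
    best_intent, best_score = None, 0
    for intent, data in TRAINING_DATA.items():
        for utterance in data["utterances"]:
            score = len(words & set(utterance.split()))
            if score > best_score:
                best_intent, best_score = intent, score
    return TRAINING_DATA[best_intent]["answer"] if best_intent else FALLBACK
```

Skimping on the training data in `TRAINING_DATA` is exactly the kind of shortcut the article warns about: every question outside the prepared flows falls through to the fallback, or worse, to an ill-fitting canned answer.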
The case of Cleo, a financial services chatbot operating on Facebook Messenger, is a good example of this. In the week of Valentine’s Day 2019, Cleo’s creators introduced a special conversation mode designed to give a touch of “romance” to the state of its users’ finances. However, some rather unfortunate turns of phrase horrified many women. Independent technology journalist Holly Brockwell drew attention to these messages, relating some of them to sexual violence.
The impact on brand image can be very serious and hard to detect, as these types of messages are produced by an autonomously functioning artificial system: the chatbot. At the same time, brands must not forget the importance of the chatbot’s personality.
Microsoft’s Senior Manager of Global Engagement Purna Virji demonstrated the need to build artificial conversation agents with clearly defined personalities, aligned with brand style and tone.
However, although chatbot personalities have steadily improved, we must not lose sight of the fact that, in conversations between humans and chatbots, a human brain comes into contact with an artificial one, with a radically different architecture and way of functioning.
Psychological and social questions very often take second place in chatbot development, resulting in conversational models better suited to artificial “brains” than to human ones. Carefully designing the conversations a chatbot can have, so that it becomes a more natural, persuasive and useful agent, is one of the keys to a good conversational experience.
Resistance to artificial agents
Regardless of the benefits consumers associate with chatbots, we should not overlook the fact that certain user profiles are very reluctant to speak with machines. In a recent CGS survey, 70 percent of respondents said they were reluctant to communicate with a brand that did not have a human customer service agent available.
Customer experience as the goal
Chatbots, far from being a passing fad, are here to stay. Their limitations in handling more complex situations, however, make it essential to have people available to take charge of certain conversations between brands and their audiences.
These new conversation technologies have a major impact on company and brand communication and reputation, so communication professionals should play a leading role in this process. Not using a chatbot because of the possible risks it entails may mean being left behind. This is a technology with clear potential for disrupting the all-important relationship between brands and their audiences.