[UX Labs] OVH Chatbot: artificial intelligence in support of customer experience

As part of the OVH UX Labs, a new collaborative R&D space dedicated to improving the user experience, we’re happy to share the details of our very first project: the OVH virtual assistant. Now available through Facebook Messenger and Slack, this chatbot can run a diagnostic as well as answer a set of common questions about your web hosting service. But this is only the beginning: we now need your help to extend its functional coverage, so we’ve made it open-source. More on this below.

We’ve created this chatbot because we want our customers to be supported in more ways when facing certain issues. At the moment we’re only referring to web hosting, but it could eventually be extended to other OVH services, if the original project is a success. One feature that users will find particularly useful is the ability to get an on-the-spot diagnostic if a web hosting service goes down or is being affected by a configuration error, without having to contact support or open a ticket.

To design the chatbot, we started by focusing on the main reasons that customers call web support. It’s not surprising to learn that the most common reason goes like this: “my website doesn’t work anymore”. By putting this call category under the microscope, we’ve been able to isolate several different cases, from the simplest to the most complex. Some causes of failure are pretty basic and can be diagnosed automatically, whereas others require the development of an artificial intelligence in order to be accurately identified. From a partially configured SSL certificate, to DNS configuration issues, to database connection errors, this virtual assistant can detect the root cause of your problem with only a few questions. In the event of an incident affecting either part or all of the OVH infrastructure, it will soon be able to direct you to the appropriate task on the OVH status web page, so that you can monitor the progress of the incident resolution.

Now, let’s see what’s hiding under the hood of that chatbot.

The technologies we use

• Language processing
In order for our chatbot to understand what you want to do, it must use Natural Language Processing. This NLP is useful for the chatbot to extract the meaning behind a sentence formulated in human language. We’ve used the wit.ai service, which is not only free but also provides a REST API (please note that there are a few competitors such as api.ai, recast.ai, and luis.ai). In short, this NLP engine takes a sentence and ignores any words that aren’t necessary for understanding the request, such as determiners, plural form, personal pronouns, etc. Then, after deleting these words, the system searches for the request among the remaining words, based on the “catalogue of requests” it has been taught. In fact, for the artificial intelligence to work properly, it needs to go through a learning phase beforehand. For example, the sentence “my website doesn’t work any more” corresponds to the request “website_break”. Learning simply consists of feeding the artificial intelligence engine various examples of sentences by matching them to the corresponding request.
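To make the principle concrete, here is a deliberately naive sketch of the flow described above: strip the filler words, then match what remains against a small catalogue of requests. The stop-word list and the keyword catalogue are invented for the example; wit.ai’s real models are far more capable than this.

```javascript
// Words that carry no meaning for intent detection (illustrative subset).
const STOP_WORDS = new Set(["my", "the", "a", "an", "any", "more", "is", "it", "with"]);

// Hypothetical "catalogue of requests": keywords mapped to a request name.
const CATALOGUE = {
  website_break: ["website", "work", "down", "broken"],
  ssl_issue: ["ssl", "certificate", "https"]
};

function detectRequest(sentence) {
  // Lowercase, drop punctuation, split into words, remove stop words.
  const words = sentence
    .toLowerCase()
    .replace(/[^a-z\s]/g, " ")
    .split(/\s+/)
    .filter((w) => w && !STOP_WORDS.has(w));

  // Pick the request whose keywords best match the remaining words.
  let best = null;
  let bestScore = 0;
  for (const [request, keywords] of Object.entries(CATALOGUE)) {
    const score = words.filter((w) => keywords.includes(w)).length;
    if (score > bestScore) {
      best = request;
      bestScore = score;
    }
  }
  return best;
}
```

With this toy catalogue, `detectRequest("my website doesn't work any more")` resolves to `"website_break"`, which is exactly the matching step that the real NLP engine performs after its learning phase.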

• Communication platforms
Now that our chatbot “understands” sentences from the human language—or at least is able to match them with a request—it’s time to provide it with a channel where it can communicate with a user. We quickly chose Facebook Messenger because of its popularity. Messenger provides an HTTP API to retrieve various events, such as a message being received, a message being read, a user logging in, etc. These events are forwarded to an HTTP webhook that we register when creating the Messenger application in the Facebook Messenger developer interface. Answers are then sent to the Messenger API through HTTP calls. The Slack API works in the same way, apart from a few different concepts in terms of authentication and permissions when it comes to Slack teams.
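When you register the webhook URL in the Messenger developer interface, Facebook first performs a verification handshake: it sends a GET request carrying `hub.mode`, `hub.verify_token` and `hub.challenge` parameters, and the API must echo the challenge back if the token matches. Here is a minimal sketch of that check as a pure function (the token value and the Express wiring shown in comments are placeholders, not the production code):

```javascript
// Token you chose when configuring the webhook in the developer interface
// (placeholder value for the example).
const VERIFY_TOKEN = "my-secret-token";

// Given the query parameters of Facebook's GET request, decide the response.
function handleVerification(query) {
  if (query["hub.mode"] === "subscribe" && query["hub.verify_token"] === VERIFY_TOKEN) {
    // Token matches: echo the challenge back so Facebook activates the webhook.
    return { status: 200, body: query["hub.challenge"] };
  }
  // Wrong token: refuse, so nobody can hijack the webhook registration.
  return { status: 403, body: "Verification failed" };
}

// Wired into an Express app, this would look something like:
//   app.get("/webhook", (req, res) => {
//     const r = handleVerification(req.query);
//     res.status(r.status).send(r.body);
//   });
```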

To reply to the events sent over the webhooks, we need an HTTP API. The one we’ve developed was made in Node.js using the Express.js framework, and is connected to a MongoDB database-as-a-service provided by OVH. Each communication platform has its own routes, which we declared in the Slack and Facebook Messenger developer interfaces. When we receive a sentence, we detect the request with wit.ai and perform the appropriate processing, notably by looking up information via api.ovh.com. All our logs are sent to Graylog using the Logs Data Platform service offered by OVH. This allows us to monitor our service, so that we can quickly detect any failures or errors in our API.
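The processing step can be pictured as a simple dispatch table: once the NLP engine has named the request, it is routed to the matching diagnostic handler. The handler names below and the shape of their results are illustrative assumptions, not the actual production code (which would call api.ovh.com at this point):

```javascript
// Hypothetical diagnostic handlers, one per known request.
const handlers = {
  website_break: (domain) => ({ action: "diagnose", target: domain }),
  ssl_issue: (domain) => ({ action: "check_certificate", target: domain })
};

// Route a detected request to its handler; fall back gracefully when the
// NLP engine returned something the bot doesn't know how to handle yet.
function processRequest(request, domain) {
  const handler = handlers[request];
  if (!handler) {
    return { action: "fallback", target: domain }; // e.g. ask the user to rephrase
  }
  return handler(domain);
}
```

The fallback branch matters in practice: a chatbot that admits it did not understand, and asks the user to rephrase, is far less frustrating than one that answers the wrong question.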
Here’s an important point regarding this API: for security reasons, Facebook and Slack require it to be accessible over HTTPS. To enable HTTPS, we used another OVH service available in beta, called the SSL Gateway. This service speeds things up significantly for both the SSL installation and its maintenance (certificate renewal), since OVH takes care of everything. In practice, simply point it at an IP address and a domain name, and that’s all there is to it! The SSL Gateway will send an email with the details to enter in your DNS zone, so that you can communicate over HTTPS through that domain.

The OVH chatbot code is open-source: it’s now up to you!

Using a chatbot to feed you information or answer questions opens up a world of possibilities for us. At this point, we just need to let our imaginations run free, both in our ideas and in what we build. What we want is to develop those ideas with you.

This is why we’ve made the code open-source, after refactoring it and open-sourcing a few internal dependencies. What kinds of contributions are we expecting? How are we going to enrich the project on our side? Find out below.

In-house projects

Since the project’s launch, we’ve collected some interesting suggestions in-house for increasing the functional coverage of the service and expanding its use. We will soon give our continuous improvement managers - who are spread across all our product departments and constantly in contact with the technical support teams - the opportunity to build a database of new requests, based on recurring questions from users, together with the appropriate answers.

The use of the chatbot will eventually be extended to all OVH services and, beyond the diagnostic mode, users will be directed to guides or threads from the OVH Community forum when searching for an answer.

What’s more, some of our Customer Advocates have come up with a way to use the chatbot’s AI to provide some basic sales advice based on customer needs, before offering to schedule a phone appointment to continue the conversation with a human being if needed.

Your contribution is welcome!

Alongside this work done inside OVH (which also includes the implementation of a feedback module in the chatbot, to assess and continuously improve the tool), we would like to collect your contributions. Now that the chatbot code is open-source under the BSD 3-Clause License, you can add requests and answers as well as diagnostic ideas. This will apply to all OVH services, thanks to the information available in the OVH API. For example, your customers could benefit from using a virtual assistant as a white-label solution if you’re a reseller of OVH services, or you could let everyone take advantage of your work by offering us your code through a pull request on GitHub.

But that’s not all! Why not come up with a monitoring or alert system based on the chatbot, which would notify you about a full disk or a CPU overload on your server? And what if you made the chatbot available on new platforms like Twitter? Don’t be afraid to surprise us... and if you get stuck whilst implementing your idea, ask for our help!
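To show how small the first step of such a monitoring idea could be, here is a tiny sketch that turns a disk-usage reading into a chatbot alert message once it crosses a threshold. The message wording and the 90% default are arbitrary choices for the example; plugging in a real metrics source and pushing the message through Messenger or Slack is left to your imagination:

```javascript
// Return an alert message when disk usage crosses the threshold, or null
// when everything is fine (threshold and wording are illustrative).
function diskAlert(server, usedPercent, threshold = 90) {
  if (usedPercent < threshold) {
    return null; // nothing to report
  }
  return `ALERT: disk on ${server} is ${usedPercent}% full (threshold: ${threshold}%)`;
}
```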

Developing your own chatbot?

Finally, this chatbot was designed using standard technical components available at OVH (SSL Gateway, DBaaS MongoDB, Logs Data Platform…). There will soon be a beta version of another component that will make your chatbot even easier to deploy: Function as a Service. In other words, a serverless computing service capable of executing code on demand in reaction to specific events, without any need for you to worry about the underlying resources.

We hope that many of you will try to deploy your own chatbot. Because beyond the buzz - a lot of people from various sectors are talking about this phenomenon as being revolutionary - it is a powerful tool that opens up an infinite number of possibilities, precisely because it does away with a traditional interface.