Using Facebook’s Wit.ai to Enhance NLP in Bots

Before you read this post, check out:
Part I. Hybrid Bots: How to Best Leverage + Optimize AI Quickly
Part II. Using Google’s Dialogflow in Chatbots for Beginners

If you’ve read Part I + Part II, then you can skip this. For those who are cheater, cheater pumpkin eaters, here is a quick recap to bring you up to speed.

The Challenge:
The challenge we set for ourselves was this: take one version of a bot and integrate the bot with the following AI systems:

a.)    Dialogflow (formerly API.AI), owned by Google

b.)    Wit.AI, owned by Facebook

c.)     Lex, owned by Amazon

To make this exercise simple, we will:

·         Use the same decision tree in each bot scenario;

·         Only integrate NLP for one node in our conversation;

·         Use NLP for only one skill set: to answer questions about pricing

We will then A/B test the bots on our website over a period of one month. We will review which bot performed best, as measured by the number of times it correctly identified the user’s intent and gave an appropriate response.

Training Data:
In a hybrid bot model, training data doesn’t need to be nearly so voluminous; we rely on the AI systems in place to fill in the gaps. Prior to this test, we had run a bot on our website that allowed the user to ask an open-ended question: “Ask me anything!”

From the user responses, we collected 100 responses asking about pricing for our product, Instabot. I then pulled them into a pivot table to extract all the unique ways people had asked about price. We will use this as our input for all three platforms.
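If you’d rather script this step than build a spreadsheet pivot table, a few lines of Python can pull the unique phrasings out of the raw responses. (The sample phrases below are illustrative, not our actual collected data.)

```python
# Collapse raw user responses down to the unique ways people asked about price.
raw_responses = [
    "What is the cost?",
    "what is the cost",
    "How much does Instabot cost?",
    "What is the cost?",
    "pricing?",
]

def normalize(text):
    # Lowercase and strip punctuation/extra whitespace so near-duplicates collapse.
    return " ".join(text.lower().strip(" ?!.").split())

unique_phrasings = sorted({normalize(r) for r in raw_responses})
for phrase in unique_phrasings:
    print(phrase)
```

Running this over our 100 collected responses is what produced the 50 unique statements in Table I.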

See the table below:

Table I. Unique Statements Asking About Price

Part III. Platform I: Wit.ai

Wit.ai’s pitch is that it allows developers (and, after this tutorial, you) to build a Siri-like speech interface for their apps: an API that uses natural language processing (NLP) to turn what users say into actionable data. We’re going to leverage this platform to enable your chatbots with NLP.

Create an Entity
When you first sign onto Wit.ai, you’ll need to create an app, and then create an “Entity”.

An “Entity” works much like an Intent in Dialogflow: it represents a specific goal the user is inquiring about or needs assistance fulfilling. We’ll name our entity “Pricing”.


Set Your Trait
Next, you’ll need to click into your Pricing entity and choose a “Trait”. (*The way Wit.ai explains this is the most confusing thing ever. Their definition of a trait is, “Entity values for this entity. This is the value that will be returned in the API when Wit extracts this entity”--Dear Wit.ai team, please see this.)

I’ll explain it differently. When the user inputs a phrase or expression (e.g. what is the cost?) that matches the intent for pricing (my entity), then the bot should respond with a “Trait”, or a set way of responding (e.g. Sure, the pricing for Instabot is…). I put my template response into the trait value.


Add Your Expressions
Now is where you add your data. In the section “Test how your app understands a sentence”, you’ll need to type in all the phrases that support the intent. I type in all 50 of the statements from Table I and classify each one under the right entity, “pricing”. Then I choose the “Trait”, or response, for each expression. (Side note: you can’t put in your expressions until you have chosen a trait; trying to do so will result in an error message.)


Using Webhooks to Access Wit.ai
Now I just have to connect Wit.ai with our bot. I can do this by using webhooks. (If you’re not sure what webhooks are, check out this post; webhooks are a way to link your bots to external services.) We’ll be leveraging webhooks within our own chatbot, Instabot.

In your Application Settings, click on Integrations, tap Add/Edit Webhooks, and click “Add New Webhook”.


Once in Webhooks, name your webhook. We’ll name ours “Wit.AI.Request”. Use the method GET and type in your API endpoint. This is unique to each API provider and tells our bot where to request the data. We will use the following in our settings:


We will put this in the section marked API endpoint:


We will then tell the webhook how we want the data to be handled. Under “Request Parameters”, click “Add New”; you can name it “q”. (This is how Wit.ai tells you to make the request.) In this case, select “String”. This tells the webhook that you are expecting an unbroken, unstructured string of letters, numbers, etc. (This could be an e-mail address, a word, a phone number, and so on.)
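Under the hood, the webhook is just issuing an HTTP GET with the user’s text in the `q` parameter. Here is a rough sketch of what that request looks like; the endpoint path and version date follow Wit.ai’s documented `/message` format, but treat the exact values as assumptions, and the token is a placeholder:

```python
from urllib.parse import urlencode

WIT_API_BASE = "https://api.wit.ai/message"  # assumed Wit.ai message endpoint
API_VERSION = "20200513"                     # example API version date
SERVER_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"    # placeholder; never commit real tokens

def build_wit_request(user_text):
    # The webhook sends the raw user utterance as the "q" string parameter.
    query = urlencode({"v": API_VERSION, "q": user_text})
    url = f"{WIT_API_BASE}?{query}"
    headers = {"Authorization": f"Bearer {SERVER_TOKEN}"}
    return url, headers

url, headers = build_wit_request("What is the cost?")
print(url)
```

The webhook settings in Instabot are doing exactly this plumbing for you; you only supply the endpoint, the method, and the `q` parameter.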


Test Your Webhook
Now you need to test your webhook to make sure it works. Click the words “Test Request”, type in one of your intent phrases (I typed, “What is the cost?”), and then hit “Send”.

Wit.ai is different from Dialogflow in that it gives a confidence level. This is a throwback to statistics and has a much deeper explanation than we can cover here (I recommend Khan Academy if you want to dig in). For our purposes: the higher the confidence, the more likely the input matches the pricing intent. One of our expressions is “what is the cost?”, and we received a confidence level of 0.9955, or 99.55%. This makes sense and looks good!
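To make the confidence idea concrete, here is a sketch of reading it out of a Wit-style JSON response. The response shape below is a simplified assumption for illustration, not the exact payload your app will receive:

```python
# A simplified stand-in for what Wit.ai returned for "What is the cost?".
sample_response = {
    "_text": "What is the cost?",
    "entities": {
        "pricing": [
            {"confidence": 0.9955, "value": "Sure, the pricing for Instabot is..."}
        ]
    },
}

def top_match(response):
    # Return (entity_name, confidence, trait_value) for the
    # highest-confidence entity in the response.
    best = None
    for name, candidates in response["entities"].items():
        for c in candidates:
            if best is None or c["confidence"] > best[1]:
                best = (name, c["confidence"], c["value"])
    return best

name, confidence, value = top_match(sample_response)
print(f"{name}: {confidence:.2%}")  # pricing: 99.55%
```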


A Couple Challenges:

Having a confidence score is really awesome and offers a form of transparency into how the NLP is working, or not working, for that matter. However, there are challenges with this approach.

Challenge I: There is no easy way to put conditional restrictions on the confidence score. For example, there should be a way of saying, “if Wit.ai isn’t 99% sure that this is about pricing, please don’t use a pricing response.” This should be built into the platform. (If it’s there…I couldn’t find it.)

For example, when I test a few other phrases, such as “what are the features?”, Wit.ai comes back with a 0.79 confidence that it should respond with our pricing information. No bueno!
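Since the platform doesn’t expose a confidence cutoff, one workaround is to enforce it yourself wherever you process the webhook’s output. A minimal sketch, assuming you can intercept the parsed confidence before choosing a response:

```python
CONFIDENCE_THRESHOLD = 0.99  # our bar for trusting the pricing intent

def choose_response(confidence, pricing_reply, fallback_reply):
    # Only use the pricing trait when Wit.ai is at least 99% sure;
    # otherwise route to a safer fallback response.
    if confidence >= CONFIDENCE_THRESHOLD:
        return pricing_reply
    return fallback_reply

# "What is the cost?" scored 0.9955, so it clears the bar;
# "what are the features?" scored 0.79, so it does not.
print(choose_response(0.9955, "Pricing for Instabot is...", "Let me connect you."))
print(choose_response(0.79, "Pricing for Instabot is...", "Let me connect you."))
```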


Challenge II: There is no way to set a catch-all if the user’s input does not match any intent in the system. Right now the response will just be blank.

This means you would have to spend SO MUCH time coming up with every possible scenario that it almost seems futile to continue. At this point, I was about to abandon ship, but I did some brainstorming with our technical product manager, Jimmy, and he came up with this clever work-around.

We created another “entity” (intent) called “catch-all.” (We followed the exact process as we did above with the pricing entity.)

We gave “Catch-all” entity a trait value of, “This would be better answered by my teammate. Let me connect you. Just need some quick information.”

And then we proceeded to fill the “catch-all” entity with what would essentially be useless and nonsensical phrases:


The idea behind this is that if the input matches the pricing intent, the bot will give a pricing response. If it doesn’t match pricing, it will fall through to the test of whether it matches the “catch-all” intent. We also have to do something a little different when we connect the system with our bot. Let me show you.

This time when we create Output Parameters, or the way that the bot responds to the user’s inputs, we do something different.

First, we add an Output Parameter for our pricing entity. Under “Output Parameters”, we click “Add New”, name it “pricing entity”, and select entities.pricing.value from the dropdown. This tells the webhook to use the response, or “trait”, in the pricing entity to respond when the user’s input matches the pricing intent.


Second, we add another Output Parameter, following the same steps: under “Output Parameters”, we click “Add New”, name it “catchall entity”, and select entities.catchall.value. This tells the webhook to use the response, or “trait”, in the catchall entity to respond when the user’s input matches the catchall intent.
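The two Output Parameters can be thought of as an ordered fallback: try the pricing value first, and only if it’s empty, use the catch-all. A sketch of that selection logic (the dotted key names mirror the dropdown values above, but the exact field names are assumptions):

```python
def select_reply(webhook_output):
    # Mirror the bot's behavior: use the pricing trait if Wit.ai matched
    # the pricing intent, otherwise fall through to the catch-all trait.
    for key in ("entities.pricing.value", "entities.catchall.value"):
        value = webhook_output.get(key)
        if value:
            return value
    return None  # neither intent matched (shouldn't happen with a trained catch-all)

pricing_hit = {"entities.pricing.value": "Sure, the pricing for Instabot is..."}
no_match = {"entities.catchall.value": "This would be better answered by my teammate."}
print(select_reply(pricing_hit))
print(select_reply(no_match))
```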


Connecting Your Webhook to Your Bot

Now, we’ll go into our pre-built conversation. I have created a part of the decision tree that allows users to “Ask a Question” and numbered each node of my conversation. Statement 2 says, “Ask a Question”, the bot in Statement 3 says “Sure! Ask me anything!” and allows the user to put in a free-text response.


As my user types anything into the free-text box, the bot will send the language to Wit.ai, which will decide whether it matches one of our created intents. I want the bot to respond with my pricing response when the intent is pricing; otherwise I want it to use my catch-all response. I’ll open up the conversation node just after Statement 3, which should provide a response to the user’s free-text question.


When I open it up I type the following:

[=Wit.AI.Request(q:@Statement_3).pricing entity]

[=Wit.AI.Request(q:@Statement_3).Catchall entity]


Let me break down how this works:

[=Wit.AI.Request(q:@Statement_3).pricing entity]

·         “[=” notes that it is a webhook

·         “Wit.AI.Request” is the name of my webhook

·         “(q:@Statement_3)” says to run the webhook on the free text the user entered in “Statement 3”, which is the name of my conversation node

·         “pricing entity” is what I named my output

It will look to match this intent first, and then if it doesn’t it will go to the catchall entity:

[=Wit.AI.Request(q:@Statement_3).Catchall entity]

·         “[=” notes that it is a webhook

·         “Wit.AI.Request” is the name of my webhook

·         “(q:@Statement_3)” says to run the webhook on the free text the user entered in “Statement 3”, which is the name of my conversation node

·         “catchall entity” is what I named my output

Now hit “Save”, and you’re all done! When people type in free text, if they use a word or phrase that matches my pricing intent, the response will be the pricing language. If not, the user should receive the catch-all response. It’s better, but certainly not perfect.

You can test it out here. Boundary test away, and feel free to send us feedback.