The ABCs of NLP: The Basics

What is NLP?

Natural language processing (NLP) is a branch of artificial intelligence that helps computers understand, interpret, and manipulate human language, deriving meaning so they can respond appropriately.


In practice, NLP is accomplished through algorithms that process text to derive meaning from words and generate appropriate responses.

 

How does Instabot use NLP?

We’re building out a platform that allows non-technical people to use natural language processing, so we constantly test NLP APIs to make our bots smarter. You can check out our past experiments with different NLP platforms here and here.

As we use NLP in practice within our own bots, we see both the benefits and the drawbacks of implementing NLP tools first-hand.

One of our goals is to bring NLP to the people, which we accomplish by sharing our knowledge, learnings, and tips on how to implement these tools. Just as we've made our chatbot platform intuitive and easy to use for the non-technical person, we want to help the layperson stay informed in order to progress with emerging tech in our society.

 

The Basics

This practical, high-level lesson covers some of the basics (regardless of your technical skills or ability) to prepare you for the process of training and using different NLP platforms.

Our first task is to understand the lingo.

*Disclaimer*: NLP is a field under construction and constantly evolving. Vocabulary may vary across different NLP platforms, so be careful not to get too focused on semantics.

Terms to Know

Intent: What the user wants. There are numerous phrases, sentences, and expressions that can have the same meaning. Intents are a way of categorizing meanings for a string of words.

Entities: Entities are building blocks of intent-based systems that allow you to extract and categorize information from a string of text or phrase.

Fallback Intent: A catch-all that most NLP systems use when they aren’t able to understand the actual intent of the user.

Training Data: Data used to train an algorithm to better understand entities, intent and context.
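To make these terms concrete, here is a minimal sketch of how an intent-based system might represent them. The intent names, phrases, and entity values are hypothetical examples, not the data model of any particular NLP platform:

```python
# Intents map a category of meaning to training phrases and a response.
intents = {
    "pricing": {
        "training_phrases": [
            "How much does this plant cost?",
            "What's the price of a snake plant?",
        ],
        "response": "Let me look up that price for you.",
    },
    "soil_types": {
        "training_phrases": [
            "What soil types do you carry?",
            "Do you have potting soil?",
        ],
        "response": "We carry several soil types, including potting mix.",
    },
}

# Entities are the building blocks extracted from user text.
entities = {
    "plant": ["snake plant", "fern", "cactus"],
    "soil": ["Miracle-Gro", "potting mix"],
}

# The fallback intent is a catch-all for anything unrecognized.
fallback_response = (
    "Not sure I understand. Can I connect you with customer support?"
)
```

The training phrases are the "training data": the more varied examples each intent has, the better the system can map new user text onto the right bucket.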

 

To better understand these terms, let's use an analogy to put them in a more practical context.

Training an NLP system is very similar to training a new Home Depot associate. You want to train the new associate to do certain things: interact with customers, figure out what they’re saying, and interpret the customer's intent from that interaction.

[Image: NLP Presentation12.png]

 

The purpose of establishing an "Intent" is to understand what your user wants so that you can provide an appropriate response.

In the example above, the user is interested in understanding the cost of a plant.

In the Home Depot analogy, the new hire won’t know anything about the way the store operates or how to best help customers, so you’ll have to give them a lot of historical requests that are similar to what your customers might ask them.

During training you might tell the new Home Depot hire that "these types of questions relate to pricing requests", or "these questions relate to the soil types we have". The vast majority of these requests will fall into different buckets, or "intents". Each bucket/intent has a general response that will handle it appropriately.

In practice, deriving intent is a challenge, and due to the infancy of this technology, it is prone to errors. Having a "Fallback Intent" serves as a bit of a safety net in the case that your bot is not yet trained to respond to certain phrases or if the user enters some unintelligible or non-intuitive input. 

In the diagram above, when a user responds to the bot with content the bot has yet to be trained on, e.g. "How do you eat beluga", or something unintelligible like "Aaaaaa****", its fallback response would be "Not sure I understand. Can I connect you with customer support?"

The Home Depot associate can be trained to learn certain types of intents and respond appropriately, but new hires have limits to their knowledge. If they’re not trained to handle certain types of things, they have to recognize that they don’t know how to handle that request, and they will then hand it off to someone who can help—that’s a fallback intent.

If a gardening associate were asked about free shipping for an online purchase, they wouldn't know how to answer that request; it's not in their repertoire of knowledge. Their fallback would be to direct the customer to someone who can help with that question, probably saying something like "Someone at our customer service desk can help you with that".
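As a rough illustration of how a fallback fires (a toy sketch, not how any particular NLP engine works internally), intent matching typically produces a confidence score, and the fallback takes over when no intent scores above a threshold. The intents, phrases, and threshold below are all made up:

```python
def match_intent(user_text, intents, threshold=0.5):
    """Score each intent by word overlap; fall back below the threshold.

    This is a toy bag-of-words matcher, not a production NLP engine.
    """
    words = set(user_text.lower().split())
    best_intent, best_score = None, 0.0
    for name, phrases in intents.items():
        for phrase in phrases:
            phrase_words = set(phrase.lower().split())
            # Jaccard similarity: shared words / total distinct words.
            score = len(words & phrase_words) / len(words | phrase_words)
            if score > best_score:
                best_intent, best_score = name, score
    if best_score < threshold:
        return "fallback"  # the catch-all safety net
    return best_intent

intents = {
    "pricing": ["how much does this plant cost"],
    "shipping": ["do you offer free shipping"],
}
print(match_intent("how much does this plant cost", intents))  # pricing
print(match_intent("Aaaaaa****", intents))                     # fallback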

Next are "Entities", which help you to extract and categorize information from a string of text or phrase that you are given.

 

[Image: NLP Presentation22.png]

In the example above, you can see different categories of entities, grouped together by name or item type into pretty intuitive categories. Categorizing different information types allows you to understand a user’s specific needs.

For example, if a Home Depot customer is interested in understanding the cost of a snake plant in a ceramic pot with Miracle-Gro soil, the associate knows exactly what to look for on the floor to calculate the cost and give the customer a quote. Having entities defined makes the intent more specific and allows the associate to assist the customer in a timely manner.

While there are a few entities listed in this example, it's easy to see that this task is detail oriented. In practice, building out your entities is a time-intensive process.
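To see what entity extraction does mechanically, here is a hedged sketch using a simple keyword lookup over the snake-plant example. Real NLP platforms use trained models rather than lookups, and the entity types and values here are illustrative only:

```python
# Toy entity catalog: entity types mapped to known values.
ENTITIES = {
    "plant": ["snake plant", "fern", "cactus"],
    "container": ["ceramic pot", "plastic pot"],
    "soil": ["miracle-gro", "potting mix"],
}

def extract_entities(text):
    """Return {entity_type: matched_value} for every known value in text."""
    text = text.lower()
    found = {}
    for entity_type, values in ENTITIES.items():
        for value in values:
            if value in text:
                found[entity_type] = value
    return found

query = "How much is a snake plant in a ceramic pot with Miracle-Gro soil?"
print(extract_entities(query))
# {'plant': 'snake plant', 'container': 'ceramic pot', 'soil': 'miracle-gro'}
```

Pairing the extracted entities with the matched intent ("pricing") is what lets the system respond with a specific quote rather than a generic answer.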

Once you have gathered intents and categorized entities, these are the two key inputs you need to feed into the NLP platform to begin "Training".

[Image: NLP Presentation32.png]

In the example above, there are examples of ways in which NLP programs can be trained, from data libraries to messages, comments, and transcripts.

Providing expressions that feed into algorithms allows you to derive intent and extract entities. The better the training data, the better the NLP engine will be at figuring out what the user wants to do (intent) and what the user is referring to (entity).

Training data is often labeled and organized. Test data is a separate set of data that was not previously used as a training phrase, which is helpful to evaluate the accuracy of your NLP engine.  
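To illustrate why a held-out test set matters, here is a toy sketch: "train" a simple word-overlap matcher on labeled phrases, then measure accuracy on phrases it has never seen. All of the data and the matcher itself are made up for illustration:

```python
# Hypothetical labeled data: (phrase, intent) pairs.
training_data = [
    ("how much does this plant cost", "pricing"),
    ("what is the price of a fern", "pricing"),
    ("do you ship to my address", "shipping"),
    ("is shipping free on orders", "shipping"),
]
# Held-out test data: never used during training.
test_data = [
    ("what does a cactus cost", "pricing"),
    ("is your shipping free", "shipping"),
]

# "Train": collect the words seen for each intent.
vocab = {}
for phrase, intent in training_data:
    vocab.setdefault(intent, set()).update(phrase.split())

def predict(phrase):
    words = set(phrase.split())
    return max(vocab, key=lambda intent: len(words & vocab[intent]))

# Evaluate on the unseen test set.
correct = sum(predict(p) == label for p, label in test_data)
accuracy = correct / len(test_data)
print(f"test accuracy: {accuracy:.0%}")  # test accuracy: 100%
```

Because the test phrases were never part of training, the accuracy number reflects how the matcher handles genuinely new inputs, which is the point of keeping the two sets separate.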

In the Home Depot analogy, you start training the new hire from a level of zero knowledge, then you provide them with a large amount of historical data, tell them what it corresponds to, and put them out on the floor to see how they perform. At this point, the associate (or NLP system) has about 60-70% competency.

Training starts at a certain level of accuracy, based on how good the training data is, and over time you improve accuracy through reinforcement.

In practice, training material can come from a variety of sources to build a robust pool of knowledge for the NLP to pull from. If over time you notice a lot of people asking the same thing that your bot hasn't yet been trained to handle, you can set up a new intent for that question or request.

[Image: NLP Presentation5.png]

The next step, to fortify the NLP program's efficacy, is to incorporate NLP machine learning, which is essentially a feedback loop: you review how the NLP platform has behaved and validate its responses. Validating responses is Pavlovian in a sense; with positive reinforcement, the machine or NLP program begins to notice patterns and better understand how to interpret the language and inputs it is fed, getting smarter over time.

Over time, the Home Depot manager will observe how well the associate is doing, and either affirm the associate's performance, or in the case of questions they do not know how to answer, give them the right response to handle those requests, therefore strengthening their knowledge and increasing their level of competency.

 

This diagram outlines the basic process of gathering, inputting, and testing data to train an NLP program.

[Image: NLP Presentation4.png]

NLP Training in Practice

One of our successes with NLP and machine learning leveraged Google's Dialogflow platform to train our bot to answer pricing questions. The process included:

1) Building out a bot to collect data (Gathering user intent phrases)

Using open-ended, free-response question prompts in our bot conversations allowed us to collect information about user questions that a basic decision-tree bot was not always equipped to answer accurately. This bot always allowed users to “Ask me anything!” so that we could collect that data and parse through it for patterns or FAQs not included in the bot's initial structure.

 

2) Downloading all data (Understanding user intent)

The next step was to review all the data collected from the bot's free-text questions to better understand what users are looking for from the bot, and common questions that could be added to the bot's repertoire of knowledge. Part of bot building and NLP training requires consistent review in order to optimize your bot/program's performance and efficacy.

 

3) Organize & Label Data (Establish entities)

While going through the responses, it's important to categorize them based on the user's intent, especially since the same question or request can be worded and phrased in many different ways. After categorizing the data, it's much easier to come up with groups of entities that correspond to the different user intents and therefore contain the most pertinent information with which to train the NLP program. The most popular and most relevant intents are prioritized for the next step.
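Organized and labeled data might end up looking something like the records below before it goes into an NLP platform. The field names and values are hypothetical, not a required schema:

```python
import json

# Each record pairs a raw user phrase with the intent it was
# categorized under and the entities identified in it.
labeled = [
    {"text": "how much is a snake plant",
     "intent": "pricing",
     "entities": {"plant": "snake plant"}},
    {"text": "do you sell ceramic pots",
     "intent": "inventory",
     "entities": {"container": "ceramic pot"}},
]

print(json.dumps(labeled, indent=2))
```

Keeping the raw text alongside its labels makes it easy to review categorization decisions later and to spot intents that deserve more training phrases.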

 

4) Input into NLP Platform (NLP Training)

Once intents and entities have been determined and categorized, the next step is to input all this data into the NLP platform accordingly. It's easy to group responses by category as well.

We used Google Dialogflow, and we recommend using an API because these platforms have access to larger data sets that can be leveraged for machine learning.

After we entered our training data into Dialogflow, we connected the API to our bot using our Instabot webhook platform, making our bot more dynamic when responding to questions users have regarding pricing for our product.

You can try talking to the bot here, and test its limits! 

One of the ways this training process differs from the common chatbot-building process is that, by employing NLP and creating a hybrid bot based on intent data taken directly from the source (our users and intended audience), we get a jump start on seeing things from the user's POV. This way, we create bot conversations with some insider knowledge, continually employ machine learning to strengthen the NLP component, and review bot performance analytics to improve the bot.

 

Some Challenges when Employing NLP

While the pricing project outlined above has been largely successful, it's important to note that this technology is not perfect just yet. Since the technology is still relatively young and consistently developing at a rapid pace, there are some challenges when it comes to NLP, including:

Audio Speech Recognition- There are many words or phrases that sound similar depending on pauses and cadence.

e.g. Youth in Asia v. euthanasia

Context- Unless you have context, it is sometimes challenging to see the meaning in language.

e.g. “We saw her duck.” Is it a bird that belonged to a person? An observed female person ducking? A person named 'We' who saw her pet?

Parsing Language- Instances in which someone speaks ungrammatically but still needs to be understood.

For example, “Tank working” is the same as “The tank is working.”

Polysemy- Situations in which one word has several different meanings.

e.g. “I need some paper.” “I wrote a paper.” “I read the paper.”  

 

Language is a bit complex (especially when you're talking about English), so it's not clear whether we'll ever be able to train or teach machines all the nuances of human speech and communication.

Even super-famous, highly-programmed, celebrity bot Sophia gets a little flustered in conversation (or maybe she was just starstruck).

 

Luckily, Sophia has a team that is consistently working on enhancing her knowledge and communication capacity with machine learning, so rest assured these kinks will be worked out quickly; but hey, that's one of the benefits of this process!

 

If you want to test these steps out and train your own NLP program, there are some free NLP platforms you can use, including:

Google Dialogflow (our favorite thus far!), and Facebook's Wit.ai

And if you need help leveraging NLP within Instabot, feel free to contact our team at help@instabot.io

 

 

Interested in learning more about NLP and machine learning? Let us know what you'd like us to try next in the comments below!