A Little More Conversation, A Lot More Action: Conversational Interactions Double Conversion Rates
Having visited Graceland and being a huge fan of Elvis, it’s hard for me to say, but Elvis made some mistakes in his time. Among other things, if you watch one of his signature hits, “A Little Less Conversation”, you can spot a couple of errors in judgment:
- Several late-’60s dance moves
- The word "satisfactioning" (not a word)
- The lyrics (on multiple levels)
I’ll discuss just one of the ways the lyrics are incorrect: Elvis claims that “a little less conversation” leads to “a lot more action” and is “satisfactioning” (let’s go with it).
The Debate about Conversational Interactions
We had a similar debate on our team about product strategy and functionality. Part of our team felt that simple, clear bot conversations would outperform more complicated ones. By “simple”, I mean bots that have fewer “non-essential” interactions (e.g. “What’s your favorite color?”) and get to conversion goals much faster, similar to an interactive contact form (e.g. “What’s your company name? Great, would you like a demo?”). The argument was that simple conversations:
a) Make it easier to interact
b) Are less distracting (e.g. they avoid sending people to auxiliary pages that are not conversion-focused)
c) Get people to conversion goals faster, and hence improve contact capture
Admittedly, this seems intuitive and makes for a strong argument. Moreover, from a product perspective, it is easier to execute on, and seems to be the path of many bot-based products: they provide simple templates that are easy to replicate and are essentially interactive contact forms. And if the argument is true, why focus on building out more robust conversational bot functionality if the same goals can be accomplished by a simple bot, or, one step simpler, a push notification or pop-up form? In that case, it might make sense to just use those.
The other half of our team felt that the true power of bots is in conversation: the give and take of information is, from a user-experience standpoint, more satisfying (or, um, satisfactioning). Conversation also lets you collect more information, allowing you to better nurture users over time, especially on a site where people research options, products, and services. While a conversational bot is more complicated to create, the end-user experience would be more valuable. This argued for building a platform that could support sophisticated conversations.
This resulted in a bit of a fight:
Okay…so really more of a battle of the bots. It’s a fight of sorts (ahem).
The Battle (A/B Test)
We tested two types of bots on our own website, both created by the same chatbot architect and using the same language where content overlapped (e.g. when asking for a name, both bots said, “Great! And what is your full name?”):
Bot I: CONVERSATIONAL. This bot asks “non-essential” questions such as, “Is this your first bot?” and “Do you want to compare us to other platforms?”
Bot II: SIMPLE + DIRECT. This bot cut right to the chase, immediately asking for the user’s e-mail address and name.
We launched both bots from a call-to-action button marked “Get Started” on our homepage. This felt like a perfect place to compare the bots against each other (if anything, a little more advantageous to Bot II): theoretically, if someone taps “Get Started”, their true intention is to sign up and begin building bots, so they should be more likely to convert on a quick form that removes potentially distracting questions such as “Compare us to other bot platforms” and other auxiliary information.
We then decided to run each bot for two weeks at a time. During this period, we left all marketing tactics and budgets constant to minimize variation in website traffic.
NOTE: It is worth mentioning that this is far from a perfect scientific test. For example, a true scientist would change just one variable, not several. We also did not run the bots simultaneously, but in separate weeks. So while this would never cut it in a university research department (any research fellows who would like to follow up on this, please do!), I think you’ll see that the data still shows a clear trend, and a very interesting one.
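For anyone who does want to treat a test like this more rigorously, a standard starting point is a two-proportion z-test on the conversion counts from each period. The sketch below is a minimal, self-contained version; the visitor and demo counts in the example are entirely made up for illustration and are not our data.

```python
# Hypothetical sketch: a two-proportion z-test to check whether a
# conversion-rate difference between two bots is statistically meaningful.
# All numbers in the example call are invented for illustration.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z score, two-sided p-value) for conversions out of visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: 60 demos booked from 1,000 visitors vs 30 demos from 1,000 visitors
z, p = two_proportion_z(60, 1000, 30, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below a threshold like 0.05 suggests the gap between the two bots is unlikely to be random noise, though it cannot correct for the fact that the two periods ran on different weeks.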
Metrics for Success
The only way to decide the winner of a battle is to choose metrics for success. The metrics we continually track are: website traffic, engagement rate with the bot, leads with an e-mail address captured by the bot, and finally, demos booked and completed.
The table below lists these metrics, where each value is the 3-month average (before the date of the A/B test), used as a baseline constant.
The bot we would consider best is the bot that led to the most demos booked and completed.
Also, in an effort to share data without giving away too much company information, I’ll report increases and decreases relative to the 3-month average rather than raw numbers.
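As a concrete illustration of that reporting convention, each figure can be expressed as a percent change against its 3-month pre-test average. The metric names and numbers below are hypothetical placeholders, not our real data.

```python
# Hypothetical sketch of the reporting convention: express each metric as a
# percent change against its 3-month pre-test average (the baseline).
# Metric names and values are illustrative only.
def pct_change(value, baseline):
    """Percent change of `value` relative to `baseline` (positive = uptick)."""
    return (value - baseline) / baseline * 100

baselines   = {"engagement_rate": 0.20, "leads": 400, "demos_booked": 40}
test_period = {"engagement_rate": 0.25, "leads": 410, "demos_booked": 55}

for metric, baseline in baselines.items():
    delta = pct_change(test_period[metric], baseline)
    print(f"{metric}: {delta:+.1f}% vs 3-month average")
```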
Bot Battle Results
See the results in Table II, below. The first two weeks with Bot I (conversational) were pleasant, with a nice uptick in demos and demo completion rates. We then switched to Bot II (direct). Our inbound leads dropped a little but stayed fairly consistent; when it came to demos, however, we started to get concerned.
And five days in, our team started to get extremely worried about the number of demos. Not only had demos dropped 50% below our 3-month average, there was also a HUGE drop in people showing up to our demos.
We then went through a series of checks to make sure there was not another factor at play outside of the bot: Were all the webpages on the site up? Was there a drop in traffic? Did the bot break? Did our demo-booking platform go down?
We did all the routine checks and searched every possible variable, but things were otherwise running smoothly. And with the quantity of leads staying the same, we initially assumed it must be something else. (But that ol’ pesky marketing gut feeling was gnawing at my soul.)
To stay honest, I reminded myself of some classic decision-making pitfalls to avoid:
- A fear of change, which creates a strong bias for the status quo
- An excessive aversion to loss
- Misreading randomness, or seeing patterns that don’t exist
- Letting emotion override reason
- Seeking only information that confirms one’s opinions or decisions
With this in mind, I stayed strong and committed to a) finishing the experiment and b) making decisions based on data only. Once the second week of Bot II was complete, we went back to Bot I, and demos booked and demo completion rates immediately shot up again. The data was clear: conversational interactions have a huge impact on conversions.
At first glance, both simple and conversational bots appear to gather the same amount of leads, but the actual conversions after lead capture tell a much different story. Conversational bots far outperform simple bots in terms of conversions.
Why this happens is still unclear, but there are a couple of theories floating around our office:
- Conversational interactions build micro-trust with your prospects
- Conversational interactions inform prospects with useful information, making the conversion step easier
- People don’t like to be told what to do; they prefer optionality, which provides a feeling of power
- People have a distaste for transactional interactions (e.g. “give me your credit card”), so the act of “getting to know them” first softens the transactional element
Hence, when creating your bot strategy, pay close attention to how you are going to “talk” to your users, and explore how you can get to know them so they will provide organic and useful responses.
As always, if you have any questions, or would like to replicate this test on your own site, please reach out to us at firstname.lastname@example.org