<img alt="" src="https://secure.east2pony.com/208651.png" style="display:none;">

Qualitative Research at Scale: Using Chatbot-Enabled Surveys

As market researchers, we have access to a wide and ever-expanding toolkit. Still, our primary techniques tend to be either quantitative or qualitative. Both have their place, and often they get combined to help organizations get the insights they need. Typically, we combine these in stages – perhaps some exploratory qualitative discussion groups or ethnography followed by a quantitative survey. Or maybe the other way around – some follow-up in-depth interviews to help flesh out the "why" behind survey findings. While this usually works well, it adds to project timelines and budgets.

Phase 5 has recently had the opportunity to combine a quantitative survey with qualitative research at scale in a single research exercise using AI-enabled chatbot surveys. Like any research approach, this one has benefits and limitations. But let’s start by covering how it works.

Most closed-ended quantitative surveys include a few open-ended (OE) questions (e.g., “Please tell us why you were dissatisfied with your experience.”) designed to understand the “whys” behind what we are trying to measure in the survey. However, replies are often short, off-topic, or leave us wanting to know more (e.g., OE response: “too expensive”; the researcher left wanting to ask: well, by how much?).

We’ve used a couple of platforms, and most recently worked with nexxt intelligence. Their chatbot-enabled surveys provide “intelligent” probing of respondents in quantitative surveys to get more meaningful insights. Instead of blank open-ended questions, the chatbot can ask more tailored follow-ups. So if the respondent answers in an open-end that the service was too expensive, the chatbot can ask a follow-up such as “How much more did it cost?” Or if a respondent uses just one or two words, the chatbot may prompt with something like “Is there anything more you can tell me about that?”
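For readers curious about the mechanics, here is a minimal, purely illustrative sketch of the kind of follow-up logic described above. It is our own toy example with hypothetical trigger phrases, not nexxt intelligence’s implementation, which uses AI-driven probing rather than simple rules:

```python
# Toy illustration only: rule-based probing logic that mimics the behaviour
# described above. The actual platform's AI probing is far more sophisticated.
from typing import Optional

def choose_follow_up(response: str) -> Optional[str]:
    """Return a tailored follow-up probe for a weak open-ended answer, or None."""
    text = response.strip().lower()

    if text in {"", "i don't know", "idk", "not sure"}:
        return ("To be honest, I don't really know either... but could you "
                "share just a few thoughts, even if you're not certain?")

    if "expensive" in text or "cost" in text:
        return "How much more did it cost?"            # probe for specifics

    if len(text.split()) <= 3:                          # one- or two-word reply
        return "Is there anything more you can tell me about that?"

    return None  # the answer already looks detailed enough

print(choose_follow_up("too expensive"))  # -> "How much more did it cost?"
print(choose_follow_up("bad service"))    # -> generic elaboration probe
```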

Below are a couple of examples of how respondents who initially gave brief responses were prompted to elaborate:

Example #1

Respondent: “I don't know."

Chatbot: “To be honest, I don't really know either…but could you share just a few thoughts, even if you're not certain?"

Respondent: “I believe the [product] was very similar but at a lower cost."

Example #2

Respondent: “I just thought that it was not worth it."

Chatbot: “Is there anything you'd like to add?"

Respondent: “Yes, the cancellation was very difficult. I spent over 5 hours waiting for someone who answered and then he said he can't do it. I had to call someone else. It was a nightmare."

By coaxing out details that respondents would not have offered in their initial response, we are able to identify more granular themes and insights that might otherwise have been missed. In many cases, the technology got beyond the “I don’t know” open-end response to meaningfully engage the respondent and obtain a valid answer.

Below are two out of thousands of examples from a recent study:

Respondent: “I was disrespected, verbally abused and discriminated against multiple times when trying to make changes to my policies.”

Respondent: “I lost my cool, I was overwhelmed and frustrated. I’d had many recent and unexpectant expenses and hearing about a cancellation fee was the final straw.”

The ability of chatbots to elicit candid and meaningful feedback is not new. A similar technique was developed back in the 1960s by Joseph Weizenbaum at MIT, modeled on how psychotherapists might engage in conversation – asking generalized follow-ups, restating a respondent’s statement as a question, and so on. What amazed Weizenbaum was the degree to which human interlocutors would engage with the bot as if it were human. This willingness to anthropomorphize and engage emotionally with the chatbot has been termed the Eliza effect, after his original Eliza chatbot. (Check out this excerpt of Weizenbaum’s writing on the topic – the chatbot exchanges he quotes will be eerily familiar to researchers who have used this technology.)

We hypothesize that there is something else at play, though. While respondents willingly suspend disbelief and converse as if with a person, at some level they know there is no “person” at the other end of their conversation. That can permit even greater frankness, as there are no feelings to hurt.

Despite their advantages in quickly and efficiently eliciting detailed insights, these tools are not without their quirks. In some cases, they fail to pick up on cues that the respondent has no more to add. Where a human interviewer would understand contextually that the respondent has given all the detail they can, the bot may keep probing. For example:

Chatbot: “Ah, I see. Is there anything else you'd like to say about that?"

Respondent: “No."

Chatbot: “Really, nothing?"

Respondent: “No."

Chatbot: “Hmm…okay I'll give you one more chance, there must be something you can think of, right?"
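One way to curb this kind of over-probing, sketched below purely as an illustration of the idea (our assumption, not the vendor’s code), is to stop as soon as the respondent clearly signals they are done and to cap the number of probes per question:

```python
# Illustrative sketch only: a simple stop condition to avoid badgering
# respondents who have nothing more to add.
NOTHING_TO_ADD = {"no", "no.", "nothing", "nothing else", "that's all", "nope"}
MAX_PROBES_PER_QUESTION = 2

def should_probe_again(last_answer: str, probes_sent: int) -> bool:
    """Return True only if another follow-up probe is warranted."""
    if probes_sent >= MAX_PROBES_PER_QUESTION:
        return False                      # respect respondent fatigue
    if last_answer.strip().lower() in NOTHING_TO_ADD:
        return False                      # respondent has signalled they're done
    return True

print(should_probe_again("No.", probes_sent=1))  # -> False: stop probing
```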

In addition to probing on open-ends, nexxt’s software allows researchers to use qualitative techniques (including stimuli), such as the “tree-man” exercise we used in a recent “defector” study to help understand how survey respondents were feeling about their experience. Respondents select an image that best represents their feelings and are prompted to explain why. In a study on why people switched insurance providers, the replies provided a far more nuanced view of the emotions consumers felt when they decided to move their business elsewhere.

Sample response: “I was put out on a limb. I felt helpless. I wasn’t getting any help, I felt dejected and now I had to go search for another company to insure me. I was having life changes from a home to an apartment, from a town to a city. I thought this change from home to tenant insurance with your company was an easy switch – anything but.”

Some Considerations

When employing a chatbot for probing, it is important to be judicious in its application; it may not be the appropriate solution for all questions and audiences. If you apply probing to too many questions, you may exhaust and frustrate your respondents; too few, and you may miss important details.

In our experience, it is best to apply probing only where you have reason to expect rich, multi-thematic responses and where the nuances are meaningful to your research question. This will help keep respondents engaged and will align with their expectations of participation: they signed up to take a survey, not to participate in an in-depth interview. It is also important to be discerning about the audiences with whom you use a chatbot. The conversational, informal language may lend itself well to a consumer sample but could alienate expert segments.

The conversational nature of the chatbot probing means that there is a lot of unstructured data to deal with. The responses read more like mini-transcripts, so while you get richer replies, you need to dedicate time and experienced researchers to interpret (and, if desired, quantify) these open-ends.
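As a rough illustration of what quantifying these open-ends can involve, the sketch below tallies hypothetical keyword-based themes across verbatims. It is a deliberately crude toy example; real thematic coding, whether human or software-assisted, is far more nuanced:

```python
# Toy illustration only: tallying hypothetical keyword-based themes across
# probed open-ends. Not the vendor's analysis tool.
from collections import Counter

THEME_KEYWORDS = {                 # hypothetical themes and trigger words
    "price": ["expensive", "cost", "fee"],
    "service": ["wait", "rude", "help"],
    "cancellation": ["cancel", "switch"],
}

def code_response(text: str) -> list:
    """Return the themes whose keywords appear in a verbatim."""
    lowered = text.lower()
    return [theme for theme, words in THEME_KEYWORDS.items()
            if any(word in lowered for word in words)]

verbatims = [
    "The cancellation was very difficult and I waited for hours.",
    "It was very similar but at a lower cost.",
]
counts = Counter(theme for v in verbatims for theme in code_response(v))
print(counts)   # e.g. Counter({'price': 1, 'service': 1, 'cancellation': 1})
```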

Nexxt supplies a tool to help with analysis of the unstructured data. This analysis identifies overarching sentiments (e.g., negative or positive valence, higher-order and easily groupable themes), and the software is helpful in quickly sourcing verbatims for reporting. However, our assessment is that the thematic coding is not yet up to par with human coding, especially for more nuanced themes. For example, the tool grouped some verbatims nonsensically and misinterpreted others. While the software can be a helpful starting point, there is more value to be mined if timelines and budgets allow for the interpretive work of a skilled qualitative researcher.

Our Bottom Line

While we don’t yet see incorporating AI technology into surveys as a replacement for qualitative approaches, these tools do provide substantively more qualitative insight than typical closed-ended surveys. Contact us to learn more about research method options and to discuss which ones are right for your organization’s project.

Co-authors Christine Sorensen and Aleta Pleasant are members of Phase 5’s Innovation Practice.

Aleta Pleasant is a Senior Research Analyst with Phase 5. She holds a Master of Science and a BA in Psychology from the University of Guelph, where her research focused on understanding human behavior. She is skilled in both qualitative and quantitative research, and has extensive experience conducting complex statistical analyses and visualizing data.

A Vice President at Phase 5, Christine Sorensen is a veteran researcher with expertise in both quantitative and qualitative techniques. Passionate about client service, Christine has extensive experience on studies across a range of topics including brand and communications, interactive technology, customer satisfaction, and product development. Christine holds an MA in Communications and Culture from York-Ryerson Universities and a BA in Sociology from the University of Toronto.