The Power of Dialog for Stakeholder Feedback


As we develop efficient innovations in the field of stakeholder feedback & customer insights, let’s not lose sight of the importance of iterative questioning. I came across an article recently which sent me into a spin.

The article discusses blurring the lines between Quantitative Research (large amounts of structured data) and Qualitative Research (small amounts of unstructured data), yet I heard something different: an offer for clients to get answers even more quickly and inexpensively by replacing slow, expensive studies with quick-turnaround Agile Research – using just a single question, with text analytics to efficiently analyze the answers.

Ask a single question? Just one?

For me, the heart of obtaining quality stakeholder feedback research is dialog. Not taking answers at face value, but exploring them with respondents – which often unearths real surprises!

Our industry is developing rich ways of bringing dialog to other platforms through smartphone diaries, online ethnography and insight communities, to name a few. The key is creating the opportunity to follow up on answers in a personalized way. Not just asking “please tell us why” – but following up personally, e.g. “Hi Jenny, when you said xyz… I’m not sure if I understood what you meant. Can you please describe to me the last time that happened?”

Here’s an example: We can ask women why they don’t use a certain hair care product.

Many answer “I don’t have the time for this in my routine”. But we don’t stop there. We use dialog to find out where this impression of “no time” comes from. It could be that using the product is boring, and therefore feels like time wasted. Or that the results she gets aren’t striking enough to justify the time spent. Or that the process of using it is laborious, and something she doesn’t enjoy. Through discussion we learn it’s not a time issue, after all – other things are blocking her.

Or another example: We can ask what’s important in the choice of what people eat for breakfast.

Many answer “it has to be healthy”. But what is healthy? For some, it may mean low-calorie & low-fat processed breakfast cereal from the supermarket. For others, relatively high-calorie & high-fat freshly cooked bacon and eggs. Or locally-grown produce from the local farmers’ market. Or gluten-free. Or dairy-free… You get my point.

Without a follow-up discussion, we can only guess what “healthy” means, and we risk guessing wrong.

I don’t want to prematurely dismiss innovative ideas such as text analytics. Rather, I’d like to put in a plea for not losing sight of the value of dialog – and challenge us as an industry to keep personalized, interactive question-asking in the picture.

On a mission to find the "right" answer


Market research provides many answers – here are 3 steps to help research teams work together to get “the best” answer to research questions.

Who hasn’t been there: The moderator enters the viewing room following the focus group to be met by a chorus of “which concept won?” It’s hard to dampen the enthusiasm by replying “well, it depends…”

In our culture of fast and decisive decision-making, we are under increasing pressure to deliver quick, actionable answers. However, qualitative research isn’t programmed to deliver a yes or no, A or B answer, but rather to help us gain insights to determine which is the right way forward.

Here are 3 steps – before, during and after fieldwork – that can help research teams do just that.

1. Before Fieldwork: Clarify the questions behind the questions 

Research teams often come to me with a list of questions they’d like to ask potential customers. I always ask for detailed background information: Why do they want to know that? Which decision depends on it? What else do they already know? And most importantly, what will they do with the answer? This is critical to be able to adjust probing in interviews “on the fly”, in order to avoid wasting time exploring what the team already knows and missing potential surprises which may influence broader decision-making.

I also encourage research teams to reach out to their internal customers during this phase. Product engineers can shed light on which designs have already been tried and rejected, or the advertising team can explain guidelines which govern the choice of color schemes or the placement of taglines.

Example: Concept evaluation for a new medical treatment - knowing limitations

Prior to the interviews, I spent time with the product team discussing details of the treatment, how it compared to others on the market and results of clinical testing. During the interviews, patients expressed distrust in a broad claim, stated without numbers. I knew the client was not in a position to state those numbers – so rather than returning with the finding “patients would like to see numbers to back this up”, I explored which elements could potentially strengthen the broad claim, which features or benefits could be more compelling in its place, or if it would be better to leave the claim off entirely rather than state it in broad terms.

2. During Fieldwork: Ensure we’re asking the right questions

During a study, teams often become very focused on specific details and lose sight of what else may be influencing customers’ behavior. This increases the risk of only getting answers to those specific questions – but potentially missing factors which are even more important! During interviews, I always start new topics with general, non-specific questions before drilling down into more detail – to allow room for discovering new, unexpected factors.

Example: B2B Product Satisfaction – leaving room in the discussion guide for surprises

During the interviews, I asked respondents to walk us through how they used the product – before going through a detailed list of features. We were surprised to learn dissatisfaction wasn’t driven by any specific feature, but by the fact that many users disregarded the instructions and didn’t use the product correctly! Had we only focused on the feature list, we would have missed this huge insight – and the recommendation to revise the product based on how professionals actually used it, which was very different from how it was designed to be used.

3. After Fieldwork: Weigh potential answers in the context of implementation

Across qualitative interviews, different respondents express different preferences or behaviors. We also see polarizing reactions – some people completely love, and others absolutely hate, the same thing. That’s why we can rarely name the clear “winner” – or the clear “right answer” – immediately following fieldwork.

It’s important to evaluate findings in the context of the overall strategy, in order to determine which feedback to weight as more relevant. Taking a pragmatic approach can help teams choose which features to implement or which creative to develop into the final advertising campaign.

Example: Usability testing – evaluating responses in the context of the intended target audience

I recently helped an online retailer choose a design for their new shop re-launch. The designs were very polarizing: The one which was preferred by young, savvy shoppers was strongly rejected by other users, whereas the design which was least often rejected was not rated as very new or different. In choosing the “winner”, we helped the team understand which type of users they would attract or alienate with which design – so that they could choose as “winner” the design which most closely aligned with their target audience and their brand image.

In closing: These three steps can help you engage the whole research team to respond to the “which concept won” question with confidence – as you’ll be able to base your recommendation on:

  • Framing your findings within what the company knows, or has already decided
  • Including surprises which either strengthen existing hypotheses, expose fallacies or illustrate potential new strategies
  • Outlining which option best aligns with strategy, future product roadmaps or target audiences

Behavioral Economics in Stakeholder Feedback: My own little Nudge


Behavioral Economics reminds us to go beyond asking people in interviews why they do what they do: Invest time to explore the mental models and frameworks they have surrounding categories, brands and products – so we can help clients better understand what truly motivates people to make the decisions they do.

I recently had the pleasure of attending an ideation workshop with a team of researchers, clients and experts. Our mission was to start with what seemed like an intimidating amount of insights from recent qualitative interviews, and use principles of Behavioral Economics to identify potential nudges for consumers in our client’s category.

This reminded me of some fundamentals I’ve been using in interviews for years. But as one of the workshop attendees playfully pointed out – Behavioral Economics has finally legitimized many of those conclusions we intuitively draw ourselves, based on what we’ve learned through countless interviews we’ve conducted.

So here’s my nudge to myself:

Be mindful of investing more time in exploring and understanding the environment surrounding the decision or behavior we want to learn about, and spend less time asking specifics about why.

Behavioral Economics teaches us that we often rely on mental models, using connections between things we know, understand or admire to minimize the effort of decision-making. These simplified, but effective stories about how the world works can make what we see in advertising or marketing seem plausible to us, or not.

Although it’s tempting, asking respondents why they do something isn’t always a fruitful line of questioning in qualitative interviews. People often don’t know why they decided for a certain product and against another – or they may entertain us, and themselves, with what seems in retrospect like a logical explanation for the choices they made.

So when we use qualitative interviews to learn why somebody did or bought something – we should be seeking to learn more about the mental models they used to make that decision. We should be asking, for example: 

  • Who: Who else uses these kinds of products? Who are their role models, or who would push them away from a certain brand or product? 
  • When: What is the ideal “moment” for consuming this brand or product? What frame of mind, what phase of their day, or their life, are they in? What other things are going on at that moment? 
  • How: What do they do with this product? How does that impact which features and functionality are important? 
  • Where: Is this something they consume or do in public? Or in private? At work, at home, on holiday? Where are relevant places where brands can find and engage with them?

When we design interviews to learn more about the mental models or frameworks our respondents have when it comes to products and brands, we can help our clients go a long way towards understanding what their true motivations are for making the decisions they do.

Over the years, I’ve worked for many exciting companies!


Agencies: Acuity, Arundel Street, Burke Inc., Boston Consulting Group, Customer Value Systems, Deep Brain, FDY Consulting, Homburg & Partner, Illuminas, Insight PD, Insites Consulting, IPSOS, Innosight, Krämer Marktforschung, Medicys, Phase 5, Planworx, Quixote Group, Schlesinger Associates, Verus Group

Companies: 3M, Abbott Diabetes Care, AMD, Bayer, Baxter, BenQ, BMW, CareFusion, Dell, Eli Lilly, Engage Research, E-Plus, Feelgood Active Aging, Fresenius Kabi, Hilti, Imation, Intel, Johnson & Johnson, Kao, Payback, P&G, Pfrimmer Nutricia, Lego, Lenovo, Lifescan, Medtronic, MSN, Nestle, Roche, Sanofi Aventis, Shutterstock, Thomson Reuters, Zarges

Public sector: Global Affairs Canada, Government of Ontario, Visit Wales