Marc, I've long advocated for the creation of premium-quality access panels, but until now, I’ve struggled to see a viable path to achieve that goal. Your proposed middle-ground solution may indeed offer a breakthrough — and I believe there’s a compelling variation worth considering.
Conversational research is rapidly gaining momentum. While it promises to be far more engaging than traditional surveys, most current implementations simply repackage conventional questions into a chatbot format. This mirrors the early shift from CATI to online surveys, where methodology changed but the user experience did not.
However, the true potential lies ahead: as researchers begin to personalize conversational interfaces using profile data or respondent personas, we’ll see meaningful gains in engagement. These adaptive conversations can transform static surveys into dynamic dialogues.
Crucially, this form of qualitative input is ideal for training AI models. And while the increased cognitive effort required from respondents will necessitate higher compensation, it comes with the added benefit of rich, multidimensional data — including voice, video, and behavioral cues — that can serve as powerful quality indicators.
Thanks Frank. I think conversational research is a great area of exploration. I think respondents will love it, but it shies away from what marketers ask for, which is formulaic, dense survey content. I think the first group that can turn conversationally collected survey data into scalar results that are stable over time will reap rewards.
I recently saw a presentation from a company called GroupSolver that really stuck with me. Their approach starts with qualitative probing—open-ended responses that encourage rich, nuanced input. Then it seamlessly transitions into quantitative insight as respondents begin rating each other’s ideas. It's a smart blend of qual and quant that feels organic rather than forced.
This kind of self-evolving survey—where participant input shapes the direction and structure of the research in real time—feels like the future. I don’t think we’re far off from seeing this become standard practice.
Another one to look at that's been brought to my attention is https://hootology.com/ based in Brooklyn. I haven't looked closely, but they seem to be taking a similar approach to GroupSolver.
Great stuff Marc; you nailed it. I recently wrote about the issue as well but took an even bigger view; many of the problems you've pointed out impact the entire data ecosystem, so it's more than just research - in the era of AI it's virtually everything at risk. It's time to fix this; we cannot afford not to. I'd welcome your take on my post; maybe between us (and a few others) we can make enough noise to force the issue? https://www.linkedin.com/pulse/data-fraud-economy-wake-up-call-insights-industry-lenny-murphy-fg72e/
I read through your post. Great summary and lots of solutions to pursue. My worry is that the MR space has for so long been performative in its concern about what's killing the industry. It's easiest to just check the fraud box by writing a check to Bob Fawson, or Mikhel, for some fraud solution when the problem is much deeper.
I think the solve will likely come from the other direction - the synthetic insights providers. Quilt and Evidenza are growing like weeds (faster than any other company in the space), and that shows the power they're going to end up wielding. After they have their 10th skeptical conversation with a Chief Research Officer, they're going to look at the panel companies and either buy or build their way into the market. The good news is that at the same time they're going to be stealing share from the big MR companies, which will put those companies into panic mode, and then, with enough blueprints such as those you and I have shared, maybe they'll deal with the innovator's dilemma.
Agreed on all fronts, although I do worry about the larger systemic data fraud issues that can contaminate the non-sample approaches. I think GIGO is an even bigger potential risk in the era of AI, synthetic personas, etc... That said, I am aware of several "discussions" happening that could bring together AI-first solutions and high-quality 1st party data/panels to ensure the data supply chain can be managed for quality at the source. The market is speaking, just not to the usual suspects (although they are trying to get in on the conversation!).
We should catch up and exchange notes!
Sounds like a plan. I’ll drop you a note over email