They say that you never really know you’re in a recession until you can look backwards and see what happened. Likewise, I think we’ll never really know when the consumer panel industry died until we can look back and see what happened. Don’t get me wrong, I think we have passed the starting line, but the decline has the potential to be long-lived. We won’t be able to say anything with authority until 5-8 years down the line. In the meantime, I’m curious to see who has the ability to evolve and who’ll end up on the trash heap of history.
Before jumping in, it's only fair to be clear about what we’re talking about here: the survey access panel industry. Not every panel. Not the special-purpose, high-commitment panels like Nielsen’s people meters or Circana’s receipt-scanning armies. Those panels have their own drama to deal with, sure. This is about the access panel world, where people get pinged for online surveys, earn a buck or some points, and the industry turns that into "insights."
The survey access panel industry is massive. Global numbers are tricky to pin down because most players are private or folded inside holding companies, but conservative estimates peg the market at several billion dollars a year. In the U.S. alone, major players like Dynata, Toluna, IPSOS, Kantar, Cint, PureSpectrum, and dozens of niche networks churn out millions of completes a day. And that’s not even counting the exchanges, aggregators, and ghost suppliers that make up the long tail.
By any measure, that's a big industry, and in case you haven’t noticed, it's in trouble.
A Dumpster Fire of Issues
If the access panel industry were a house, the foundation would be cracking, the roof leaking, and someone just lit a match in the living room.
Let’s walk through the problem list:
Opaque Supply Chains: Who's taking your survey? Good luck finding out. Panels source from panels who source from exchanges who source from... no one knows. And at the end of that chain? You'll often find panel companies that exist solely to supply exchanges, with zero direct relationship with the brands buying the data. That lack of connection means they face no accountability for the quality or authenticity of their respondents; they're just slinging completes into the void, hoping no one asks too many questions.
Self-Selection Bias: Panels are made up of people who choose to be in them. That means your "average consumer" is often anything but. And even panels that boast about their "high quality" respondents are usually finding those people through affiliate partnerships, referral programs, or recruitment schemes that tend to attract a very specific type of participant: people already predisposed to engaging with surveys. A 2009 study in Public Opinion Quarterly found that long-term panel participants scored significantly higher on measures of conformity and conscientiousness than the general population. So, while you might think you know your panel, you can be pretty sure it doesn’t reflect the general population in a meaningful way.
Price Pressure: Buyers want cheap sample. Sellers want margin. The result? Corner cutting, lazy targeting, and dropping quality standards. And here's where it gets performative: everyone talks about wanting quality, but data is invisible; it doesn't come in a pretty box you can inspect. It's easy to say you value accuracy, but when the price tag drops and the data still 'looks fine' in the dashboard, the temptation to trade down is strong. Quality is a materials cost, not a visible feature of the final product. It's the same logic behind choosing store-brand Fruity O's over Froot Loops. You know the ingredients aren't identical, but hey, it's cereal, it tastes close enough, and it costs half as much. Out of sight, out of mind.
Boring Surveys: Most surveys are still designed like it's 1999. Long. Ugly. Clunky. No wonder dropout rates are sky-high. Back in the day, surveys were run through RDD (random digit dialing) on the phone. And because there was a real human voice on the line, respondents felt guilty hanging up, which meant you could get away with 30- or 40-minute monstrosities. But when those same surveys got ported online, nobody stopped to ask, "Wait, will people actually want to do this?" Spoiler: they don’t. Especially Gen Z, who'd rather scroll TikTok than wade through a soul-crushing battery of scale questions.
And thanks to research productization, the same cookie-cutter surveys are sent to the same people again and again, just swapping out the brand or product. That trains professional respondents to game the system. Say I know five yogurt brands? Cool, now I'm on the hook for 10 follow-up questions on each one. Next time? I might just "remember" one brand so I can get to my reward faster. It's Pavlov meets capitalism, and it's killing data quality.
Demographic Skew: Panels tend to overrepresent older, female respondents, the folks most likely to have time, patience, and an opinion. Likewise, they underrepresent men, younger populations, and big swaths of higher-income earners. But it goes deeper than age or gender. As previously mentioned, panels attract "joiners": people with personality traits inclined toward participation, routine, and structured activity. So even if the demographics look balanced, the psychology of your sample is still skewed. You're not just getting average people; you're getting the people who love being asked questions.
Fraud & The Incentive Trap: And don’t even get me started on incentives. Micro-payments and points-for-prizes systems might seem efficient, but they create short-term thinking. Respondents chase quantity, not quality. Fraud becomes lucrative, and worse, it's growing. According to the Insights Association, up to 30% of responses in some survey panels are estimated to be fraudulent. A 2022 report by CloudResearch found that 95% of surveys conducted through online panels contained at least some level of fraudulent activity, ranging from bot participation to identity misrepresentation. Fraudulent respondents are increasingly sophisticated, using tools like local proxies and AI-based click farms to bypass basic quality checks.
But fraud isn’t just about bots or AI. Sometimes it's about survival economics. If you're in Venezuela, where the average monthly income is around $230, earning $50 a week slogging through boring U.S.-based surveys becomes an attractive side hustle. Respondents in lower-income regions may misrepresent location, identity, or behavior just to qualify. And can you really blame them? The system incentivizes it. This isn’t just a minor annoyance. It fundamentally breaks the trust model of survey research. And when you pair that with boring, repetitive surveys and an incentive model that rewards speed over accuracy, you're just begging people to cheat.
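To make the "basic quality checks" point concrete, here's a minimal sketch of the kind of screening most panels run, flagging speeders, duplicate devices, and straightliners. The column names, thresholds, and data layout are my own illustrative assumptions, not any vendor's actual pipeline; the point is that checks this simple are exactly what proxy-equipped fraudsters have learned to sail past.

```python
# Minimal sketch of common completes-level quality checks.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

def flag_suspect_completes(df: pd.DataFrame,
                           min_seconds: int = 180,
                           grid_cols: list[str] | None = None) -> pd.DataFrame:
    """Add boolean flag columns for common low-effort / fraud signals."""
    out = df.copy()

    # Speeders: finished far faster than a human plausibly could.
    out["flag_speeder"] = out["duration_seconds"] < min_seconds

    # Duplicates: the same device or IP completing many times.
    out["flag_dupe_device"] = out.duplicated("device_fingerprint", keep=False)
    out["flag_dupe_ip"] = out.duplicated("ip_address", keep=False)

    # Straightliners: zero variance across a battery of grid questions.
    if grid_cols:
        out["flag_straightline"] = out[grid_cols].nunique(axis=1) == 1

    flag_cols = [c for c in out.columns if c.startswith("flag_")]
    out["suspect"] = out[flag_cols].any(axis=1)
    return out
```

A respondent routing traffic through a residential proxy, with a fresh device fingerprint and a plausible completion time, trips none of these flags, which is why the fraud numbers above keep climbing.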
The Structural Problem
Here’s the kicker: non-panel sampling methods have been shown to outperform access panels. Studies comparing panel recruits to methods like random intercepts, social media recruitment, or even survey walls often show that non-panel respondents look more like the general population and give higher-quality data. The data is fresher, the answers more thoughtful, the bias smaller.
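When studies say one source "looks more like the general population," they typically mean its demographic proportions sit closer to census benchmarks. Here's a minimal sketch of that comparison, average absolute deviation from benchmark, with entirely made-up numbers for illustration:

```python
# Sketch of a representativeness benchmark: mean absolute deviation of
# sample proportions from census targets. All figures below are invented.

def avg_benchmark_error(sample: dict[str, float],
                        benchmark: dict[str, float]) -> float:
    """Mean absolute difference (in proportion points) across categories."""
    keys = benchmark.keys()
    return sum(abs(sample[k] - benchmark[k]) for k in keys) / len(keys)

census = {"18-34": 0.29, "35-54": 0.33, "55+": 0.38}
access_panel = {"18-34": 0.18, "35-54": 0.30, "55+": 0.52}  # skews old
intercept = {"18-34": 0.27, "35-54": 0.34, "55+": 0.39}     # closer fit

print(f"panel error:     {avg_benchmark_error(access_panel, census):.3f}")
print(f"intercept error: {avg_benchmark_error(intercept, census):.3f}")
```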
If access panels are so riddled with problems, and if validated alternatives like random intercepts and social recruitment routinely deliver better, more balanced results, why hasn’t the market evolved? That’s the frustrating paradox. We know what’s broken. We know how to fix it. And yet the system stays stuck, mostly because inertia is easier than change. The dysfunction is visible, but the data still lands on a dashboard, and as long as the charts keep moving in the right direction, too many buyers are happy to look the other way.
Before we get into client behavior, there's a big, structural reason the market hasn't shifted: scale. The more innovative approaches (random intercepts, survey walls, social recruitment) often exist as part of specific research methodologies or tools, not as stand-alone sample sources. That makes them harder to scale up or swap into a traditional research study. Meanwhile, the big research firms (think IPSOS, Kantar, GfK) have sunk massive costs into building and maintaining access panels. These companies need to monetize those panels to maintain margins. So, what do they do? They package them inside broader research solutions, push them to brands, and defend their quality and applicability even when better options exist.
The Client Conundrum
This ties back to something I wrote in a previous piece, "All Market Research is About Choices," where I laid out the three fundamental types of research: to understand, to adjudicate, and to track. Of those, tracking is where the real money is. It's the annuity product, a predictable, recurring source of revenue that's been funding research departments and agency P&Ls for decades. But here’s the rub: the clients paying those annuities hate it when the data in those trackers changes, especially when those changes are driven by the sample source.
Many clients are stuck. They’re using trackers with fixed source blends written in stone years ago. They’re terrified of breaking a trend line, even if that line is already warped beyond reason. Everyone in this business has a story of a client who knows their sample is garbage but refuses to change it.
It’s not just inertia. It’s fear. "What if the data moves and I have to explain it to the CMO?" Especially when the CMO's variable compensation is tied directly to those tracker results. A sudden dip, even if it's caused by a cleaner, more representative sample, can set off alarms across the org chart.
There’s also a skill and priority gap at play. Many buyers of insights work aren’t digging into sampling methods or evaluating representativeness. They’re often project administrators, managing requests from internal teams and just trying to get answers quickly and cheaply. What they want is a proposal that sounds credible, not a debate over methodological nuance. Vendors know this. And the very vendors clients look to for research leadership, many of whom are sitting on massive sunk costs in panel infrastructure, have every incentive to keep the broken model alive and kicking.
From Craft to Copy-Paste
Sample design isn’t the only thing clients turn to vendors for; survey design is also on the menu. But that, too, has fallen off. Survey design used to be a craft. You learned from mentors. You argued over question wording; academic journals ran articles about the validity of scales with and without a midpoint. You thought about respondent cognitive load and preventing instrument bias. Now? It’s a copy-paste from the last tracker. Slap on a five-point scale and call it a day.
Part of the reason for this decay in quality is structural. The massive global expansion and consolidation of MR firms through the ’80s and ’90s turned the vendor account management function into something far more administrative than strategic. Today’s research professionals at a vendor are more likely juggling fielding schedules and managing quotas across 80 markets than sweating over the cognitive load of a 30-question grid. There’s barely time to optimize survey design when your job is to keep the engine running, on time, and on budget.
How does this relate to access panels? Bad design alienates good respondents and encourages fast-click fraud. It’s a vicious cycle: bad surveys attract bad respondents, and bad respondents teach us the wrong things.
Exchanges: Blessing or Curse?
I know, I can hear some of you asking... what about the exchanges? Exchanges ushered in a wave of technical innovation that the market desperately needed. They modernized an industry that was still limping along on voicemails and email invites, replacing it with APIs, real-time routing, and programmatic infrastructure. For the first time, survey supply could be automated, standardized, and scaled, giving rise to the now-ubiquitous concept of "sample liquidity." This shift enabled dynamic targeting, real-time feasibility checks, and instant project setup across geographies. In short, they brought the tech stack up to speed with the rest of the digital world.
But with that modernization came an unintended consequence: a race to the bottom. Exchanges function like marketplaces, and in marketplaces, price often wins. The default trading logic rewards lower-cost suppliers with a bigger share of completes. If you're a supplier willing to undercut your competitors by a few cents per complete, congratulations, you just got more business. Quality? That’s someone else’s problem downstream. The result is a perverse incentive system where vendors who cut corners or flood their panels with low-quality traffic are rewarded, not penalized. J.D. Deitch has made a similar point in his ebook The Enshittification of Programmatic Sampling. Programmatic sample. RTB for humans. What could go wrong? Well, it turns out, quite a bit.
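To see why "price often wins" is structural rather than a moral failing, here's a toy model of a naive exchange allocator: sort suppliers by cost per complete and fill the order from the cheapest up. Supplier names, CPIs, and capacities are invented, and real exchanges are more sophisticated, but notice that quality never enters the objective.

```python
# Toy model of naive price-first exchange routing. All values invented;
# the point is that quality appears nowhere in the allocation decision.
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    cpi: float     # cost per complete, in dollars
    capacity: int  # completes the supplier can deliver

def route_completes(suppliers: list[Supplier], demand: int) -> dict[str, int]:
    """Allocate `demand` completes to suppliers, cheapest first."""
    allocation: dict[str, int] = {}
    for s in sorted(suppliers, key=lambda s: s.cpi):
        if demand <= 0:
            break
        take = min(s.capacity, demand)
        allocation[s.name] = take
        demand -= take
    return allocation

suppliers = [
    Supplier("careful_panel", cpi=2.50, capacity=300),
    Supplier("midtier_panel", cpi=1.20, capacity=400),
    Supplier("ghost_supplier", cpi=0.40, capacity=1000),
]
# The cheapest (and least accountable) supplier absorbs the whole order.
print(route_completes(suppliers, demand=500))  # {'ghost_supplier': 500}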
This Thing Is Broken
The access panel industry is cracked at every level. From sampling to design, from incentives to buyer behavior, it's a system that limps along only because everyone's afraid to stop.
One might think the DOJ indictment is a wake-up call and turning point for the industry, but I doubt it'll be more than a blip. Fraud has been in the headlines of MR conferences for years, and the only thing we'll see in the wake of the DOJ trial is panel and market research companies flocking to LinkedIn to espouse how their approach to panel is better. We're bound to see distancing, not differentiating; after all, it's not like the sunk costs or corporate incentives have changed.
Fraud, fatigue, and fake data aren’t temporary glitches. They’re symptoms of a model past its sell-by date. The panel industry as we know it isn’t evolving. It’s unraveling.
So, What’s Next?
The unraveling of the traditional access panel model isn’t just a slow fade; it’s happening against the backdrop of an existential threat: the rise of synthetic research firms and alternative collection ecosystems. These aren't side projects or experimental labs; they're full-blown, investor-backed companies built to sidestep the entire panel paradigm.
Companies like Evidenza, Quilt.ai, Delphi, RealEyes and others offer research solutions that don’t rely on fielding surveys to real humans in the traditional sense. They use synthetic personas, behavioral modeling, and AI-generated respondents to simulate market reactions at scale. Meanwhile, players like n-Infinite and Fairgen are reinventing how data is collected, using enhanced AI systems to boost samples or impute missing data for sparse datasets. Leveraging these systems means no longer chasing the last 25 respondents for a survey, or in some cases even worrying about a survey at all.
These companies don’t just offer a different tool, they offer a fundamentally different cost and scale model. And because they’re not dragging around the same panel-based overhead, they’re far nimbler. They don’t have to protect an outdated asset; they’re building from scratch. That’s a major advantage in a market ready for change.
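For intuition on the "impute missing data for sparse datasets" point above: the actual systems at firms like Fairgen and n-Infinite are proprietary, but a generic, textbook stand-in is model-based imputation, where each question is predicted from the answers that do exist. A minimal scikit-learn sketch on fake data:

```python
# Generic model-based imputation as a rough stand-in for "boosting"
# sparse survey data. NOT any vendor's actual method; illustration only.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Fake survey matrix: 200 respondents x 5 numeric questions,
# with ~30% of answers missing (skipped or never asked).
X = rng.normal(size=(200, 5))
X[:, 1] = 0.7 * X[:, 0] + 0.3 * rng.normal(size=200)  # correlated items
mask = rng.random(X.shape) < 0.3
X_sparse = X.copy()
X_sparse[mask] = np.nan

# Each question is modeled as a function of the others, iteratively.
imputer = IterativeImputer(random_state=0, max_iter=10)
X_filled = imputer.fit_transform(X_sparse)

print("mean abs error on imputed cells:",
      np.abs(X_filled[mask] - X[mask]).mean().round(3))
```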
The Middle Way
On the one hand, we have the establishment players: the panel and big-MR complex. On the other, we have the synthetic players, smaller but nimbler in execution. Who's going to win the day?
I'd argue that neither should win outright, because both have merits. That's the case with most methodological debates. The ideal path is often right down the middle: a hybrid approach that takes the best of what both solutions offer. We've done this before in research with things like hybrid audience measurement and data fusion, and there's no reason we can't do it again.
Synthetic solutions scale phenomenally. They are generative in nature; they easily construct answers to the questions they receive, but those answers need to be grounded in a robust training dataset. Specifically, a training dataset based on human data, the higher quality the better. Panels don't scale like synthetic, but as we've seen with many specialty panels, with money, time, and effort you can create high-quality human feedback panels that could generate the data needed for synthetic solutions to succeed.
Today, panels are trying to compete on scale; it's the "I can fill your n=500" game. I believe competing this way is headed for decline. Sure, the traditional MR survey will always have a role, but for the big solutions, access panels will never be able to keep up with the scale and speed of synthetic solutions for insights generation. Chasing sample size is a lost cause. On the other hand, synthetic is generative: it needs to be validated, ideally by humans, and to generate insights it needs training data, ideally high-quality human training data.
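One simple way to operationalize "validated by humans": compare the answer distribution a synthetic model produces against a human holdout panel on the same question, for example via total variation distance. A minimal sketch; the distributions, labels, and pass/fail threshold are all illustrative assumptions.

```python
# Sketch of validating synthetic output against a human holdout panel.
# Distributions and the acceptance threshold are illustrative assumptions.

def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    """Total variation distance between two categorical distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Share choosing each answer on a 5-point purchase-intent question.
human_panel = {"def_not": 0.10, "prob_not": 0.15, "maybe": 0.30,
               "prob_yes": 0.30, "def_yes": 0.15}
synthetic = {"def_not": 0.08, "prob_not": 0.17, "maybe": 0.28,
             "prob_yes": 0.33, "def_yes": 0.14}

tvd = total_variation(human_panel, synthetic)
print(f"TVD = {tvd:.3f} -> {'pass' if tvd < 0.10 else 'recalibrate'}")
```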
There's a future version of the panel industry where panels are fixed-cost assets used not on a project-by-project basis but as an R&D cost to validate synthetic outcomes and create the training data to feed the models. In that version of the future, panel size still matters, given that bigger panels create more training data and better validations. But as an R&D asset, the incentive for creating a panel changes, and suddenly two things come to the forefront as critically important: quality and panelist experience.
Companies that treat a panel as an R&D cost are going to want to make sure that investment is paying off with great data. This is the opposite of the current transactional access panel model, which treats panels as COGS on a specific client study; in that model, it’s all about monetizing panelists as fast as possible. R&D panels will instead be monetized across the full line of products, and they will be more representative, more valuable, and their participants will feel they're receiving better value for their time.
An R&D panel is not there to deliver the n=500 for a client study; it's there to create the data that makes the models work and to validate those models. Panelists reap the rewards of researchers designing more engaging and tailored data collection experiences, and they're no longer presented with a sea of survey opportunities and routers to navigate. They'll receive surveys they qualify for, are interested in, and feel fairly compensated for. Without the pressure of the access panel being front and center in the client deliverable, these panels will become true assets, treated and managed well and used to drive the future of insights.
In this post-access-panel world, owning a panel will still be a status symbol and a barrier to entry. The real question is who'll get there first. Will the synthetic suppliers secure the funding or partnerships to invest in global R&D panels, or will the big MR firms use their panels to fuel the creation of their own synthetic offers? The jury is out, but even if it's not yet obvious, the race is on.