<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Grey Matter Unloaded]]></title><description><![CDATA[Innovation / Product / Data Guy. Playing with Ad-Tech, Decentralization, Privacy, GenAI, Decision Intelligence & M/L at the edge.]]></description><link>https://www.greymatterunloaded.com</link><image><url>https://substackcdn.com/image/fetch/$s_!0vC9!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65a45860-7741-4117-b0cf-08a6172b4649_600x600.png</url><title>Grey Matter Unloaded</title><link>https://www.greymatterunloaded.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 09 Apr 2026 10:58:59 GMT</lastBuildDate><atom:link href="https://www.greymatterunloaded.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Marc Ryan]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[greymatterunloaded@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[greymatterunloaded@substack.com]]></itunes:email><itunes:name><![CDATA[Marc Ryan]]></itunes:name></itunes:owner><itunes:author><![CDATA[Marc Ryan]]></itunes:author><googleplay:owner><![CDATA[greymatterunloaded@substack.com]]></googleplay:owner><googleplay:email><![CDATA[greymatterunloaded@substack.com]]></googleplay:email><googleplay:author><![CDATA[Marc Ryan]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Rumors of the Death of Market Research are Greatly Exaggerated]]></title><description><![CDATA[Thinking through your insights career in the age of 
AI]]></description><link>https://www.greymatterunloaded.com/p/rumors-of-the-death-of-market-research</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/rumors-of-the-death-of-market-research</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Thu, 11 Dec 2025 16:41:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7OfY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7OfY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7OfY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png 424w, https://substackcdn.com/image/fetch/$s_!7OfY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png 848w, https://substackcdn.com/image/fetch/$s_!7OfY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png 1272w, https://substackcdn.com/image/fetch/$s_!7OfY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!7OfY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png" width="978" height="660" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/64780337-e406-46fd-adc4-c584de4b44a4_978x660.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:660,&quot;width&quot;:978,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:886243,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/181335827?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7OfY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png 424w, https://substackcdn.com/image/fetch/$s_!7OfY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png 848w, https://substackcdn.com/image/fetch/$s_!7OfY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png 1272w, https://substackcdn.com/image/fetch/$s_!7OfY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64780337-e406-46fd-adc4-c584de4b44a4_978x660.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>It&#8217;s hard to scroll through LinkedIn these days without seeing another post declaring that AI has finally killed market research. You&#8217;ll see demos of <a href="https://x.com/aiwithmayank/status/1998694868651212833">Google&#8217;s Gemini</a> or a new ChatGPT feature, followed by the breathless claim that the insights industry is officially cooked.
If you&#8217;re a professional in this space, it&#8217;s enough to give you a low-grade hum of anxiety.</p><p>My advice is that you ignore most of that noise.</p><p>What these posts call &#8220;market research&#8221; is often just desk research, the work most enterprises expect their employees to engage in when building business cases or strategies. It&#8217;s the tedious task of sifting through public data, press releases, and syndicated reports to find an answer that&#8217;s already out there. And yes, AI is making that task ridiculously easy. But let&#8217;s be honest, that was never the core business of the professional insights industry. It&#8217;s a problem of outsiders using vocabulary to describe the industry in a different way <a href="https://www.greymatterunloaded.com/p/meet-the-five-families-of-insights">than we&#8217;re used to</a>, not a career-ending crisis.</p><p>The real story is far more nuanced. AI isn&#8217;t killing the insights profession; it&#8217;s unbundling it. The true impact isn&#8217;t replacement; it&#8217;s creating <a href="https://www.greymatterunloaded.com/p/your-ai-vendor-cant-save-you">scale</a> and fundamentally restructuring how insights are created and used. We&#8217;re at the beginning of a shift where &#8220;insights&#8221; moves from being a department you go to, to a task you just&nbsp;do.</p><p>If you&#8217;re thinking about what this means for your career, you&#8217;re asking the right question. While the death of research is greatly exaggerated, everyone in the space should have a little anxiety about AI. And that anxiety should be fueling your thinking about how you&#8217;ll fit into the industry that&#8217;s taking shape. The path forward isn&#8217;t the same for everyone.
It depends entirely on where you are in your career journey and how you choose to evolve.</p><h2>From Department to Distributed Task</h2><p>For decades, the enterprise insights department has been the operational hub for understanding the consumer. Got a question? Go to the insights team. They&#8217;d run the RFP, manage the vendors, and eventually, deliver a report. It was a centralized, project-based function.</p><p>This is where the real change is happening. AI is flipping that old model on its head. As predicted years ago in <em><a href="https://grokipedia.com/page/The_Sovereign_Individual">The Sovereign Individual</a></em>, a &#8220;job&#8221; is becoming a task to do, not a position you have. Insights is no longer a department; it&#8217;s becoming a capability distributed across the organization.</p><p>Over time, marketing teams won&#8217;t need to schedule a kickoff meeting and exchange dozens of emails to run a segmentation study. They&#8217;ll interact with AI agents, proprietary datasets, and automated tools to get the answers themselves. There are already startups in the space doing this. The methodological rigor and vendor knowledge that insights teams once guarded can now be embedded directly into these agentic workflows.</p><p>So, does this mean the end of the enterprise Insights Department? No. It means it evolves. It shifts from being an operational &#8220;doer&#8221; to a strategic &#8220;curator.&#8221; The new insights team will be responsible for validating the tools, managing the APIs, and building the playbooks that allow the rest of the organization to generate insights safely and effectively. They move from running the projects to building insights ecosystems.</p><h2>The Great Talent Reshuffle</h2><p>So whether you&#8217;re on the client side or the vendor side, this fundamental shift is what really matters for your career. 
It creates a new dynamic in the talent market, and contrary to popular belief, it&#8217;s not the entry-level folks who will feel it most. When you break down the workforce into three stages (new talent, mid-career professionals, and veterans), a clear pattern emerges.</p><h4><strong>The Veterans: The AI Supercharged</strong></h4><p>If you&#8217;ve spent decades in the industry, you&#8217;ve accumulated something AI can&#8217;t replicate: deep domain knowledge. Research from <a href="https://iawponline.org/news/canaries-in-the-coal-mine-what-ais-early-job-impacts-mean-for-workforce-professionals/">Stanford</a> has shown that AI disproportionately enhances the productivity of experienced workers. Your expertise is the critical context that turns a generic AI tool into a precision instrument. You know which questions to ask, how to interpret the nuances, and where the data is likely to lie. Companies are waking up to this, realizing it&#8217;s easier to teach a veteran AI than it is to teach an AI three decades of experience.</p><h4>The New Talent: The AI Natives</h4><p>At the other end of the spectrum are the recent graduates. The grunt work they might have been assigned in the past is disappearing, yet research from <a href="https://mitsloan.mit.edu/ideas-made-to-matter/workers-less-experience-gain-most-generative-ai">MIT</a> shows their value is skyrocketing. They are AI natives, unburdened by the baggage of &#8220;how we&#8217;ve always done things.&#8221; They bring a blank slate easily upskilled by AI and a fluency with these new tools that allows them to see solutions others miss. Their advantage isn&#8217;t experience; it&#8217;s a fresh perspective, a willingness to experiment, and, let&#8217;s be honest, an affordable wage rate.</p><h4>The Mid-Career Challenge</h4><p>That leaves the mid-career pros, who are facing the trickiest challenge of all.
They&#8217;ve been in the game long enough to become skilled and established, but not long enough to have the deep experience of a veteran. They&#8217;ve mastered how the current system works, but the system itself is changing. Their expertise is tied to processes that are being automated or outright disintermediated, and they may lack the AI-native mindset of the younger generation. And let&#8217;s be honest, in the insights world, this group makes up the biggest chunk of the payroll, a natural target for AI-driven &#8216;efficiency gains.&#8217;</p><h2>How to Survive the Squeeze</h2><p>This evolution doesn&#8217;t mean your career is over; it means your career needs a strategy. Panic isn&#8217;t a plan, but action is. The path forward looks different depending on where you are on your journey.</p><h4>For Veterans: Direct the AI</h4><p>Your superpower is context. You have decades of domain knowledge that AI can&#8217;t replicate, and research shows that AI disproportionately enhances the productivity of experienced workers. Lean in and learn the tools, not to become a coder, but to become a better strategist. By pairing your deep understanding of <em>why</em> things work with AI&#8217;s ability to execute the <em>how</em>, you become the savvy guide who can point the new machinery in the right direction.</p><h4>For New Talent: Be the Catalyst</h4><p>Your advantage isn&#8217;t experience; it&#8217;s the lack of it. You are an AI native, unburdened by the baggage of &#8220;how we&#8217;ve always done things.&#8221; The grunt work of the past is gone, freeing you up to do what new talent does best: ask naive questions and challenge the status quo. Sell yourself as a fresh set of eyes with AI fluency.
Be the one who asks, &#8220;Why are we still doing it this way when an AI could do it better?&#8221; Your role is to be the catalyst for change from the ground up.</p><h4>For Mid-Career Professionals: Pick Your Pivot</h4><p>If you&#8217;re in the middle, you have the most work to do, but you also have the most agency. You understand how the current machinery works better than anyone. Now, you have two strategic paths:</p><ol><li><p><strong>Accelerate to Veteran Status:</strong> The fastest way to become indispensable is to broaden your expertise. If you&#8217;re a specialist in one area (like brand tracking), actively seek experience in others (like media research or product testing). The goal is to move from knowing <em>how</em> a specific process works to understanding <em>why</em> different approaches solve different business problems. This strategic breadth is much harder to automate.</p></li><li><p><strong>Lead the Automation:</strong> Instead of waiting for AI to disrupt your role, become the one who leads the disruption. Go all-in on becoming the expert in how AI can transform your current function. Volunteer to lead the charge in automating your own processes. It feels risky, but it positions you as the critical bridge between the old way and the new. You become the person who can build the next-generation machine because you know the current one inside and out.</p></li></ol><p>The key is that AI opens the opportunity for incredible growth in Insights. It&#8217;s already having an impact in other industries. In radiology, AI reads X-rays better than humans now, which had many speculating that radiology was &#8220;cooked.&#8221; But the opposite is true: AI is supercharging radiologists, <a href="https://www.webpronews.com/ai-fuels-radiologist-boom-40-job-growth-by-2033/">creating a boom in hiring</a>.
In the legal profession, AI is automating the drudgery of contract review, freeing up lawyers for higher-value strategic work while delivering <a href="https://www.forbes.com/sites/larsdaniel/2025/03/18/lawyers-using-ai-produce-better-work-in-half-the-time-landmark-study-finds/">high quality efficiency gains</a>.</p><p>The message is clear: AI doesn&#8217;t just replace tasks; it revalues skills. While claims that market research is dead are overly sensationalized, it&#8217;s still true that change is afoot. The future of the insights profession belongs to those who can either direct the AI with deep wisdom or creatively apply it in novel ways. The comfortable middle ground of simply operating the existing machinery is vanishing. It&#8217;s time to pick a path.</p>]]></content:encoded></item><item><title><![CDATA[Your AI Vendor Can't Save You]]></title><description><![CDATA[And why AI is the end of the insights industry's oldest constraint]]></description><link>https://www.greymatterunloaded.com/p/your-ai-vendor-cant-save-you</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/your-ai-vendor-cant-save-you</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Thu, 04 Dec 2025 14:15:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!V_GK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V_GK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!V_GK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg 424w,
https://substackcdn.com/image/fetch/$s_!V_GK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!V_GK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!V_GK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!V_GK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:371717,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/180617064?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!V_GK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!V_GK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!V_GK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!V_GK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd87c47d9-f9b2-4fc5-b2ae-b60c6b5b6d16_1536x1024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Most people who read the articles on this Substack are well aware that the pace of change in the AI space is ridiculously fast. It&#8217;s so blazingly fast that it&#8217;s tempting to think you&#8217;ll never catch up. For many insights professionals, the temptation is to just sign a multi-year deal with Microsoft, Google, OpenAI, or Anthropic and be done with it. Let them bring AI engineers to the table and solve your problems.</p><p>Some have <a href="https://www.microsoft.com/en/customers/story/24603-kantar-microsoft-365-copilot">already taken this path</a>, but in a market this volatile, does locking in early really help?</p><p>There&#8217;s an interesting memo that was leaked from Google in early 2023 called &#8220;<a href="https://newsletter.semianalysis.com/p/google-we-have-no-moat-and-neither">We Have No Moat. And Neither Does OpenAI.</a>&#8221; The memo highlighted how the rules of the game had changed. Open-source models were catching up to the giants, not by outspending them, but by leveraging the free labor of a global community. It&#8217;s hard for any single company, no matter how big, to compete with tens of thousands of researchers creatively solving the same problem from their home offices.</p><p>It didn&#8217;t take long for this observation to be proven right. In the last year, companies like China&#8217;s <a href="https://www.forbes.com/sites/greatspeculations/2025/02/03/deepseeks-ai-shockwave-hits-nvidia-hard-wiping-out-billions/">DeepSeek</a> have released open-source models that rival the quality of the big proprietary players.
The moat is gone.</p><p>In a race this fast, with the leaderboard changing every few months, what sane company would lock themselves into a single AI vendor and outsource their strategy? It&#8217;s a mistake.</p><h2>Strategy First, Vendors Second</h2><p>The reality of the insights space is that it&#8217;s not dominated by powerful tech innovators. It&#8217;s full of researchers who think more about collecting data than they do about implementing state-of-the-art technology. So, before you pick a vendor or sign a contract, you need to figure out your own strategy.</p><p>And that strategy should be grounded in a first-principles approach to a simple question: <em>Why</em> is AI so transformational for the insights industry?</p><p>From what I&#8217;ve seen, the answer isn&#8217;t about chatbots or synthetic respondents. It&#8217;s about solving the one, massive problem that has been the industry&#8217;s Achilles&#8217; heel for decades.</p><p>AI creates <strong>scale</strong>.</p><h2>The Industry&#8217;s Achilles&#8217; Heel</h2><p>For as long as the insights industry has existed, it has been defined by its human limitations. I recall sitting in a company meeting with Bob Myers, former CEO of Millward Brown, who proudly claimed we had more dialing centers than the competition, creating a substantial competitive advantage. That advantage quickly fell away and was replaced by online surveys and, later, sample exchanges. Each step was a leap forward, a way to talk to more people, faster.</p><p>But we always hit the same wall.</p><p>Despite decades of innovation, all these advancements really did was bring down the cost of an interview. 
This was an optimization of the data collection pipeline, not an optimization that opened the aperture for more consumer feedback.</p><p>As a result, we never escaped the fundamental scale constraint: <strong>Clients can&#8217;t use insights for every problem.</strong> They&#8217;re forced to choose when and where to use consumer insights, limiting them to the few big problems that can justify the cost and effort of finding a few hundred respondents.</p><p>AI is the first new technology to come along that offers a genuine solution to that constraint. Once you see AI as a tool for creating scale, your strategy becomes clearer and more actionable. It opens the door to an age of abundance, an abundance of problems that can be solved, an abundance of consumers you can &#8220;talk&#8221; to, and an abundance of valuable work that can be done.</p><h2>Picking Your Major: Refining Your Scale Strategy</h2><p>So, if your strategy is to use AI to create scale, the next question is: where, specifically, do you apply it?</p><p>It&#8217;s hard for any company to be good at everything. Most successful companies have something they do really well; that&#8217;s their <strong>major</strong>. &#8220;Picking your major&#8221; is the process of refining your scale strategy. It&#8217;s about identifying the specific area of your business where AI-driven scale will create the most value and getting the entire organization aligned behind it. This focus prevents you from wasting time and resources on minor projects that don&#8217;t move the needle.</p><p>For example, is your major going to be <strong>scaling decisions for clients</strong>? This could mean building systems that help marketers make thousands of micro-decisions, like optimizing ad creative at a volume that human-led research could never handle. Or is your major <strong>scaling expertise for your own teams</strong>? 
This might involve using agentic AI to automate the 70% of operational work that bogs down your best people, freeing them to deliver more strategic advice.</p><p>These are just two examples. The key is to choose. Once you know your major, you know what to build, what to own, and where your true competitive advantage lies.</p><p>This clarity is your best defense against vendor lock-in. The parts of your business that are core to your major are the parts you need to understand deeply and control. This is where you invest your internal resources and build proprietary knowledge. For everything else, the &#8220;minors&#8221;, you can confidently partner with third-party AI companies. You can license their tools for non-core functions without risking your strategic future, because you haven&#8217;t outsourced the one thing that makes you unique.</p><h2>The Strategic Question</h2><p>It&#8217;s easy to get lost in the technical details of AI. We can debate whether synthetic respondents are &#8220;real&#8221; or if an LLM is biased. These are valid questions, but they can distract from the bigger picture.</p><p>History shows that the companies that win are the ones that master the benefits of scale:</p><ul><li><p>Scale in relational databases gave us the internet.</p></li><li><p>Scale in cloud computing gave us YouTube.</p></li><li><p>Scale in cameras in cars gave us self-driving technology.</p></li></ul><p>AI is waiting to give the insights industry scale.</p><p>So, go back to your strategic planning documents. Do they talk about new AI features, or do they have a clear answer to the fundamental questions:</p><blockquote><p>How are we going to use AI to scale? And what major area in our business are we going to scale first?</p></blockquote><p>If you can&#8217;t answer those questions clearly, you don&#8217;t have an AI strategy. You have a list of ideas. 
If you start from a foundation of optimizing for scale, your strategy will be inherently linked to the biggest unlock AI is delivering.</p><p>So, stop asking which AI vendor will give you an edge; they&#8217;ve already admitted they have no moat. Instead, start asking what part of your business you are going to scale. That&#8217;s the only question that matters.</p>]]></content:encoded></item><item><title><![CDATA[The Three Unbreakable Laws of Panel Quality]]></title><description><![CDATA[Beyond the Checklist: A Simpler Way to Judge Panel Quality]]></description><link>https://www.greymatterunloaded.com/p/the-three-unbreakable-laws-of-panel</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/the-three-unbreakable-laws-of-panel</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Thu, 13 Nov 2025 14:15:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!n5j8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2
is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!n5j8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!n5j8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!n5j8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!n5j8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!n5j8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!n5j8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2805242,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/178729591?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!n5j8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!n5j8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!n5j8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!n5j8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe01fad40-6e63-40e2-9d94-eab9413b7aa5_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>If you&#8217;re in a business that sells things to people, you eventually need to talk to them. This simple truth has fueled the multi-billion dollar insights industry for decades. To reach consumers beyond your existing customer base, most companies turn to consumer panels, whether it&#8217;s Nielsen tracking TV viewing habits or Dynata sending survey invitations for points.</p><p>Here&#8217;s the problem: how do you know if a panel is any good?</p><p>The industry has tried to solve this. Organizations like ESOMAR created standardized checklists, a noble effort that started as &#8220;26 questions,&#8221; which recently ballooned to 37 questions that panel companies use to prove their quality. I&#8217;ve managed consumer panels, worked on the 37 questions, and sat through the client pitches.
Every CEO will stand on a conference stage and tell you their panel is the gold standard.&nbsp;<em>Don&#8217;t believe a word of it.</em>&nbsp;The truth is, convincing someone to take a 20-minute survey for 80 cents is a brutal business, and behind the marketing slides are some ugly realities.</p><p>After wrestling with this problem, I&#8217;ve come to believe there&#8217;s a simpler way to think about panel quality. It comes down to three unbreakable laws, obeyed in strict order.</p><h2>Law #1: Thou Shalt Eliminate Fraud</h2><p>Let&#8217;s talk about global economics. The monthly cost of living in Venezuela is about $612. A lucrative week on a U.S. survey panel can net $100. Not a bad side-hustle.</p><p>The world of online panels is a soft target for fraud. We&#8217;re talking about a global network of bots, click farms, and individuals using VPNs or proxies to misrepresent their identity and location. Why? The incentives for fraud are perfectly aligned: low stakes and brittle processes.</p><p>Now, we&#8217;re in a full-blown AI arms race. Fraudsters deploy sophisticated AI agents that can mimic human survey-taking behavior, generate plausible open-ended responses, and bypass basic quality checks. It&#8217;s a constant cat-and-mouse game where, for every new AI-powered fraud detection method a panel company builds, a new AI-powered fraud technique emerges to challenge it.</p><p>Fighting panel fraud requires an explicit, multi-layered system. It&#8217;s not about asking nicely; it&#8217;s about building walls. State-of-the-art systems can include:</p><ul><li><p><strong>ID Verification: </strong>The most bulletproof method is to require some sort of identification and a third-party provider to verify that the ID is genuine.
This is not often done because few people feel comfortable sharing government identification with random panel companies.</p></li><li><p><strong>Fraud Detection Services:</strong> Because fraud is not unique to surveys, multiple companies provide solutions for identifying and detecting fraudulent behavior. When licensed by panel companies, these services score each individual interaction based on their own sophisticated models and blacklists.</p></li><li><p><strong>Behavioral Checks and Anomaly Detection:</strong> Applying advanced analytics and machine learning models to panelist behavior is one of the best ways to identify fraudulent panelists. It is often easy to spot someone whose behavior falls outside the norm, such as a panelist narrowly focused on racking up redeemable rewards.</p></li></ul><p>If a panel provider can&#8217;t give you a clear, convincing story about how they are systematically fighting fraud, especially with the advancement of AI, walk away. Your data will be built on a foundation of lies, and any insights you derive will be worthless.</p><h2>Law #2: Thou Shalt Ensure Representation</h2><p>You&#8217;ve eliminated the bots and fraudsters. Now you have humans. But here&#8217;s the next critical question: are they the <em>right</em> humans?</p><p>If you&#8217;re trying to understand the American car-buying market, but your panel is made up of 80% retired women from Florida who don&#8217;t drive, your data is useless. It might be fraud-free, but it&#8217;s not representative of the audience you need to understand. This is the second-most critical failure point for a panel.</p><p>The problem is that panels are, by their very nature, not representative of the general population. They are composed of people who <em>choose</em> to join panels.
These &#8220;joiners&#8221; tend to be different from the general population; studies have shown they often score higher on traits like conformity and conscientiousness. They are professional survey-takers.</p><p>Beyond the psychological profile, there are the demographic skews. Panels often over-represent older, lower-income, and female respondents, while under-representing men, younger people, and high-income earners. U.S. men 18-24 are notoriously the most difficult population to find in consumer panels.</p><p>A good panel company understands this and works relentlessly to counteract it. This means:</p><ul><li><p><strong>Diverse Recruitment Sources:</strong> Not just relying on affiliate marketing, but actively recruiting from a wide range of sources to bring in different types of people.</p></li><li><p><strong>Active Panel Management:</strong> Constantly monitoring the demographic and behavioral makeup of their panel and refreshing it to prevent it from becoming stale and skewed.</p></li><li><p><strong>Honest Quotas:</strong> Being transparent about which audiences are hard to reach and not just filling quotas with the easiest-to-find respondents.</p></li></ul><p>Representation is the difference between talking to <em>some</em> people and talking to the <em>right</em> people. Once you&#8217;re sure you&#8217;re talking to real people (Law #1), you have to be sure they&#8217;re the right ones.</p><h2>Law #3: Thou Shalt Require Quality Responses</h2><p>You&#8217;ve got real people. They&#8217;re the right people. Now for the final law: Are they actually paying attention, or just mindlessly clicking through to collect their reward?</p><p>This is where the user experience of research becomes critical. For decades, the industry has subjected respondents to soul-crushing surveys: long, boring, repetitive, and poorly designed. It&#8217;s a one-way transaction where the respondent provides valuable input and gets a few pennies in return. 
Is it any wonder that response quality suffers?</p><p>Picture this: A respondent faces a 20-minute grid rating attributes for six yogurt brands. By minute 12, their brain has turned to mush. They&#8217;re not giving thoughtful feedback, they&#8217;re in survival mode, clicking whatever gets them to the finish line. And you&#8217;re basing business decisions on that data.</p><p>Ensuring response quality is the final frontier. It only matters once you&#8217;ve solved for fraud and representation, but it&#8217;s what separates good data from great data. This involves:</p><ul><li><p><strong>Better Survey Design:</strong> Creating shorter, more engaging, and mobile-friendly surveys that respect the respondent&#8217;s time and intelligence.</p></li><li><p><strong>Fair and Aligned Incentives:</strong> Moving beyond micro-payments that reward speed over thoughtfulness. This could mean higher rewards for more engaged responses or non-monetary incentives that build a sense of community.</p></li><li><p><strong>Closing the Feedback Loop:</strong> Making respondents feel like partners, not cogs. Show them how their data was used. Let them know their opinion mattered. This builds loyalty and encourages more thoughtful participation in the future.</p></li></ul><p>If a panel is filled with real, representative people who are bored out of their minds, the data they produce will be shallow and unreliable.</p><h2>The Bottom Line</h2><p>The 37-question checklist has its place, but it&#8217;s complicated and subjective. When evaluating a consumer panel the three laws are critical, and the order is non-negotiable. Representation doesn&#8217;t matter if your respondents are bots, and engagement is worthless if you&#8217;re talking to the wrong people.</p><p>Every panel company will tell you they have the best panel in the industry. They&#8217;ll show you videos of engaged panelists gushing about how much they love taking surveys. 
They&#8217;ll parade happy respondents on conference stages. What they&#8217;re showing you is a carefully curated cohort, individuals who meet Law #3. But that same panel could be riddled with fraud and populated exclusively by retirees with nothing but time.</p><p>Here&#8217;s what matters: If a panel provider can&#8217;t give you strong, clear, convincing answers to all three laws, backed by actual data rather than marketing speak, find one that will. These three laws are the unbreakable foundation of quality data. Get them right, and you have insights you can trust. Get them wrong, and you&#8217;re just paying for noise.</p><blockquote><p>Curious to know what to ask your vendors? I put together a simple guide with the three questions you could ask and some additional info panels could provide to earn brownie points. <strong><a href="https://drive.proton.me/urls/YS7Q2GTG6C#oXyZVIlcVaJr">Check out the free download at this link</a></strong>.</p></blockquote><p></p>]]></content:encoded></item><item><title><![CDATA[When Good Management Leads to Failure ]]></title><description><![CDATA[Lessons from The Innovator&#8217;s Dilemma]]></description><link>https://www.greymatterunloaded.com/p/when-good-management-leads-to-failure</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/when-good-management-leads-to-failure</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Thu, 06 Nov 2025 14:15:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_uXg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_uXg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_uXg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png 424w,
https://substackcdn.com/image/fetch/$s_!_uXg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!_uXg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!_uXg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_uXg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3035730,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/178093681?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!_uXg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!_uXg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!_uXg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!_uXg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5fe2f73e-a778-4079-afe2-e2f4fae54082_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I&#8217;ve been thinking a lot about disruption lately; it&#8217;s hard not to. Popular media today is filled with stories about how AI is disrupting the world. But the most interesting thing about disruption is its ability to take down industry giants while they&#8217;re still posting record profits. The AI wave of disruption isn&#8217;t slowing down, and Insights appears to be in its crosshairs.</p><p>Given all the talk about disruption, I went back and reread Clayton Christensen&#8217;s <em>The Innovator&#8217;s Dilemma</em>. It&#8217;s one of my favorite business books, and if you haven&#8217;t picked it up in a while, or if you&#8217;ve only heard about it secondhand, there&#8217;s no better time to revisit it. The lessons the book provides are invaluable for any industry, but they are worth re-learning by a group close to home, because the insights industry is living through its own disruptive moment. A fundamental question every established player in the space must answer: are they walking the same path as the disrupted companies Christensen described?</p><h2>The Paradox of Good Management</h2><p>The unsettling part of Christensen&#8217;s work is that the companies that failed didn&#8217;t have idiots in charge. They were led by some of the best managers in the world. They listened to their customers. They invested in innovation. They focused on their most profitable segments. They did everything the business schools told them to do.
And they still got steamrolled by innovation.</p><p>This happens because &#8220;good management&#8221; in a stable market becomes a liability when disruption hits. The practices that make for big-company success, staying close to customers, chasing higher margins, focusing on substantial markets, are precisely what blind incumbents to disruptive threats.</p><blockquote><p><strong>Lesson #1:</strong> Good managers do what makes sense, and what makes sense is primarily shaped by what creates value today. To ensure consistency in value delivery, managers build organizations that are meant not to change, or if they must change, to change through tightly controlled procedures. This means the very systems that make companies successful are hardwired to resist change.</p></blockquote><h2>The Disruptive Playbook</h2><p>In his book, Christensen lays out a pattern that repeats across industries. Often, disruptive technologies don&#8217;t start out better; they&#8217;re considered worse. They underperform on the metrics that mainstream customers care about. But they&#8217;re cheaper, simpler, smaller, or more convenient. And they appeal to a fringe market that the big players don&#8217;t care about.</p><p>Sound familiar? That&#8217;s exactly what&#8217;s happening in insights right now. Synthetic respondents, AI-generated personas, and AI-powered storytelling are all technologies easy for the industry veterans to dismiss. But they&#8217;re faster, cheaper, good enough, and they scale massively, something no traditional research does well.</p><p>A former colleague of mine, an incredibly smart researcher, told me that a synthetic model a customer was using for an internal tool is &#8220;not what our customers want.&#8221; The irony is that he was right. The <em>current</em> customers don&#8217;t.
But that&#8217;s the trap.</p><blockquote><p><strong>Lesson #2:</strong> Your most profitable customers generally don&#8217;t want, and indeed can&#8217;t use, products based on disruptive technologies. They offer a different package of attributes valued only in emerging markets remote from, and unimportant to, the mainstream, giving disruptors the opportunity to learn and gradually move upmarket.</p></blockquote><h2>Why Customers Mislead You</h2><p>Here&#8217;s one of the most counterintuitive insights from the book: sometimes it&#8217;s right <em>not</em> to listen to your customers. There are times when you should invest in lower-performance products that promise lower margins, and times when you should aggressively pursue small markets instead of substantial ones.</p><p>Every customer I have had at an established business wants sustaining innovations: a better and cheaper version of what they already buy. I&#8217;ve seen customer advisory boards scratch their heads with wonder at why we&#8217;d pursue something they don&#8217;t want. They won&#8217;t lead you toward disruptive technologies. They can&#8217;t. They don&#8217;t know they need them yet.</p><p>This is the innovator&#8217;s dilemma in its purest form. Do what your customers want, or do what will keep you alive in five years? You can&#8217;t do both.</p><blockquote><p><strong>Lesson #3:</strong> Being a good steward to your core customer can fail you when disruption hits. Focusing on sustaining innovation is tempting for managers who fear cannibalizing sales of their existing products.</p></blockquote><h2>The Margin and Timing Trap</h2><p>Here&#8217;s where it gets really uncomfortable for established firms: the math just doesn&#8217;t work. Disruptive products are cheaper and offer razor-thin margins.
For a company enjoying a cushy 40% margin on premium insights, a 10% margin business looks like a step backward.</p><p>I&#8217;ve been in big-company quarterly reforecast calls with investors, owners, and management who sigh meaningfully when they look at anemic revenue and margin forecasts. With this kind of pressure, it&#8217;s easy to see why established firms play the waiting game, deciding to jump in only when the market is &#8220;large enough to be interesting.&#8221; It sounds prudent, but it&#8217;s a fatal mistake.</p><p>While they wait, a startup with nothing to lose is thrilled with those 10% margins. They build a business on it, get better, and move upmarket. By the time the market is finally &#8220;interesting&#8221; to the incumbent, the startup already owns it. I&#8217;ve seen this movie before: big firms dismissed DIY tools as toys until SurveyMonkey and Qualtrics became billion-dollar giants. Now, they&#8217;re doing it again with AI and synthetic data, creating the very vacuum a new disruptor is about to fill.</p><blockquote><p><strong>Lesson #4:</strong> Disruptive products are a trap of bad economics and poor timing. They look too small and unprofitable to matter, until they&#8217;re too big to fight.</p></blockquote><h2>What Actually Matters: Processes and Values</h2><p>Here&#8217;s where the book gets interesting. Christensen argues that an organization&#8217;s capabilities don&#8217;t just reside in its people or resources. They reside in its processes and values.</p><p>Processes are how you get work done consistently. Values are the criteria you use to make prioritization decisions. And both of these are designed for stability, not change.</p><p>Think about a traditional research firm. Their processes are built around fielding surveys, collecting data, managing panels, and delivering reports. Their values prioritize high-margin work, large clients, and proven methodologies.
</p><p>They believe these are the core; the only thing worth improving is the efficiency with which those tasks are completed. They&#8217;re what made the company successful.</p><p>But when disruption hits, those same processes and values become anchors. You can&#8217;t just assign smart people to the problem and expect them to succeed. If the organization&#8217;s processes and values don&#8217;t fit the new opportunity, it won&#8217;t work. Even with the advent of AI, we see the established firms looking at how to deploy AI to make established processes better rather than disrupt the market (a point I made in a <a href="https://www.greymatterunloaded.com/p/insights-vendors-and-clients-want">previous article</a>).</p><blockquote><p><strong>Lesson #5:</strong> People, technology, and skills have less to do with succeeding in disruptive innovation than do your company processes and what is considered valuable. If your company has ossified processes and values margin stability, disruption will be tough; consider a spin-out.</p></blockquote><h2>Lessons Learned, Now What?</h2><p>So what&#8217;s the answer? Christensen offers a few principles:</p><ul><li><p><strong>Don&#8217;t wait.</strong> Small markets don&#8217;t become large markets overnight, but they do become large markets. Get in early when you can still learn and adapt.</p></li><li><p><strong>Use discovery-driven planning.</strong> Identify the assumptions your plans are based on. Test them. Adjust. Don&#8217;t pretend you know more than you do about emerging markets.</p></li><li><p><strong>Match the organization to the opportunity.</strong> If your core business can&#8217;t pursue the disruptive opportunity because of its processes and values, create a separate unit that can.</p></li><li><p><strong>Accept lower margins initially.</strong> Disruptive innovations start with worse economics. That&#8217;s okay. The question isn&#8217;t whether they&#8217;re as profitable as your core business today.
It&#8217;s whether they&#8217;ll be the foundation of your business tomorrow.</p></li></ul><h2>The Bottom Line</h2><p>The innovator&#8217;s dilemma doesn&#8217;t tell the story of managers being stupid or lazy. It tells the opposite story, a story of ridiculously smart people being trapped by their own success. The things that made you great in one era can become the things that kill you in the next.</p><p>For the insights industry, that moment is now. The disruptors aren&#8217;t on the horizon; they&#8217;re having conversations with your clients. The firms that survive won&#8217;t be the ones with the best current products or the most loyal customers. They&#8217;ll be the ones willing to cannibalize themselves before someone else does it for them.</p><p>Christensen&#8217;s book is almost 30 years old, but it reads like it was written yesterday. The patterns he identified keep repeating because the underlying dynamics don&#8217;t change. Good management in stable times becomes bad management in disruptive times.</p><p>Are you willing to do what feels wrong to do what&#8217;s right?</p><p></p>]]></content:encoded></item><item><title><![CDATA[Qual-at-Scale is the Future...or is it?]]></title><description><![CDATA[Behind the Trendy Method Secretly Paving the Way for Synthetic]]></description><link>https://www.greymatterunloaded.com/p/qual-at-scale-is-the-future-or-is-it</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/qual-at-scale-is-the-future-or-is-it</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Mon, 20 Oct 2025 13:14:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Qtjm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Qtjm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Qtjm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png 424w,
https://substackcdn.com/image/fetch/$s_!Qtjm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png 848w, https://substackcdn.com/image/fetch/$s_!Qtjm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png 1272w, https://substackcdn.com/image/fetch/$s_!Qtjm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Qtjm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png" width="1248" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1248,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1752134,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/176422474?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Qtjm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png 424w, https://substackcdn.com/image/fetch/$s_!Qtjm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png 848w, https://substackcdn.com/image/fetch/$s_!Qtjm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png 1272w, https://substackcdn.com/image/fetch/$s_!Qtjm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc49f2e4b-a428-45ff-a261-c474aa5efced_1248x832.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I&#8217;ve commented many times about the challenges on the consumer side of the research equation. Most everyone in the space acknowledges that most research we put in front of consumers is boring. Running panels is something I&#8217;ve spent some time on and if there&#8217;s one universal truth, it&#8217;s that the traditional survey experience is, to put it mildly, a soul-sucking chore. We&#8217;ve optimized the life out of it, turning human feedback into a sterile, joyless transaction.</p><p>This challenge isn&#8217;t new. We have known about it for decades, and people have attempted to hone the experience (chat bots anyone?), but these attempts haven&#8217;t delivered on their promise.</p><p>Yet, I&#8217;ve had enough conversations with language models to imagine a world where instead of ticking boxes, research will revolve around genuine conversations. A friendly, engaging chat with an AI moderator that listens, probes, and understands nuance. This solution no longer feels like science fiction; this is the promise of &#8220;qualitative at scale.&#8221; It&#8217;s the dream of getting rich human insights through natural conversations that are navigated by AI systems designed to get the insights clients are chasing. Enabled in a SaaS-like model without the logistical nightmare of a thousand focus groups, it&#8217;s faster, more engaging, and frankly, a much more human way to do research.</p><p>Sounds like the perfect evolution for an industry desperate for a better way. Yet qual-at-scale faces the same uphill battle that previous solutions to a more engaging consumer experience faced: the client. 
Meeting client requirements rarely aligns with fun consumer experiences, but that&#8217;s not stopping the start-ups. As a result, this beautiful, respondent-friendly future is rolling a Trojan horse up to the client&#8217;s gates, and what&#8217;s inside changes the game in a way nobody&#8217;s ready for.</p><h2>The Dream: Qual-at-Scale Enters the Chat</h2><p>The idea behind qual-at-scale is simple and brilliant. Use AI agents as moderators to conduct thousands of in-depth interviews. No longer are consumers railroaded into a research experience that they find disengaging. The AI moderator will work to ensure critical topics are discussed, even if conversations wander. The benefits are obvious. Respondents get a better experience, which means higher quality engagement (aka better data) and less fraud. Researchers get the &#8220;why&#8221; behind the &#8220;what&#8221;, the rich, unstructured data that only comes from small-scale qualitative work.</p><p>For an industry that&#8217;s been stuck in a rut of five-point scales and a lack of survey design standards, this feels like a revolution. It&#8217;s a chance to move beyond the sterile, quantitative box-ticking that has defined market research for decades.</p><h2>The Sobering Reality: Clients Love Numbers</h2><p>There&#8217;s one tiny problem. Clients depend on numbers.</p><p>According to recent data, online qual accounts for less than half (40%) of spend on qualitative research. The free food and networking behind the two-way mirror are still a better experience compared to a chat session.</p><p>But the more important number is <strong>6%</strong>&#8230; the total share of client spend on online qualitative research. Conversely, <strong>85%</strong> of insights spending is <strong>quantitative<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></strong>. Why? 
Because the data that drives big decisions is in tracking studies, segmentation, and other large-scale quantitative methods.</p><p>Clients want charts that show trend lines going up and to the right. Evidentiary proof they&#8217;re succeeding in their jobs. Management wants statistically significant numbers to validate decisions. The CMO wants an OKR that proves they are a good steward of billions of ad dollars. Everyone needs a reliable number, a neat little box to check. Selling them on the beauty of unstructured conversational data is like convincing a Wall Street trader to ditch their Bloomberg terminal for poetry. They will appreciate the art, but they can&#8217;t build a financial model with it.</p><p>For qual-at-scale to become more than a niche player in the &#8220;adjudicate choices&#8221; or &#8220;tracking choices&#8221; phase, it has to solve the client&#8217;s problem. It has to give them the numbers they crave.</p><h2>Bringing AI Inside the Gates</h2><p>This is where the Trojan Horse gets an upgrade. A <a href="https://arxiv.org/abs/2510.08338v1">recent paper</a> making the rounds showed something fascinating: AI models can accurately predict purchase intent Likert scale (e.g. 1-to-5) scores from open-ended text. The results were astounding, matching the outcomes of 57 traditional Colgate-Palmolive concept tests.</p><p>Think about that. An AI moderator has a chat with a consumer about a new product, and at the end, it predicts how that person would have rated it on a 1-to-5 purchase intent scale. This is the magic bullet. You get the rich, human-centric experience of a qualitative interview, and the client gets the quantitative data they need for their dashboards. Everyone wins, right?</p><p>Well, not exactly.</p><h2>The Unintended Consequence</h2><p>As I&#8217;ve already pointed out, the biggest gap in qual-at-scale is the need for direct metrics.
Any tech that accurately converts a free-form, unstructured conversation into Likert ratings will be a panacea for qualitative research. But there&#8217;s one catch. The AI in the aforementioned study did indeed accurately predict Likert ratings from open-ended text, but the LLM synthetically created the open-ended text.</p><p>There&#8217;s that nasty <em>synthetic</em> word again.</p><p>The study wasn&#8217;t about predicting Likert scores from open-ended text; it was about training LLMs to predict Likert scores synthetically. In a method they called direct Likert rating (DLR), the authors of the paper found that the LLM was moderately successful in predicting Likert scores (80% correlation). However, they found that the better method, follow-up Likert rating (FLR), achieved a peak 90% correlation. The key to FLR? Letting the LLM write a brief textual response about purchase intent from the point of view of a synthetic respondent, then using that text to predict the purchase intent rating (it&#8217;s a bit more complicated, so <a href="https://arxiv.org/html/2510.08338v1#S3">read the paper</a> to learn what they did).</p><p>Here&#8217;s the irony that&#8217;s been rattling around in my head. If the qual-at-scale companies want to grow against the full pie and not just their 15%, they&#8217;re going to power ahead with using open-ended text to predict Likert ratings. Yet, in the process of teaching an AI to understand human conversation so deeply that it translates messy, emotional, human chatter into a clean 1-to-5 rating, we are also teaching it something else: how to <em>be</em> a human respondent.</p><p>Every time we validate that the AI&#8217;s predicted score matches an actual human&#8217;s score, we&#8217;re not improving a tool; we&#8217;re building a simulation.
We&#8217;re creating a model that will, with increasing accuracy, replicate human preferences and opinions without the human.</p><p>This pours gasoline on the fire of the synthetic data revolution, something I&#8217;ve talked about in <a href="https://www.greymatterunloaded.com/p/why-vibe-insights-is-the-future">Why Vibe Insights is the Future</a>. The race isn&#8217;t just about getting better data from humans. It&#8217;s a battle between two futures:</p><ol><li><p><strong>Qual-at-Scale:</strong> Using AI to have better conversations with <em>real</em> people.</p></li><li><p><strong>Synthetic Insights:</strong> Using AI to generate responses from <em>virtual</em> people.</p></li></ol><p>Both are chasing the same investment dollars. Both promise to solve the scale and cost problems of traditional research. But one is perfecting the human feedback loop, while the other is replacing it.</p><h2>So, What&#8217;s the Play?</h2><p>This isn&#8217;t a simple good-versus-evil story. Qual-at-Scale is a necessary and powerful evolution. It offers a path away from the dumpster fire that consumer panels have become, something I&#8217;ve lamented in <a href="https://www.greymatterunloaded.com/p/consumer-panels-are-a-stshow-is-there">Consumer Panels are a S**tshow</a>. Now there&#8217;s new hope: a better way to engage with people.</p><p>But let&#8217;s be pragmatic. Qual has been the smallest portion of the research pie for reasons that have little to do with the actual consumer experience. Qual is a method for clients to deploy, one they do see value in, but one that comes with an asterisk. It&#8217;s seen as engaged research that isn&#8217;t statistically projectable. Clients have a use for that type of research, but it&#8217;s not at the top of the list of methods they deploy.</p><p>The business case for pure synthetic data is brutally efficient. It works for everything; it&#8217;s scalable, fast, and cheap.
As AI gets better, the temptation for clients to opt for a &#8220;good enough&#8221; synthetic answer over a more expensive human one will be immense.</p><p>I believe the future isn&#8217;t one or the other, but a hybrid.</p><p>I might seem to be against qual-at-scale, but I&#8217;m not. The qual-at-scale market will likely dominate the &#8220;<a href="https://www.greymatterunloaded.com/i/173683178/the-market-research-family-the-explorers">Explorers</a>&#8221; family of research, which requires genuine creativity, unexpected tangents, and the messy, brilliant spark of human insight. This is where you discover what you don&#8217;t know. The enormous challenge standing in the way is the need to retrain the market to trust a method it has long ignored for big-dollar decision making. This retraining will take years, if not a decade, so don&#8217;t expect overnight success.</p><p>Synthetics, on the other hand, will take over the more repetitive, predictable parts of the industry. Think tracking studies, simple ad tests, and anything where the parameters are well-defined.</p><p>The smartest companies won&#8217;t pick a side. They&#8217;ll use high-quality, engaging qual-at-scale to <em>train, validate, and continuously refine</em> their synthetic models. The human conversations become the ground truth, the R&amp;D engine that makes the synthetic data not just plausible, but powerful. This creates a defensible moat that pure-play synthetic companies, running on generic data, will struggle to cross.</p><h2>The Bottom Line</h2><p>Qual-at-Scale is one of the most exciting developments in our industry. It promises a future where research is more human, more engaging, and more insightful. Succeeding in this space will be tough enough with clients leaning on old habits.
And in our rush to solve that problem and build this better future, we might be building a Trojan horse filled not with humans but with the very thing that makes humans obsolete.</p><p>The next few years will be a fascinating, chaotic race. Two paths are clear, and they are diverging fast. The question is, are you betting on having better conversations with humans, or are you betting on an AI that&#8217;s simply learned to have a better conversation with itself?</p><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>https://www.statista.com/statistics/267225/global-market-research-highest-revenue-sources-by-service-type/</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[A Tale of Two CEOs]]></title><description><![CDATA[IPSOS and Kantar bet on operational excellence]]></description><link>https://www.greymatterunloaded.com/p/a-tale-of-two-ceos</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/a-tale-of-two-ceos</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Tue, 30 Sep 2025 13:30:06
GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HQz2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HQz2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HQz2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png 424w, https://substackcdn.com/image/fetch/$s_!HQz2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png 848w, https://substackcdn.com/image/fetch/$s_!HQz2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png 1272w, https://substackcdn.com/image/fetch/$s_!HQz2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HQz2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png" width="1248" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1248,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2072929,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/174447915?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HQz2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png 424w, https://substackcdn.com/image/fetch/$s_!HQz2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png 848w, https://substackcdn.com/image/fetch/$s_!HQz2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png 1272w, https://substackcdn.com/image/fetch/$s_!HQz2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab32687b-bf35-408a-850c-c7e5f7d02965_1248x832.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>With CEOs, there are two flavors: Visionaries and Operators. One builds the future; the other keeps the lights on. The insights industry welcomed new leadership, any guesses what kind?</p><p>In the category of Market and Marketing Research, the biggest players are IPSOS and Kantar, and both got a little shake-up last month. First, IPSOS appointed Jean-Laurent Poitou as Global CEO. Then, Kantar announced Paul Zwillenberg as its new Global CEO.</p><p>Big Insights is looking for a change, and it&#8217;s unquestionably needed. The arrival of AI-driven insights compels both leaders to secure a durable future for two legacy companies closing in on 100 years of operation.</p><p>What&#8217;s interesting is that in both leaders, we see the same blueprint; they&#8217;re both what you&#8217;d consider to be operators. 
It&#8217;s not surprising, but it got me thinking: what kind of leader do these companies need to survive the next 50 years? </p><h3>Visionaries vs. Operators</h3><p>We know the Visionaries. Steve Jobs, Elon Musk, Bill Gates, Phil Knight, and Henry Ford come to mind. They&#8217;re often characterized as controversial experts with a vision and the know-how to get there. They&#8217;re happy to take on debt if it means they will achieve their outcome faster or more successfully.</p><p>Then there are the Operators. These are the dark horses, the CEOs you&#8217;ve never heard of. They often fly under the radar; Joe Consumer does not know who they are, and they lack a grand vision for the product or company. What they have in spades is operational genius. They focus a company on financial outcomes and reshape a business into a well-run machine built to last.</p><p>The world is full of visionary CEOs. Every venture-backed startup, mom-and-pop shop, or sole proprietorship has its own visionary. They parlay their inherent understanding of a market into value for customers. Visionaries are the engine of capitalism. They create almost everything we value as consumers: smartphones, cars, shoes, etc. So why are most leaders operators if visionaries are the engines of innovation?</p><h3>Why Do Operators Usually Win?</h3><p>It boils down to one thing: shareholder return.</p><p>Everyone has someone they answer to. For you and me, that list likely includes family, friends, your employer, etc. Either consciously or unconsciously, we rank the priority of each of those groups. Many parents put family first, but not all. We know the trope of the absent dad who travels for work and misses little Jimmy&#8217;s softball games. That&#8217;s a real thing, and in that case the father prioritized work over family, perhaps out of need or a goal of maximizing Jimmy&#8217;s future. </p><p>The same obligations and prioritization hold true for businesses, but often the stakeholders are more limited.
In business, the two most common stakeholders are customers and investors. Depending on how the business has prioritized its stakeholder groups, you can predict the leader they&#8217;ll bring on board.</p><p>Operators are more predictable. They&#8217;re more likely to drive initiatives that benefit investors, specifically focusing on profit and short-term growth. </p><p>Visionaries rarely care as much about immediate shareholder value, a focus that can backfire. They risk everything for customer value, but when they succeed&#8230; wow! Remember Amazon? It took them seven years to turn a profit because Bezos was obsessed with the customer experience and refused to back down. He had a vision. Now, their quarterly income is more than the cumulative profit of their first 24 years. The payoff is there, but Visionaries are tough on investors who aren&#8217;t keen to wait a decade for returns.</p><p>This is why we&#8217;ve seen visionary CEOs pushed out. The most famous example is when Apple&#8217;s board brought in John Sculley (an operator) to replace Steve Jobs. That decision backfired and was reversed, and Steve Jobs was ultimately granted the license to turn Apple into the world&#8217;s most powerful brand.</p><h3>The Apple Case Study: A Warning Label</h3><p>Let&#8217;s take a quick jaunt down memory lane. After returning to Apple, Steve renewed the focus on his vision of making the world&#8217;s best products. He succeeded, and by the end of 2010, the company had amassed $27 billion in cash. Most investors, including Warren Buffett, wanted him to use it for stock buybacks or dividends. Jobs adamantly refused, asking, &#8220;Which would you rather have us be? A company with our stock price and $40 billion in the bank?
Or a company with our stock price and zero dollars in the bank?&#8221; Steve knew there&#8217;d be new products to develop and needed access to capital to make it work.</p><p>Steve Jobs passed away in October 2011, and his operating partner Tim Cook took over as planned. Months later, in early 2012, Tim announced Apple&#8217;s capital return program. As you&#8217;d expect from an Operator CEO, he had re-prioritized the company focus from customers to investors and leaned in on returning value to shareholders. Investors loved it. However, when Steve passed, so did Apple&#8217;s vision.</p><p>Let&#8217;s be real: today&#8217;s iPhone is a shinier version of the one from five years ago. Siri, purchased by Jobs and once the front-runner, is a laughable failure compared to Alexa and Google Assistant. Apple&#8217;s announced AI features like Apple Intelligence never materialized. By 2023, even the Apple product team knew they were behind on AI, with only 50,000 NVIDIA AI processors to Meta&#8217;s 150,000, Microsoft&#8217;s 500,000, and Google&#8217;s 1 million. Yet, the CFO, Luca Maestri, shut down a request to purchase an additional $20 billion (50,000) in chips with the feedback, &#8220;make the existing chips more efficient&#8221;. Instead, what did Apple do with its cash that year? They dropped $98 billion on dividends and buybacks.</p><p>Apple makes beautiful devices, but setting aside the fanboy card, it&#8217;s easy to see the company is behind on product development. iPhone sales have plateaued, and <a href="https://www.youtube.com/watch?v=2JOhIoTsy0c">growth is coming from accessories</a> like AirPods. Cook has been drafting on the phenomenal brand equity built by Jobs rather than true innovation.</p><p>And that&#8217;s the rub. Tim Cook took over 14 years ago and, despite middling product innovation, he&#8217;s kept revenue and profit growing. This tells us two things. It illustrates the power of a brand and brand marketing.
But it also makes it easy to see why most investors champion the operator approach. Apple has been a massive financial success despite the lack of vision. </p><h3>So, what about insights?</h3><p>The Apple story is a useful case study in combining strategic vision with operational expertise. To break new ground on innovation, you want the visionary calling the shots, backed by operational excellence. This helps deliver long-term durable value without sacrificing enterprise value for investors.</p><p>However, visionary CEOs are unicorns in established businesses, especially those backed by private equity or fickle investors who want a 2-5 year return on investment. These days, the blueprint for an operator CEO is a tenure at a big consultancy like McKinsey, Accenture, or BCG. They are the Ivy League master mechanics who tune an engine to run smoothly and efficiently. They&#8217;ll overhaul clunky businesses, setting them up to get past 200,000 miles without issue. The gamble most investors take is that while they&#8217;re tuning the engine, an innovator across the globe is not building a driverless future.</p><p>So, let&#8217;s look back at our two new insights CEOs, Paul and Jean-Laurent. A quick peek at their LinkedIn profiles tells you everything you need to know: they&#8217;re textbook operators. This makes perfect sense given the financial expectations of both firms. Both are outsiders to the industry but are likely among the smartest and most capable leaders around.</p><p>So, the million-dollar question: how long until they find a vision? And will it be a vision they understand and believe in, or one adopted from consultants and a multitude of insiders? And when it comes time to make bets, to trade off profit for customer value, will they be comfortable doing so? It&#8217;s hard to say. </p><p>My money&#8217;s on them not climbing that ladder. Their intuition will be to focus on what they know: investors first, customers second.
For both businesses, it&#8217;s not a bad thing to work on financial performance. But in a world where the AI innovation cycle is moving at breakneck speed, I can&#8217;t shake the feeling that these companies hired Indy car mechanics for a trip to Mars.</p><p>Regardless of my thoughts, I&#8217;m sure both CEOs will see P&amp;L success in tuning their organizations for growth. They both appear to be capable leaders and have earned their shot. Yet I&#8217;m still curious to know what the future will bring and whether they will set both companies up for the next 100 years of operations. This will be the ultimate A/B test in operational leadership. </p><div class="pullquote"><p>For a great deep dive into the Apple case study, check out <a href="https://www.youtube.com/watch?v=JUG1PlqAUJk">Apple Explained</a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[Insights Vendors and Clients Want Different Things From AI]]></title><description><![CDATA[Betting on a fast horse...]]></description><link>https://www.greymatterunloaded.com/p/insights-vendors-and-clients-want</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/insights-vendors-and-clients-want</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Wed, 24 Sep 2025 13:30:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GNoN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GNoN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png"
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GNoN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!GNoN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!GNoN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!GNoN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GNoN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3261839,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/174374288?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GNoN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!GNoN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!GNoN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!GNoN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb42b94d-4672-4c22-bbe0-44d622a3fc91_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Everyone&#8217;s holding their breath and waiting for the AI bubble to pop. It will. A lot of what we&#8217;re seeing right now isn&#8217;t real innovation; it&#8217;s a frantic, FOMO-driven reaction to accessible tech and lowered barriers to entry. It feels a lot like 1999 all over again.</p><p>Think back to when the dot-com bubble burst. At the time I was working at Jupiter Media Metrix, measuring the meteoric rise of the internet high fliers such as <a href="https://en.wikipedia.org/wiki/Excite_(web_portal)">Excite</a>, <a href="https://en.wikipedia.org/wiki/Kozmo.com">Kozmo</a>, <a href="https://en.wikipedia.org/wiki/Webvan">Webvan</a>, and others. These companies were delivering on the promise of the internet, and in March 2000, they saw their futures crumble. 
As a measurement firm with a long list of newly insolvent entities as clients, Jupiter Media Metrix also became part of internet history. </p><p>That doesn&#8217;t mean that the companies that disappeared in the internet's heyday had bad ideas; Kozmo was an early DoorDash, and Webvan looked a lot like Peapod does today. Rapid innovation is fueled by great ideas, but when FOMO and speed to market become the most important things, financial stability gets left in the dust. Conversely, other players not only survived the dot-com bubble bursting but have since thrived (Amazon, Microsoft, Oracle, etc.). In retrospect, you might argue that the dot-com survivors saw the internet as an enabler of a bigger idea. </p><p>So, it&#8217;s no surprise that many of the management teams at today&#8217;s big insights firms are caught flat-footed when looking at the current AI-driven innovation happening in research. With substantial financial expectations (Kantar looking for an exit, IPSOS focused on growth, Nielsen refocusing on ratings), the motivation is to focus on using AI to solve core operational challenges first. Taking needless work out of the system is more important than adding new complexity. However, this practical approach introduces a fresh problem: an AI innovation gap.</p><p>What do I mean by the AI innovation gap? Artificial intelligence has implications across multiple parts of the insights pipeline. Yet few, if any, vendors can implement those innovations in parallel across their key processes. This inability to execute on innovation across multiple pathways creates a gap between what an insights company needs to do and what its customers expect it to do. </p><p>Let&#8217;s break it down.</p><h3>Automation &amp; Optimization: </h3><p>Modern insights firms lead with operations. 
Servicing the largest brands in the world requires not only coordinating multiple market studies across different cultures but also pulling that data together into a cohesive single-source solution. I&#8217;ve been at big insights firms when RPA (robotic process automation) failed. The challenge in automating research turned out to lie in the variability of inputs and outputs. The number of exceptions to a standard process that required human oversight exceeded the value of the technology. We concluded that the only way to meaningfully tackle automation would be, at some future point, through an AI lens. Well, the future is now.</p><p>Beyond just automation, the same efforts lead to optimization. As an example, one of the AI projects I worked on redefined the price curve for studies on a client-by-client basis, given the historical performance of accounts. With many clients providing specifications for studies based on <a href="https://www.youtube.com/watch?v=fh7U04rzz8M">creative guesswork</a>, we knew full well that some studies tracked closer to the priced parameters and some didn&#8217;t. Deploying a simple machine learning model brought estimated prices much closer to reality, improving the forecasts and stabilizing profitability. </p><h3>Consumer Experience:</h3><p>How many of you out there are members of consumer panels? How many of you take surveys or scan receipts on a weekly basis? I&#8217;d venture few to none. The reality of the consumer experience in the insights field is that the value proposition sucks. It&#8217;s downright horrible, torturous, and soul-sucking. No one wakes up wanting to take a 10-minute survey on low-fat yogurt, let alone a 20-minute survey. Before I had the job of running a consumer panel, the experience was out of sight and out of mind. 
Corporate rewards and promotions are tied to client performance, not to how we treat the consumers who create the data we use (see my <a href="https://www.greymatterunloaded.com/p/consumer-panels-are-a-stshow-is-there">previous article on consumer panels</a>). </p><p>Designers create surveys around the client&#8217;s data needs, not for consumer enjoyment. The insights world somehow missed the entire UX revolution of the last decade. We were too busy debating the merits of a 4-point versus a 5-point scale to notice. Surveys today are still among the poorest-designed experiences you&#8217;ll find, a fact on display to every consumer panel respondent completing surveys across multiple research vendors. </p><p>We are now seeing an AI-driven evolution of the consumer experience. Companies focused on the consumer are few, and clients are motivated to hold on to their Likert scales, but the evolution is well underway. </p><h3>Data &amp; Insights Synthesis</h3><p>I&#8217;ve <a href="https://www.greymatterunloaded.com/p/why-vibe-insights-is-the-future">spoken about synthetic data</a> in the past, and it is a polarizing topic. But without a doubt, using advanced data methods to take partial data and convert it into meaningful insights holds promise. What makes tools like ChatGPT (i.e., LLMs) work is the ability to make smart predictions on how to complete a sentence. This prediction ability has gone from mediocre in 2023 to mind-blowing in 2025, with many of the AI company leaders claiming the tech is on track for AGI by 2030. </p><p>Companies exist today that use public data or limited survey data to synthesize insights, consumers, and reports. 
AI researchers looking for a leg up on the competition are putting a renewed focus on building new models that <a href="https://www.livescience.com/technology/artificial-intelligence/scientists-just-developed-an-ai-modeled-on-the-human-brain-and-its-outperforming-llms-like-chatgpt-at-reasoning-tasks?utm_source=flipboard&amp;utm_content=topic/science">mimic the human brain</a> and think like we do. It&#8217;s only a matter of time before this becomes a meaningful, affordable technology that helps answer more questions for brands with limited research budgets. </p><h3>Storytelling &amp; Discovery</h3><p>Finally, we have the storytelling and discovery-driven AI innovation. If you haven&#8217;t yet listened to an AI-generated podcast (audio overview) from NotebookLM, pause now and <a href="https://notebooklm.google.com/notebook/1b98444e-e87f-4191-81de-179be46a6082?artifactId=097a7131-dc40-40f4-bb28-0eed86a3c18a">go take a listen to the future</a>. The reality of the insights industry is that its primary output is a syndicated data dashboard that three people look at, or a PowerPoint presentation that gets filed away and never seen again. Valuable insights and findings are often siloed and under-leveraged across the organization. Couple that with the large share of research projects that end up as checkbox exercises: research completed to validate performance but rarely used in the organization for future value creation. </p><p>Innovation in storytelling and discovery is what we see daily with the current crop of large language models on the market. Google is already releasing training courses on how to <a href="https://www.cloudskillsboost.google/paths/2480/course_templates/1422/html_bundles/579148">leverage NotebookLM for marketing research</a>. Both Anthropic&#8217;s Claude and ChatGPT are getting better at taking in data from documents and running full analyses. 
Google rolled Gemini into Chrome, and Microsoft has been working on improving Copilot for the Office suite. Using AI to augment insight generation skills is becoming commonplace among practitioners in the insights industry.</p><p>While these technologies are massive improvements for the industry, we&#8217;re still in the early days. LLMs of today are fantastic yet blunt. We&#8217;re not yet at the stage where we trust AI with our credit cards to book travel, but we love the itineraries it creates. Similarly with insights, we&#8217;re augmenting with AI, not using it to do the end-to-end storytelling and discovery. This technology will get better, and more specific tools tuned to the needs of the insights industry will emerge. </p><div><hr></div><h2>The Gap</h2><p>Some will consider this heresy, but in the world of insights, AI is a distraction. </p><p>Most insights firms don&#8217;t have the time, resources, or ability to innovate with AI in these areas simultaneously. In fact, many of the smartest people I know prefer innovating in methods and approaches rather than chasing a one-off AI feature. Methods and approaches are what clients pay for and what drives the P&amp;L. Everything around that is fluff. Sure, some fluff helps with the sale: faster turnaround, better-quality respondents, more data, a fancier dashboard, etc., but the value the client is buying is in the expertise and underlying methodologies. </p><p>This is where it falls apart. Everyone is pulling in a different direction.</p><p>The insights agencies and panel companies? They&#8217;re staring at the huge operations numbers in their P&amp;Ls and thinking, &#8220;How will we use AI to do more with less?&#8221; Their focus is squarely on <em>Automation &amp; Optimization</em>. 
It&#8217;s not sexy, but it&#8217;s where they will save millions.</p><p>Meanwhile, the clients, also known as the ones writing the checks, are getting dazzled by demos of AI creating podcasts, crafting beautiful narratives, and uncovering hidden gems. They want better outputs and smarter <em>Storytelling &amp; Discovery</em>. They don&#8217;t care about the vendor&#8217;s operational costs; they care about getting answers that make them look smart in the next board meeting.</p><p>Mid-tier insights agencies are attempting to keep up with the larger firms, and for them, <em>Data &amp; Insights Synthesis</em> is a cheat code to put them into the big leagues. </p><p>And the poor consumer stuck in the middle? They&#8217;re praying that AI will make the data collection experience less of a nightmare. They&#8217;re desperate for a better <em>Consumer Experience</em>.</p><p>Hopefully, you see the gap: the people paying for the work and the people doing the work want different things from AI. Vendors want AI to fix the biggest limiter on their valuation, while clients are expecting it to reinvent the insights space.</p><h2>The Bottom Line</h2><p>The legacy insights firms are making a calculated bet: use AI to cut costs now, get the P&amp;L healthy, and then buy their way into cool, client-facing innovation later. It&#8217;s the classic &#8220;survive, then thrive&#8221; strategy. A strategy proven out by many of today&#8217;s most successful companies. </p><p>It&#8217;s also a risky gamble. It leaves the door wide open for a new generation of AI-native startups to swoop in and steal clients by focusing on what they want. A classic innovator&#8217;s dilemma. </p><p>The question isn&#8217;t whether the legacy firms will save money with AI. It&#8217;s whether they&#8217;ll still be the biggest names in the industry by the time they&#8217;re done optimizing.</p><p>What do you think? 
Is the industry playing it smart, or are they breeding a faster horse?</p><p></p>]]></content:encoded></item><item><title><![CDATA[Meet the Five Families of Insights ]]></title><description><![CDATA[And how susceptible to AI they are]]></description><link>https://www.greymatterunloaded.com/p/meet-the-five-families-of-insights</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/meet-the-five-families-of-insights</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Tue, 16 Sep 2025 13:31:44 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!Rdzq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Rdzq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Rdzq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png 424w, https://substackcdn.com/image/fetch/$s_!Rdzq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png 848w, https://substackcdn.com/image/fetch/$s_!Rdzq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png 1272w, https://substackcdn.com/image/fetch/$s_!Rdzq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Rdzq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png" width="1456" height="910" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:910,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3087441,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/173683178?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Rdzq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png 424w, https://substackcdn.com/image/fetch/$s_!Rdzq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png 848w, https://substackcdn.com/image/fetch/$s_!Rdzq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png 1272w, https://substackcdn.com/image/fetch/$s_!Rdzq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22054868-7371-439c-b0ce-27147cfe4e0b_1647x1029.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Ever tried to explain the insights industry at a party? It&#8217;s a mess. A sprawling, jargon-filled infinitely nuanced conversation that leaves most people nodding politely while their eyes glaze over. So, after years of trying and failing to explain the insights industry, I came up with a simpler way to think about it: The Five Families. </p><p>First, let's get the lingo out of the way. You&#8217;ll hear people say &#8220;market research,&#8221; but that&#8217;s  like calling all music &#8220;classical.&#8221; The world has changed and moved on to the world of  &#8220;insights&#8221;. A broader term for understanding people, no matter where the data comes from. So for our purposes, we&#8217;re talking about the insights industry. 
</p><p>So when thinking about the insights field, it's useful to classify the companies that operate in the industry into five big families, each with its own turf, its own racket, and its own way of doing things. Think of them as the Corleones, the Tattaglias, and so on, but with more data and fewer horse heads in beds. </p><p>Understanding these families is the key to figuring out how this whole ecosystem works and who holds the power, and it provides a good framework for understanding the impact of AI. Let's meet the families. </p><h2>The Consumer Data Family: The Hoarders </h2><p>These are the private investigators of the insights world. Companies like Circana, Experian, and Affinity are in the business of knowing what people actually do, not just what they say they do. They are data hoarders, and their motto is simple: track everything. </p><p>They know what you buy at the grocery store, what car you drive, and what catalogs you get in the mail. They pull this off by tracking our digital footprints, scanning receipts, and analyzing credit card transactions. Then they package that behavioral data and sell it to anyone who wants a peek into the real world. </p><p>Any company with digital tracking can play this game. Walmart knows what you buy in-store and what you search for on their website. If you own a VIZIO TV, they even know what you&#8217;re watching on TV (<a href="https://www.adlingo.org/walmart-vizio-inscape-acr-data/">Walmart bought VIZIO</a>). They blend it all together to get a scary-good picture of their customers. </p><p>The Hoarders are all about passive data collection at a massive scale. </p><ul><li><p>Data Scale: Massive </p></li><li><p>Representativeness: Skewed to whatever they can track </p></li><li><p>Methods: Mostly passive tracking (1st/3rd party data) </p></li><li><p>Foundation: Get as much granular data as possible by building new data collection pipes (direct, licensed, etc.). 
</p></li></ul><h2>The Media Research Family: The Scorekeepers </h2><p>These are the umpires. The official scorekeepers. Having spent years at Nielsen, I can tell you that &#8220;currency&#8221; is the holiest word in the building. It's the dollar, the euro, and the gospel all rolled into one. </p><p>Companies like Nielsen, VideoAmp, and Comscore are obsessed with one thing: who is seeing what, where, and when. They measure TV ratings, website traffic, and podcast listeners to help advertisers plan their media buys and hold agencies and publishers accountable. They exist to create a single, trusted number that everyone agrees on so that billions of ad dollars can change hands without controversy. </p><p>A currency system is basically a planned monopoly. It&#8217;s great because everyone uses the same playbook. It&#8217;s terrible because innovation moves at a glacial pace and prices are, shall we say, not very competitive. If you want to know how many people saw that Super Bowl ad, you call the Scorekeepers. They&#8217;re not cheap, but they&#8217;re the only game in town. </p><ul><li><p>Data Scale: Small (currency panels) to Huge (ad tech logs) </p></li><li><p>Representativeness: High (that&#8217;s the whole point) </p></li><li><p>Methods: A mix of panels (zero-party) and digital logs (3rd party) </p></li><li><p>Foundation: Be the one source of truth everyone has to pay for. </p></li></ul><h2>The Marketing Research Family: The Doctors </h2><p>Think of these folks as the brand doctors. They show up after the party to tell you if your expensive decor actually impressed anyone. This family, which includes giants like Kantar and IPSOS, specializes in measuring the effectiveness of your marketing. </p><p>Did that clever ad campaign actually lift brand awareness? Is your new tagline resonating or falling flat? The Doctors answer these questions with brand trackers, ad tests, and copy tests. Their entire world revolves around influence. 
They want to know why you love Nike more than Adidas and what it would take to make you switch. </p><p>Here&#8217;s a simple way to split them from the Scorekeepers: Media Research gets your ad in front of eyeballs. Marketing Research figures out if those eyeballs cared. </p><ul><li><p>Data Scale: Small </p></li><li><p>Representativeness: High </p></li><li><p>Methods: Almost entirely surveys (zero-party) </p></li><li><p>Foundation: Measure and track what influences consumers. </p></li></ul><h2>The Market Research Family: The Explorers </h2><p>These are the curious folk who go exploring the wilderness of the consumer mind. The Explorers tackle the big, fuzzy, "what if" questions that keep executives up at night. They don&#8217;t just measure what is; they explore what could be. </p><p>Want to launch a product in Brazil? The Explorers will tell you what flavors they like. Need a new billion-dollar idea? They&#8217;ll run innovation workshops. Is your app confusing? They&#8217;ll do usability testing. They answer questions like: </p><ul><li><p>Who are our customers, really? (Segmentation) </p></li><li><p>What do they want that they can&#8217;t articulate? (Usage &amp; Attitudes) </p></li><li><p>How do they actually use our product? (Ethnographies) </p></li></ul><p>This is the most bespoke family, filled with boutique firms and specialists who use everything from neuroscience to in-home diaries to find the answers. It&#8217;s all about exploring the unknown. </p><ul><li><p>Data Scale: Small to Tiny </p></li><li><p>Representativeness: High </p></li><li><p>Methods: Anything and everything, but mostly zero-party (one-on-one interviews, surveys, focus groups)</p></li><li><p>Foundation: Answer tough business questions by talking to consumers. </p></li></ul><h2>The Customer Research Family: The Therapists </h2><p>Finally, you have the "how does that make you feel?" folks. 
Companies like Qualtrics and Medallia are the therapists of the insights world, focused on customer satisfaction and experience (CX). </p><p>They live and breathe what it takes to get customers to stick around: Net Promoter Scores (NPS), customer feedback surveys, and journey mapping. Their goal is to figure out if your customers are happy, why they&#8217;re churning, and what you can do to make them love you again. Think of the QR code on your Walmart receipt or the automated email after a support call; that&#8217;s the Therapists at work. </p><p>Their secret weapon is technical integration. They plug directly into a company&#8217;s operations, turning every customer interaction into a data point. Because they&#8217;re so deeply embedded, their contracts are sticky, and their valuations are some of the highest in the industry. </p><ul><li><p>Data Scale: Medium </p></li><li><p>Representativeness: Medium to High (for existing customers) </p></li><li><p>Methods: Heavily zero-party, often automated </p></li><li><p>Foundation: Integrate into client systems and never let go. </p></li></ul><h2>What About the Other Players? </h2><p>You might be wondering where companies like sample providers (e.g., Dynata and Cint) or consultancies (e.g., McKinsey and BCG) fit in. Good question. </p><ul><li><p>Sample Providers are the arms dealers. They supply the firepower (respondents) to the families but aren&#8217;t a family themselves. They&#8217;re the neutral facilitators of the insights economy (even though many of them compete with the families). </p></li><li><p>Big Consultancies are like occasional hired muscle. They&#8217;ll swoop into the Market Research family&#8217;s turf for a big, strategic job, but they don&#8217;t live there full-time. </p></li><li><p>Research Platforms like SurveyMonkey are the tools that make the industry hum. They let anyone play researcher, for better or worse. </p></li></ul><h2>The Coming Shake-Down </h2><p>So there you have it. 
The five families, each with their own turf, their own approach, and their own fragile truce. For decades, this is how the world worked. Of course, no single player in the space ever truly sticks to their own territory, and there's often overlap. Kantar plays in Marketing Research and Consumer Data. IPSOS is across Market/Marketing and Customer research. Nielsen over the years has played in Media, Consumer, and Marketing Research. There's always plenty of motive for businesses to try to step into each other's territory.</p><p>But a new "Heisenberg" is on the scene, one that doesn&#8217;t care about tradition, turf, or taking prisoners. The territories the five big families focus on are to big AI what an ant is to a car rolling down the road: out of sight, out of mind. When Google rolled out news summaries into search, there was no Machiavellian plan to kill the publishing industry, just a strong desire to improve the customer experience. Despite the best intentions, the mass adoption of Google as the home page reduced overall traffic to news websites. When Apple added the flashlight feature to the iPhone, they weren't thinking about putting all of the iPhone flashlight app makers out of business; they were improving the experience. When it comes to AI, we'll likely see this routine repeated over and over. AI isn't setting out to change how the insights industry works, but in the pursuit of the best AI experience, it likely will. </p><p>So who's best protected in the wake of this coming change? Here's my two cents on some future outcomes:</p><ul><li><p>Consumer Data: The biggest datasets are decently protected as unique sources of training data for AI models. <em>Risks:</em> predictive AI models that use smaller datasets and signal to adequately predict human behavior.</p></li><li><p>Media Research: Relatively well protected as the agreed-upon currency of the industry. 
However, AI-led approaches will continually nip around the edges of the industry until enough evidence gets an industry body interested. <em>Risks:</em> synthetic/hybrid measurement results in more accurate media measurement. </p></li><li><p>Marketing Research: Poorly protected from AI. These approaches often have little in the way of a protective moat. Switching costs are low, and data sizes are often too small to make them extremely valuable training data. However, AI-based automation and insights can drive unbelievable profitability for those who take advantage. <em>Risks:</em> AI-native companies provide automated measurement and interpretation using a combination of synthetic and hybrid data.</p></li><li><p>Market Research: Poorly protected from AI. This area is the most dynamic and exploratory of the families and the most susceptible to AI innovation disruption. However, as realistic chat-based qualitative methods grow, this could be the dawn of a new age of at-scale conversational research. <em>Risks:</em> AI-native self-serve systems that go from business question to AI-generated answer using public data, company data, surveys, and synthetic insights. </p></li><li><p>Customer Research: Well protected from AI. These companies are part of the scenery. They're so well embedded into operational processes that their renewal rate is the least susceptible to risk from AI. <em>Risks:</em> The sales cycle on Customer Research is long; however, AI-fueled new players in the space could take a swing with new AI-driven insights into the relationship between customer satisfaction and corporate performance. </p></li></ul><p>So there you have it. My thinking around how the industry is structured and how well insulated the various participants are from disintermediation. 
Would love to hear what you think, feel free to drop me a line.</p>]]></content:encoded></item><item><title><![CDATA[Why Vibe Insights is the Future]]></title><description><![CDATA[Hating &#8220;Synthetic&#8221; is Fruitless]]></description><link>https://www.greymatterunloaded.com/p/why-vibe-insights-is-the-future</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/why-vibe-insights-is-the-future</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Mon, 14 Jul 2025 13:31:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4JLP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4JLP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source 
type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4JLP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!4JLP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!4JLP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!4JLP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4JLP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png" width="728" height="485.5" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:3193939,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/168117105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4JLP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!4JLP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!4JLP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!4JLP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c0500a7-fc34-4842-bada-c2d499322983_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The next time you need to divide up a group of researchers for a game of dodge ball, take the half with positive things to say about synthetic data to one side and the half with negative perspectives to the other. </p><p>Few topics in the insights world are more controversial or divisive. To steer clear of the drama, many stakeholders end up walking the safe line of &#8220;human-powered synthetic insights&#8221;. This is code for: buy my traditional research data from &#8220;real&#8221; humans and we&#8217;ll add some synthetic insights on top. Pure synthetic solutions get eye-rolls, crossed arms, and skeptical muttering. 
LinkedIn posts follow, detailing flaws in synthetic data and claiming its insights are unreal, &#8220;computer-generated guesswork.&#8221;</p><p>The biggest synthetic data companies don&#8217;t care, because they know that soon, neither will anyone else.</p><h2>Vibe Insights Enters the Chat</h2><p>There&#8217;s a new buzzword in the AI world: &#8220;<a href="https://en.wikipedia.org/wiki/Vibe_coding">vibe coding</a>.&#8221; Andrej Karpathy, OpenAI co-founder and the AI engineering messiah, coined it to describe a wild new way of working: tell an AI model the &#8220;vibes&#8221; of what you want, and it figures out the rest. No endless mock-ups, no complex prework, no micro-managing&#8230; just vibes.</p><p>Anointed with a label, vibe coding is now mainstream. Nowadays, most engineers have experimented with vibe coding. The magic of LLMs we experience when asking a question is even better when you say, &#8220;Hey, build me a dashboard,&#8221; and the AI pulls in tools, code, web results, whatever it needs. The boundary between what you know and what the machine figures out is fading fast.</p><p>Vibe Insights? It&#8217;s the same thing, but for research questions. Want to know what&#8217;ll happen to your brand if you run a spicy ad? Tell the LLM what you&#8217;re after, and if it knows, great, answer provided. If it doesn&#8217;t, it&#8217;ll go fetch data wherever you let it. Vibe Insights is research, with a layer of AI-driven conclusions. If it needs to synthesize those conclusions, then it will. </p><h2>The Human Understanding Gap</h2><p>If you take a step back from the insights industry for a minute and look at it from a 50,000 ft view, you&#8217;ll notice something&#8230; we dumb everything down. The reason we do so much segmentation, cross-tabbing, and demographic pigeonholing is that our brains can&#8217;t process the messiness of real life. Grouping people by age or income is a hack to make complex stuff fit into simple charts we understand. 
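</p><p>To make that concrete, here&#8217;s the &#8220;hack&#8221; in a few lines of code; a toy sketch where the ages and spend figures are invented:</p>

```python
from statistics import mean

# Toy data: each person is a unique (age, monthly spend) pair. Invented numbers.
people = [(19, 42.0), (23, 310.5), (27, 8.25), (31, 99.0), (34, 560.0)]

# The demographic "hack": collapse messy individuals into one tidy bucket.
bucket = [spend for age, spend in people if 18 <= age <= 34]
print(f"18-34 average spend: ${mean(bucket):.2f}")  # prints $203.95
```

<p>Five very different consumers become one number: handy for a chart, hopeless for nuance.</p><p>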
The world is weird, complicated, and the only way we process the complexity is to fit it into unified thinking by smoothing the differences into <em>18-to-34-year-olds</em> or well-named segments like <em>Kids &amp; Cul-de-Sacs</em>.</p><p>Look around and you&#8217;ll see it in how we design studies, reports, and even our world views. Want a grand unified theory of how brands grow? Take your pick: <a href="https://www.ipsos.com/en/emotional-attachment-and-profitable-customer-relationships">emotional attachment</a>, <a href="https://www.kantar.com/inspiration/brands/what-is-the-meaningful-different-salient-framework">meaningfully different</a>, or <a href="https://marketingscience.info/how-brands-grow/">availability &amp; distinctiveness</a>. While supported by marketing science, these frameworks can present conflicting advice and, frankly, they don&#8217;t work for every brand. </p><p>This unification we do has a benefit. Intuitively, we know every brand has its own unique path to building equity, but we simplify to make it easier to understand and communicate. It&#8217;s to our benefit to make the complex approachable. Unfortunately, AI models don&#8217;t need that benefit. </p><p>Most people don&#8217;t get just how massive the math behind these AIs is. Want to train a baby LLM with 1,000 parameters by hand? If you&#8217;re doing a multiplication every 2 seconds, you&#8217;ll finish in about 15 months (assuming no breaks and a 12-hour workday). Scale it up to a million parameters? Now you&#8217;re looking at 23,000 years.</p><p>It would take over a century of full-time work from every person on Earth to train a model the size of ChatGPT. And every three months, models grow. AI systems don&#8217;t handle complexity, they embody it.</p><p>Not only are models faster and able to handle complexity, but they&#8217;re also smarter. The latest models from OpenAI, Meta, Google, and Anthropic are crushing previous intelligence tests. 
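</p><p>The hand-math estimate a few paragraphs up is easy to sanity-check. A minimal sketch, where the roughly 10,000 multiplications per parameter is my own illustrative assumption (real training workloads depend on architecture and step count):</p>

```python
# Back-of-envelope: how long to do a model's training arithmetic by hand?
# Assumptions from the text: one multiplication every 2 seconds, 12-hour days.
SECONDS_PER_MULT = 2
WORK_SECONDS_PER_DAY = 12 * 60 * 60  # 43,200

def hand_training_days(total_multiplications: int) -> float:
    """Days of nonstop hand arithmetic for a given multiplication count."""
    return total_multiplications * SECONDS_PER_MULT / WORK_SECONDS_PER_DAY

# 1,000 parameters at an assumed ~10,000 multiplications each:
days = hand_training_days(1_000 * 10_000)
print(f"{days:,.0f} days (~{days / 30:.0f} months)")  # roughly the 15 months above
```

<p>Swap in a million parameters, plus the far larger step counts big models need, and the total runs to millennia; the point stands either way.</p><p>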
Researchers had to put together <em><a href="https://lastexam.ai/">Humanity&#8217;s Last Exam</a></em>: 2,500 PhD-level questions, structured, academic, and a challenge for anyone with a pulse. I don&#8217;t know anyone who can score above 1%. Google&#8217;s Gemini model scored 21% in April. On Wednesday, Grok 4 dropped and doubled that, scoring 44%. For reference, Grok is at a point where it can ace the SAT, ACT, and likely earn a PhD in multiple subjects. On top of all that, <a href="https://www.youtube.com/watch?v=48_fcg0tnrk">ChatGPT 5 is around the corner</a> and likely to be even smarter.</p><h2>Synthetic Data to the Rescue</h2><p>I don&#8217;t think much more evidence is needed to be convinced that AI systems are smart. However, I am certain that some calls for &#8220;real data from real respondents&#8221; will persist, and synthetic will continue to be thrown under the bus. Still, things are happening that will force change.</p><p>The data the big AI companies want for training is being walled off at an alarming rate. The New York Times is embroiled in a <a href="https://thehill.com/opinion/technology/5383530-chatgpt-users-privacy-collateral-damage/">legal battle with OpenAI</a>. Disney has <a href="https://www.aiplusinfo.com/disney-lawsuit-challenges-ai-copyright-boundaries/">gone nuclear</a> on several AI companies. Reddit locked down and <a href="https://www.newsbytesapp.com/news/science/reddit-battles-ai-content-with-human-only-posts-lawsuits/story">sells access for a fee</a>. Cloudflare just put up a <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/">paywall for AI crawlers</a>. The content wars have become a battle royale.</p><p>So, what do you do when you can&#8217;t get fresh data? You make your own. Enter synthetic data: what the insights world treats as Frankenstein&#8217;s monster, the AI giants have been using for years. 
With pressure on data supply, it&#8217;s getting new focus, and getting way better, fast.</p><p>Perhaps a supply issue sounds familiar? The research industry&#8217;s been in a panel crisis for years: tiny panels, high churn, professional respondents, and data quality nightmares. Synthetic isn&#8217;t a stopgap; it&#8217;s looking like an escape hatch. The more attention it gets from companies with trillions to spend, the better it will become.</p><h2>Reasoning Models Change the Game</h2><p>Ok, fine, you might believe synthetic is the next thing, but at least you have your data moat to protect your insights business. Not so fast. Bob McGrew, former head of research at OpenAI, filled in the data moat with a <a href="https://x.com/bobmcgrewai/status/1915161562677207332">simple post</a>: &#8220;<em>I&#8217;m skeptical of the power of proprietary data as a moat in the long run.</em>&#8221; Why? Because reasoning, not just raw data, is now proving to be a game changer.</p><p>The new wave of AI models don&#8217;t just memorize; they think and reason. Like a smart high schooler figuring out a complex math problem, these models don&#8217;t need to have the answer memorized in their training data. They can infer, extrapolate, and fill in the blanks from their base knowledge. As a result, your vast oceans of survey data and tracking studies are less important than you think. If you give a reasoning model a little bit of information, it&#8217;ll figure out a lot.</p><p>Bob highlighted this sentiment in his post: &#8220;<em>How will you value what your proprietary data adds compared to what your competitor&#8217;s infinitely smart, infinitely patient agents can estimate from public data?</em>&#8221; 
He goes into this in detail on a <a href="https://youtu.be/z_-nLK4Ps1Q?t=1291&amp;si=wvKpQvzxJArTpy4b">Sequoia podcast</a> where he relays his experience that even custom industry-specific models, trained on secret datasets, often underperform generalist models that just&#8230;think better.</p><h2>Human Behavior Models: AI That Knows You Better Than You Do</h2><p>If you still think human-generated insights data is required, then it&#8217;s time to talk about Centaur. Of course, I&#8217;m not talking about the half-horse, half-dude. I&#8217;m talking about the <a href="https://www.nature.com/articles/s41586-025-09215-4">Centaur AI project</a>. </p><p>The Centaur project took a big, general LLM and fine-tuned it with Psych-101 data: trial-by-trial human behavior results from 60,000+ people across 160 experiments. The result was that Centaur didn&#8217;t just keep up with domain-specific models, it beat them at predicting how people act.</p><p>The team even went so far as to use fMRI scans to compare how Centaur&#8217;s &#8220;thinking&#8221; matched human neural activity, and found that the model was the best at predicting human thought processes. So, yes, the AI is literally learning to think like us, and over time these models are going to get infinitely better.</p><p>The largest share of work being done in the for-profit insights sector is an offshoot of foundational work in behavioral psychology, steered toward understanding the drivers of for-profit consumption. If the behavioral psychologists are figuring out how to teach an AI model to think like a human and predict behavior, do you really think the same kinds of tech won&#8217;t soon figure out which Taco Bell ad moves more burritos?</p><h2>The &#8220;Humwelt&#8221; Problem</h2><p>Allow me to introduce a new term to your vocabulary: <em><strong>Humwelt</strong></em>. 
The term comes from a combination of the &#8220;H&#8221; from human and the German term <a href="https://en.wikipedia.org/wiki/Umwelt">Umwelt</a>. Umwelt describes &#8220;the specific way in which organisms of a particular species perceive and experience the world, shaped by the capabilities of their sensory organs and perceptual systems.&#8221; An ant&#8217;s umwelt is the backyard, which it perceives as its whole universe. </p><p>Humwelt? It&#8217;s our little corner of human experience. To me, humwelt describes a particular human bias, a bubble where we can&#8217;t perceive a world where an AI system is as powerful, fast, predictive, smart, creative, or as capable as a human. It&#8217;s a world where we can&#8217;t fathom AI creating valid insights.</p><p>When researchers dismiss synthetic respondents or find flaws in synthetic data, they&#8217;re likely not wrong&#8230;today. But they&#8217;re stuck in their humwelt, not imagining how quickly AI will solve those issues. Synthetic insights have flaws right now, but those flaws are getting patched faster than you can say &#8220;survey fatigue.&#8221;</p><p>Thinking about Vibe Insights through a lens of &#8220;yeah, but&#8221; is humwelt in practice. You are much better off imagining the sci-fi version of the future where Vibe Insights is all-powerful and the problems of today are just part of the journey.</p><h2>Vibe Insights and the Next Differentiator</h2><p>So what comes next?</p><p>Despite the fun name, don&#8217;t assume that as AI gets better at &#8220;vibe coding,&#8221; it&#8217;ll also get easier to run end-to-end projects. If you try vibe coding, you will learn pretty quickly that it is only as good as the <em>context</em> and environment you give it to work with. It&#8217;s easy to ask the AI for the vibe; it&#8217;s hard to set up the playground where the AI actually knows what&#8217;s relevant, has the tools and data it needs, and knows how to act on them. 
The same is true of research: a powerful Vibe Insights tool will have the right <em>context</em> and environment for answering questions.</p><p>For insight vendors, this is existential. The old edge was in research methods, operational expertise, and better access to people or data. But the future advantage is going to be in context engineering: setting up the right frameworks, permissions, API plumbing, data curation, and policy so that when the AI is asked to solve a complicated problem, it knows what to do and has the tools to do it.</p><p>Insights vendors that stick to their unified theories are bound to face new threats coming from company-scale thinking machines. On the other hand, those who see themselves as context engineers, architects of the AI operating environment, will be the ones to survive. The question is becoming less &#8220;How do brands grow?&#8221; and more &#8220;Can you build a system where a complex AI model can help my specific brand grow and deliver something new, useful, and relevant every day?&#8221;</p><p>This is the real differentiator. In the vibe insights future, the companies shaping the context will pull ahead.</p><h2>Why We&#8217;ve Got Time</h2><p>I&#8217;ve been around long enough to see every &#8220;revolutionary tech&#8221; collide with corporate inertia. Companies still hire the old names, McKinsey, Nielsen, Kantar, IPSOS, take your pick, because nobody gets fired for picking the safe option.</p><p>That&#8217;s the human advantage for now: our stubborn bias toward the familiar, and the <em>humwelt</em> of vendor risk-aversion. In large part, this is because we still control the purse strings, and this isn&#8217;t likely to change overnight. 
But disruption will find a way, either through &#8220;white knight&#8221; risk-takers or upstart vendors running around the old guard, carving out new value with classic innovator&#8217;s dilemma tactics while incumbents say &#8220;<a href="https://medium.com/bc-digest/the-xerox-thieves-steve-jobs-bill-gates-6e1b36fc1ec4">they&#8217;re not competing in our space.</a>&#8221;</p><h2>The Bottom Line</h2><p>Vibe Insights is eventually going to eat the world; the evidence piles higher every day, whether we like it or not. The next generation of insights leaders will be the ones who figure out how to engineer the vibe, not the ones who cling to their frameworks and vaults of historical data.</p><p>But hey, that&#8217;s just my two cents. What do you think? Ready to vibe, or are you still clinging to your cross-tabs for dear life?</p>]]></content:encoded></item><item><title><![CDATA[The Evolution of Experiments]]></title><description><![CDATA[Decoding Experimental Designs in Incrementality Measurement]]></description><link>https://www.greymatterunloaded.com/p/the-evolution-of-experiments</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/the-evolution-of-experiments</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Thu, 26 Jun 2025 13:31:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!n8tK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!n8tK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!n8tK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!n8tK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!n8tK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!n8tK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!n8tK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2601760,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/166822399?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!n8tK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!n8tK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!n8tK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!n8tK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac833959-14ff-4da6-98cd-26d28939660c_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>No one disagrees that the transition into the information age has profoundly changed the world. The ease with which technology drives transformation, often through complex and opaque systems (e.g. Google&#8217;s search algorithm), would seem almost magical to past generations. My work in advertising measurement has contributed, in a very, very small way, to this change. However, in pursuing methods to evaluate the incremental impact of advertising, I recognize that I&#8217;ve played a part in normalizing alternative measurement methods, some of which lack transparency.</p><p>A quote I frequently reference when discussing measurement with stakeholders is by renowned statistician George Box: &#8220;All models are wrong, but some are useful.&#8221; This simple yet profound truth encapsulates the reality of our algorithm-driven world. Google&#8217;s search algorithm, though imperfect, is remarkably effective. Advanced language models such as ChatGPT push this idea even further.</p><p>Yet, while models may be improving some outcomes, the creeping acceptance of alternative measurement methods can normalize subpar solutions. One such area is incrementality measurement, using experiments to gauge the incremental benefits of advertising campaigns, an area growing in prevalence across the media industry.</p><p>If you&#8217;ve followed methodological discussions at conferences and across LinkedIn, you&#8217;ve likely encountered the debate around experimental versus quasi-experimental designs in incrementality research. 
During my time in the industry, I&#8217;ve noticed frequent confusion between these methodologies, which opens the door for some to exploit the ambiguity around quasi-experimental designs. Because of my contributions to normalizing this approach over the last twenty-five years, I believe that revisiting this topic can be helpful in reducing ambiguity and providing a framework for reliability assessments.</p><h2>The Gold Standard: Experimental Design</h2><p>Experimental Designs, also known as Randomized Control Trials (RCTs), were first documented by James Lind in the 1700s, and they remain the &#8220;gold standard&#8221; for research because of their ability to control external influences. Consider the common pharmaceutical trial: 100 headache sufferers are randomly split into two groups, ensuring comparable demographics and conditions. One group receives medication, the other a placebo, isolating the medication as the sole variable. After two weeks, surveying both groups reveals the medication&#8217;s effectiveness based on a reduction in headaches.</p><p>When we tested online advertising in the late 1990s, RCT was the primary method. We randomly assigned visitors to a site such as Yahoo.com into two groups: one exposed to the test ad, the other to a public service announcement. Surveying both groups afterward allowed us to make comparisons and pinpoint the ad&#8217;s impact. When introduced, RCT was a game changer over traditional pre-post testing. For the first time, we could isolate the effectiveness of part of a campaign, even with other media influencing users (e.g. measuring the online effect while TV is running).</p><h2>The Compromise: Quasi-experimental Design</h2><p>Using RCT to measure campaigns in the early days didn&#8217;t last. Before the new millennium dawned, measurement had already shifted from RCT to quasi-experimental design. </p><h4>Why Use Quasi-experiments?</h4><p>While RCT remained ideal, its complexity and cost often deterred advertisers. 
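</p><p>Stripped of the logistics, the RCT comparison described above is only a few lines of arithmetic; the randomization is what makes the subtraction meaningful. A toy simulation (invented response rates, not real campaign data):</p>

```python
import random

random.seed(7)  # reproducible toy example

# Randomly assign 100 site visitors to the test ad or a PSA,
# mirroring the Yahoo-era setup described above.
visitors = list(range(100))
random.shuffle(visitors)
exposed, control = visitors[:50], visitors[50:]

def surveyed_awareness(saw_ad: bool) -> int:
    """Simulated survey answer: 1 = aware of the brand. True lift is 10 points."""
    return 1 if random.random() < (0.40 if saw_ad else 0.30) else 0

exposed_rate = sum(surveyed_awareness(True) for _ in exposed) / len(exposed)
control_rate = sum(surveyed_awareness(False) for _ in control) / len(control)

# Because assignment was random, the difference is attributable to the ad.
print(f"exposed {exposed_rate:.0%} vs control {control_rate:.0%}, "
      f"lift {exposed_rate - control_rate:+.0%}")
```

<p>Drop the shuffle and hand-pick the control group instead, and you have a quasi-experiment: the subtraction still runs, but the attribution claim gets much weaker.</p><p>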
Coordinating randomization across diverse ad servers, media properties, and walled garden platforms such as Google and Facebook was a logistical nightmare. Thus, quasi-experimental designs, used since the 1960s in academic contexts, became widely adopted for their practicality. However, with no standardized terminology to describe each variation, these designs can be quite diverse, as can the quality of the resulting data.</p><h4>Characteristics of Quasi-experiments</h4><p>The defining characteristic of a quasi-experiment is the absence of the most important feature of an RCT: random group assignment. Instead, researchers strive to assemble comparable control groups alongside test groups. Removal of the randomization component simplifies quasi-experiment implementation, particularly within a fragmented media ecosystem. The industry that figured out look-alike modeling took those innovations straight into the world of measurement. However, as George Box pointed out, these models are essentially wrong, and unless the methodologist does their best to control for biases, a quasi-experiment has the potential to create substantial data issues.</p><h4>Factors to Control For</h4><p>Effective quasi-experimental studies try to mimic randomized conditions by controlling for key factors:</p><ul><li><p><em>In Target:</em> Ensuring control participants are in the target audience yet have not been exposed to the ad.</p></li><li><p><em>Temporally Relevant:</em> Capturing control participants at the same time as the exposed group to negate external effects.</p></li><li><p><em>Audience Profile:</em> Ensuring a close match in demographics between test and control groups.</p></li><li><p><em>Confounders:</em> Managing additional variables like product ownership, device types, or geographic location.</p></li></ul><p>Researchers often address these factors through careful data processing and bias-reduction techniques, but they&#8217;re fundamentally imperfect.</p><h2>The 
Importance of Precise Language</h2><p>It seems the criticism of quasi-experimental measurement is aimed not at the methodology itself but at the term&#8217;s susceptibility to misuse. The term &#8220;experimental design&#8221; carries significant weight: it implies rigor and validity. Quasi-experiments benefit from the parent they&#8217;re named after, even though they are a flawed attempt to replicate RCT conditions. That mimicry results in researchers often calling quasi-experimental studies &#8220;Experimental Design,&#8221; which, while defensible on a technicality, misuses the term and misleads data consumers into a false sense of security.</p><p>When Campbell &amp; Stanley introduced <a href="https://www.sfu.ca/~palys/Campbell&amp;Stanley-1959-Exptl&amp;QuasiExptlDesignsForResearch.pdf">guidelines for quasi-experimental designs</a>, the intent was to offer practical alternatives when RCTs were infeasible. Today, we&#8217;ve standardized this method, frequently choosing it over the RCT as it&#8217;s also viewed as an &#8220;experimental design&#8221;. In the high-volume, complex world of advertising, quasi-experiments are a convenient, low-cost substitute, obscuring methodological weaknesses behind a veneer of credibility. As a consumer of experimental results without a mechanism to validate them, you have only one option: trust the data. If this is you, then you need to evaluate the credibility of the measurement.</p><h2>Five Critical Questions</h2><p>To better evaluate experimental studies, consider these essential questions:</p><ol><li><p>Is this a quasi-experimental design? <em>If yes, proceed with caution.</em></p></li><li><p>Can the measurement provider show test and control group allocation by media placement? <em>If yes, check for potential biases in media exposure.</em></p></li><li><p>Can the measurement provider show test and control group allocation by day? 
<em>If yes, ensure comparable numbers were included each day.</em></p></li><li><p>Can the measurement provider show demographic profiles of test and control groups? <em>Use to verify demographic consistency.</em></p></li><li><p>Do test and control groups match on other significant variables (e.g., device type, geolocation)? <em>Confirm no hidden biases exist (e.g. exposed mostly iOS, unexposed mostly Android).</em></p></li></ol><p>Although not exhaustive, this framework offers a practical starting point for assessing data integrity. While understanding these answers won&#8217;t eliminate potential biases, it will enhance your confidence in quasi-experimental results.</p><h2>Not a Bad Thing</h2><p>The shift to alternative measurement approaches is unavoidable and not always a bad thing. Driven by practical needs and technological advancements, alternative measurement opens new doors. However, embracing these methods demands a critical eye toward maintaining data integrity and transparency. It is crucial to grasp the distinctions between rigorous experimental designs and practical quasi-experiments. By applying a thoughtful evaluation framework, advertisers and researchers can better navigate methodological complexities and uphold high standards in a more algorithmic and data-driven world.</p><p>For more on Experimental Designs, <a href="https://www.greymatterunloaded.com/p/experimental-design-fundamentals">refer to this article</a>.</p><p>To understand factors affecting data quality, explore <a href="https://www.greymatterunloaded.com/p/a-basic-guide-to-evaluating-validity-9fb">this article</a>.</p><p>View my proposal for transparent quality disclosures <a href="https://www.greymatterunloaded.com/p/fasrim-a-data-quality-framework-for">in FASRIM</a>.</p><p>Rick Bruner has a <a href="https://www.linkedin.com/pulse/advertisers-seeking-accurate-roas-should-use-geo-tests-rick-bruner-pnvwe">strong argument</a> for geo-based RCT measurement.
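</p><p>As a rough illustration of the balance checks behind these questions, here is a minimal, hypothetical Python sketch. The record structure and the &#8220;device&#8221; key are invented for this example, not taken from any measurement provider&#8217;s API; it flags the kind of device-mix imbalance described in question five.</p><pre>

```python
# Hypothetical balance check for a quasi-experiment: compare the share
# of each attribute value between the test and control groups.
from collections import Counter

def shares(group, key):
    """Share of each value of `key` within a group of record dicts."""
    counts = Counter(record[key] for record in group)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

def max_imbalance(test, control, key):
    """Largest absolute difference in shares for `key` between groups."""
    t, c = shares(test, key), shares(control, key)
    return max(abs(t.get(v, 0.0) - c.get(v, 0.0)) for v in set(t) | set(c))

# Toy data: 50% iOS in the test group vs. 25% iOS in the control group.
test = [{"device": "ios"}, {"device": "ios"},
        {"device": "android"}, {"device": "android"}]
control = [{"device": "ios"}, {"device": "android"},
           {"device": "android"}, {"device": "android"}]

print(max_imbalance(test, control, "device"))  # 0.25 -> worth investigating
```

</pre><p>The same share comparison applies to the other questions as well: swap the key for day of capture, demographic bucket, or media placement.</p><p>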
</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.greymatterunloaded.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Grey Matter Unloaded! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Are Ad Agencies the Next Generation Insights Leaders?]]></title><description><![CDATA[The slow subversion of the traditional insights advisor]]></description><link>https://www.greymatterunloaded.com/p/are-ad-agencies-the-next-generation</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/are-ad-agencies-the-next-generation</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Thu, 19 Jun 2025 13:31:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5Pxe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5Pxe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!5Pxe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!5Pxe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!5Pxe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!5Pxe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5Pxe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2517023,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/166252721?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5Pxe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!5Pxe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!5Pxe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!5Pxe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1428e49e-4a45-4fff-8c71-c74efd280b81_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This week a large population of insights and ad agency personnel will collide in southern France for the annual Cannes Lions festival: the marketing world&#8217;s beachfront circus, where martech/adtech companies, agencies and brands rub shoulders and boast about their new offerings and successes. Amongst the mimosas, beach pavilions, and yacht parties, a new trend is emerging: agencies as curators of specialized knowledge and data for brands.</p><h2>Been There Done That: Agencies and Market Research</h2><p>Advertising agencies have been flirting with market research for decades. It&#8217;s always seemed like a logical pairing: agencies have the strategic vision and client relationships; research firms provide the data and credibility.</p><p>However, the dance often ended with two left feet. I&#8217;ve seen this firsthand, having been in meetings with agency reps and brand marketers delivering the unwelcome news that a campaign has flopped in spectacular fashion. I never received a more evil eye than from the agency reps unhappy with measurement outcomes. Still, despite this perceived conflict of interest, agencies have long known that data and knowledge are key to performance, credibility and client longevity. And because of that awareness, we&#8217;ve seen agencies buy into the insights space.</p><p>Consider Interpublic Group (IPG)&#8217;s acquisition of NFO back in 2000. It seemed brilliant: integrate deep consumer insights into advertising strategy. But this marriage highlighted the cultural differences between traditional marketing research and agency models. 
Clients questioned the independence of the insights produced, since agencies aim to create compelling advertising, and the perceived impartiality of their market research offerings became a sticking point. Also, a major accounting scandal diverted IPG leadership&#8217;s attention from this integration for years. The divestiture of the asset to TNS in 2003 was a straightforward decision, as the company focused on streamlining operations to regain investor confidence.</p><p>Then there&#8217;s the WPP and Kantar saga. WPP&#8217;s ownership of Kantar was an even bigger play, signaling ambitions for agency-integrated insights at scale. However, with WPP&#8217;s service-led portfolio approach, growth through client/revenue acquisition was the currency that fueled investments. Meanwhile, Kantar had reached saturation and needed investment in data and infrastructure to drive future growth of a model outside of the WPP norm. As WPP started seeing major shifts coming in the advertising and media market, the decision to divest managerial control of Kantar to Bain Capital became logical and practical.</p><p>Not to be left behind, Publicis took a different tack, acquiring Digitas and Razorfish, digital agencies with rich analytics capabilities. While successful as digital shops, these acquisitions didn&#8217;t transform Publicis into a credible player in core market research. However, this was enough to show the value of a data-led approach, and Publicis&#8217; later acquisition of Epsilon provided the company with a treasure trove of consumer data and sophisticated analytics, enhancing its ability to deliver insights and real-time activation capabilities. 
Perhaps this move hints at the first genuine success of an agency in bridging the gap between advertising strategy and deep consumer insights.</p><h2>Agency Business Model Evolution</h2><h3>The Death of the 15% Model</h3><p>One of the fundamental challenges in bringing marketing research and agencies together has been structural: the compensation models of the two businesses.</p><p>Looking back, it&#8217;s clear the way agencies earn revenue has steadily evolved, setting up their most recent play for consumer insights leadership. A couple of decades ago, advertising agencies operated on a commission model, taking a 15% cut of media spend, a profitable and straightforward system for traditional media. The rise of digital and programmatic advertising shattered this model, creating a complex, opaque supply chain that prompted clients to demand greater transparency. In response, many agencies established &#8216;black box&#8217; trading desks, engaging in media arbitrage by buying inventory in bulk and reselling it to clients at an undisclosed markup to protect their margins. Client pressure, landmark industry reports like the ANA&#8217;s 2016 study, and the growing trend of brands in-housing their own marketing capabilities eroded this practice.</p><h3>The Shift Toward Tech and Data Services</h3><p>Today, agencies are navigating a fragmented and competitive landscape by diversifying their revenue streams. They rely on labor-based retainers and project fees, while also adopting high-risk, high-reward performance models tied to business outcomes. They are also moving upstream, offering strategic consulting and developing sophisticated technology and data services. 
These services help clients manage first-party data for insights while positioning agencies as indispensable partners in a complex digital world.</p><p>While the measurement-based selling practice of marketing research firms hasn&#8217;t changed, agencies, through sheer tenacity and forced evolution, have become smarter and savvier about the value of insights, analytics, modeling, and real-time optimization. Many now sit on mountains of real-time data that are richer, deeper, and timelier than traditional market research sources.</p><h2>Agency Superpowers: Client Proximity &amp; Transparency</h2><h3>Strategic Access to Power</h3><p>Agencies have a close partnership with the Chief Marketing Officer, giving them direct strategic influence within client organizations. When the CMO is looking for help in determining how to measure marketing effectiveness, it&#8217;s more often a senior agency data lead or technologist who steps in to guide the conversation, not someone from a traditional measurement firm.</p><p>This senior-level engagement provides agencies privileged access to critical business problems, enabling rapid, actionable responses. In contrast, insights firms rarely have direct connections at such senior levels. Even when client organizations have an executive dedicated to measurement, research firms&#8217; contacts live further down within the marketing organization, among the managers responsible for measurement or analytics.</p><h3>The Risk of Proximity</h3><p>This close partnership also means agencies fly closer to the sun, becoming vulnerable to the frequent turnover of CMOs and shifts in marketing strategies. Agencies often find themselves caught up in endless pitch cycles, shouldering blame when marketing initiatives falter. 
Meanwhile, measurement firms fly under the radar, enjoying longevity with measurement programs that often run for years, if not decades.</p><h3>Evolving Attitudes Toward Conflict of Interest</h3><p>As a result, marketing research leaders looking at agencies often take comfort in the fact that agencies remain conflicted: they produce campaigns and then measure their effectiveness, raising eyebrows about impartiality. The credibility gap remains agencies&#8217; Achilles&#8217; heel, challenging their attempts to dominate the insights arena.</p><p>But here&#8217;s something worth pondering: as transparency increases and more data flows into client-owned systems, will the conflict-of-interest conversation lose its sting? We&#8217;ve already seen clients become comfortable with major tech platforms (Google, Meta, Amazon, TikTok) grading their own homework. While there&#8217;s some level of MRC oversight involved, the underlying acceptance is there: self-measurement is now part of the game. Could this signify a broader philosophical shift where concerns over conflicts of interest fade into the background, or at least become less of a deal breaker?</p><h3>The Methodology Gap&#8212;and How It&#8217;s Closing</h3><p>Then there&#8217;s the question of rigor. While agencies excel at short-term analytics and real-time reactions, they still struggle with methodological depth. Agencies often prioritize quick wins and flashy insights over the rigorous, methodical approach needed for deep, strategic consumer understanding.</p><p>However, this is changing. As agencies chase the generative AI future, they are leaning in on talent acquisition that centers on AI, data, and statistics. A side effect of this shift is an increase in raw talent around data, statistics, and methods. Browse through the open job opportunities at leading holding companies and you&#8217;ll find more roles than ever that require statistical backgrounds and a deep understanding of data and data systems. 
</p><h2>Market Research Firms: Strengths and Shortcomings</h2><p>So if agencies have better relationships with clients, deeper technical knowledge of advertising and consumer data technologies, growing insights talent pools and a market open to self-measurement, what do traditional marketing research firms bring to the table?</p><p>It all boils down to neutrality, history, continuity, and relationships. Clients trust market research firms because they are neutral arbiters and have been functioning in that role for decades. Inertia, the resistance to changing the way things are done, is the hardest obstacle to overcome. Not only that, the most successful marketing research firms have invested in and developed decades of expertise in methodologies, frameworks and research operations that agencies lack. Beyond that, unlike agencies that are working toward short-term success (the campaign), marketing research firms have positioned themselves as longer-term arbiters of brand success (<a href="https://www.greymatterunloaded.com/p/the-greatest-missed-opportunity-in">despite my POV that they&#8217;ve failed</a>). This positioning has clients returning to the marketing research well for new insights and POVs on how to get meaningful results.</p><p>Yet research firms move at a glacial pace compared to agile agencies. Legacy infrastructure, siloed data, and outdated business models often hobble their ability to deliver timely insights. This sluggishness is problematic in a world demanding real-time decision-making. While the agency world tries to reinvent itself for an AI-first world, marketing research companies are trying to navigate private equity exits and restructures rather than solve for the next big thing.</p><h2>What&#8217;s Different This Time?</h2><h3>The Evolution from Bespoke to Scalable</h3><p>When we compare the prior wave of agency-led research efforts (NFO/Kantar) to how agencies chase insights today, there&#8217;s a marked difference. 
Prior models leaned on bespoke solutions, deep client service, and traditional methodologies, valuable but misaligned with the pace and structure of modern marketing. In today&#8217;s AI-first landscape, speed, scale, and data integration trump long slide decks and focus groups. Brands want immediate answers, scale across touchpoints, integrated workflows, and systems that can plug into their broader marketing and media infrastructure.</p><h3>Relationships vs. Capability</h3><p>While consulting-led research firms still hold significant sway through long-standing client relationships and trust, many now rely more on those personal connections than on the technical or strategic edge they once offered. That leaves them vulnerable, because brands aren&#8217;t just looking for a sounding board; they want partners who can power the next wave of innovation.</p><p>Agencies may be better positioned. They&#8217;ve evolved from executional partners to data-enabled strategists, capable of offering brands unified solutions across media, measurement, and modeling. And with data at the center of everything from campaign performance to personalization, agencies have more levers to pull and more systems at their fingertips than most research firms ever did.</p><h2>The Potential Winners: Data as a Service (DaaS)</h2><h3>Leaner, Faster, Plug-and-Play</h3><p>Competing for the CMO&#8217;s attention will not be easy, and even when marketing leadership finds value in what a measurement company offers, more often than not it will be the data, and they will delegate measurement integration to their ad agency.</p><p>Maybe the best path forward is an <em>if you can&#8217;t beat them, join them</em> approach. Everyone is mining data and building unique analytical solutions, regardless of their client-facing approach. Market research firms adopting a Data as a Service model may end up thriving amid this brewing competition. 
DaaS providers don&#8217;t carry the same operational weight: no bloated teams, no cumbersome service models, just clean, fast data integration. It&#8217;s an approach bound to succeed in this new market. DaaS platforms offer flexibility, integration capabilities, and real-time insights, attributes crucial for competing against and with nimble agencies.</p><h3>Neutral Ground for Execution and Measurement</h3><p>What&#8217;s more, DaaS sidesteps the legacy turf wars that used to pit agency advice against third-party measurement firms. With neutral, API-ready data that plugs into the client&#8217;s tech stack, these platforms reduce the friction between marketing execution and measurement. They don&#8217;t just provide insights; they become the backbone of ongoing analytics.</p><p>As marketing becomes more automated, tech-integrated, and performance-driven, it&#8217;s the nimble, interoperable DaaS firms, those who speak fluent Snowflake, BigQuery, and clean room, that will win favor with CMOs and their agency partners. The good news is that there&#8217;s a growing ecosystem of these players: SimilarWeb, Qrious, Affinity Solutions, and SambaTV, to name a few. In contrast, traditional firms that can&#8217;t shed their dependency on person-hours and PowerPoint may struggle to remain relevant to marketing conversations.</p><h2>Why Agencies Will Continue to Struggle</h2><p>Still, agencies face a high bar. Their proximity to the CMO may grant access, but it also makes them the scapegoat when campaigns go sideways. Methodological rigor isn&#8217;t easily grafted onto creative DNA, and for all their tech, agencies still struggle with longitudinal thinking. Their incentives are structurally designed around short-term impact, a fundamental misalignment with becoming a true long-term insights partner.</p><p>Another challenge is resistance to change. 
Many brands, especially larger ones, operate under a risk-averse mindset where entrenched partnerships persist not because of performance but because of inertia. As the old saying goes, &#8220;You don&#8217;t get fired for hiring IBM.&#8221; That same logic often protects traditional research vendors, even when agencies may offer faster, more integrated solutions. This creates a psychological and operational drag on the industry&#8217;s ability to evolve.</p><p>So while agencies are ascendant now and may look like the obvious future of insights, the real winners will be those who blend the strategic access and agility of agencies with the trusted depth and rigor of research-led thinking.</p><h2>Is the Agency Threat Real This Time?</h2><p>The Publicis/Epsilon acquisition should have been the distant thunder signaling a brewing storm. Publicis didn&#8217;t just gain data; it gained Epsilon&#8217;s deep expertise in consumer behavior, analytics, and real-time activation. This integrated capability is the competitive edge agencies need to challenge market research firms.</p><p>The trend toward programmatic everything could amplify agencies&#8217; advantages further, notably when coupled with AI-driven analytics and the rise of connected TV. As connected TV and social become dominant channels for video consumption, fragmentation will make traditional measurement techniques fall apart. In the fallout, agencies can harness consumer data alongside campaign performance metrics, offering near real-time optimization and targeting. If agencies can bridge their methodological gaps and bolster credibility, they might position themselves as genuine contenders in a measurement ecosystem that&#8217;s digital, fragmented, and addressable.</p><p>Thinking back to what&#8217;s happening this week, the evidence is plain to see. 
The Cannes Lions agenda tells a pretty simple story: agencies are leading the innovation charge with high-profile tech talks, AI planning tools, cloud-based measurement infrastructures, retail media data activations and agentic AI applications. Agencies are not just part of the conversation; they&#8217;re dominating it. Marketing research firms, by contrast, are tucked into smaller sessions focused on brand tracking, media effectiveness, or new product introductions.</p><p>This shift is meaningful. Cannes has always reflected where the industry sees value heading, and this year it&#8217;s data-driven, AI-enabled, and activation-focused. If insights firms don&#8217;t move fast, they may find themselves sidelined at the very festival that once celebrated their analytical rigor.</p><h2>What to Watch?</h2><p>How can we spot the next moves? Here are some key trends to track:</p><ul><li><p>Continued acquisitions by agencies of analytics or AI-driven startups.</p></li><li><p>Agency announcements led by Chief Technology Officers, Chief Data or AI Officers.</p></li><li><p>Significant agency investments in building dedicated insights teams staffed with senior legacy research talent.</p></li><li><p>Continued expansion and bundling of real-time analytics and insights into media planning and buying through specialized agency teams.</p></li><li><p>Growing client acceptance of agency-produced research insights, shown through increased budgets and strategic involvement.</p></li><li><p>CAGR/valuation of DaaS insights companies vs. advisory-led insights firms.</p></li></ul><p>Monitor these developments, as they&#8217;ll signal how seriously agencies are committing to this space.</p><h2>Avoiding the Slow Boil</h2><p>The agency community has always coveted the value that marketing research and insights firms create for brands. 
While the failures of past efforts might lead research firms to harbor a sense of immunity, the threat from agencies is not hypothetical; it&#8217;s happening right now. Research firms are watching their business moat dissolve into a puddle, with the remaining value resting on clients&#8217; lack of desire for change: a dangerous operating model. Unlike efforts of the past, the signs this time are less obvious and more akin to the proverbial frog boiling in a pot of water. The firms that adapt, embracing agility, real-time data access, unique collection methods, and integration capabilities, will thrive. Those clinging to traditional methods risk becoming less relevant in the marketing organization.</p><p>Time is the one resource everyone is gifted, but it comes with a catch: it&#8217;s non-renewable and moves in one direction. Research firms still have time, but they&#8217;re operating in a landscape where velocity matters more than tenure. The data is moving, decisions are accelerating, and AI is making tomorrow feel closer than ever. If insights firms don&#8217;t find a way to keep pace, they&#8217;ll be remembered less for their depth and more for missing their moment.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.greymatterunloaded.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Grey Matter Unloaded! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[A Non-Technical Guide to AI Models for Insights Professionals]]></title><description><![CDATA[What you need to know to capitalize on the opportunity]]></description><link>https://www.greymatterunloaded.com/p/a-non-technical-guide-to-ai-models</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/a-non-technical-guide-to-ai-models</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Thu, 05 Jun 2025 15:18:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gVF3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gVF3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gVF3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!gVF3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!gVF3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!gVF3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gVF3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2847313,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/164558175?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!gVF3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!gVF3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!gVF3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!gVF3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e9fbb5a-d4ba-4e81-8e97-3fe46d1f85e6_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p>When people think AI, they think ChatGPT. Fair enough, it's the celebrity tool of artificial intelligence right now. But that celebrity status has led a lot of folks (especially in the business world) to think that all AI is a chatbot. Or worse, that all AI <em>needs</em> to be a chatbot.</p><p>Let&#8217;s dispel that myth.</p><p>Imagine a fast food drive-thru. You talk to a speaker, place your order, maybe have a brief, awkward back-and-forth. That speaker? That&#8217;s your large language model (LLM), like ChatGPT, the chatbot we all recognize. It&#8217;s a smart, capable interface. But, as we all know, it&#8217;s not the kitchen. It&#8217;s not flipping your burger or pouring your drink. Behind the scenes, a lot of specialized systems are working together to get your order right. Same goes for AI. When you ask ChatGPT to generate an image, the chatbot doesn&#8217;t generate the image itself; a different model behind the scenes handles that. This is the root of the confusion: like the drive-thru speaker, the LLM is a window into the world of AI.</p><p>Large language models might be how you <em>interact</em> with AI, and the fact that LLMs are fueled with the knowledge of the internet makes them extremely smart and mind-blowing, but the real magic and some of the coolest AI applications are often found behind that metaphorical kitchen door.</p><p>I put this document together to provide some guidance on the major technologies fueling the AI revolution so those in the insights industry don&#8217;t end up designing a restaurant and forgetting to hire a chef.
Admittedly, we all love to eat at restaurants but have little interest in doing the cooking, so while this topic can go deep, I&#8217;ve broken this article in two to keep it as digestible as possible (okay, enough food metaphors).</p><ol><li><p>The super easy to understand version - below.</p></li><li><p>The more nuanced yet still approachable version - <a href="https://open.substack.com/pub/greymatterunloaded/p/an-approachable-deeper-dive-into">part 2 article</a>.</p></li><li><p>What&#8217;s next in AI - <a href="https://open.substack.com/pub/greymatterunloaded/p/an-approachable-deeper-dive-into">part 2 article</a>.</p></li></ol><p>Each part is intended to be easy to understand; for that reason I may leave some big things out, but I will endeavor to provide links to useful, well-structured videos for those interested in learning more.</p><p>To get ahead of the inevitable question: why does this matter? Some of you may think, &#8220;I&#8217;ll just write a check to MSFT, Google, or OpenAI, use what they give me, and I&#8217;m done.&#8221; And for some of you that might be enough. But if you plan to bring AI into your insights pipeline, want to do so at a reasonable cost, and have to navigate the inevitable requirements of big customers (i.e. keeping their data secure), you may end up needing a team (or teams) to look at how to bring this tech in house. Hopefully the info below will, in some small way, keep them from talking over your head.</p><h2>The Three Big Use Cases for AI</h2><p>Before diving into the models, let's quickly set the stage by outlining the primary ways AI is transforming the insights industry. Broadly speaking, we can categorize these transformations into three major groups.</p><h4><strong>Storytelling Enhancers</strong></h4><p>First up, we have storytelling enhancements, focused on elevating how we share insights with end users.
AI steps in here by interpreting data, crafting insightful reports, or even creating engaging <a href="https://blog.google/technology/ai/notebooklm-audio-overviews/">podcast-like</a> narratives to clearly communicate findings. Beyond reports, AI helps craft targeted, impactful content, like ads or persuasive copy, making insights directly actionable in business contexts. This is a world where you can chat directly with your data as if you're talking with someone from the audience you've measured. Say goodbye to crosstabs.</p><h4><strong>Methodological Advancements</strong></h4><p>The second group covers methodological advancements, the area where AI reshapes what's possible. AI-driven moderation of focus groups, computer vision for coding visual data (e.g. receipts, photos) or maps of intricate networks of influence are just a few benefits. We&#8217;re already glimpsing a future where insights become increasingly predictive and prescriptive rather than simply reporting historical data. Soon, AI will unlock entirely new possibilities for interactive feedback, leveraging immersive experiences through VR and AR, fundamentally shifting research from passive observation to active participation.</p><h4><strong>Operational Delivery</strong></h4><p>Finally, there's operational delivery, the essential behind-the-scenes work that insight consumers typically never see. Here, AI makes its mark by streamlining processes such as respondent selection, dynamically crafting better surveys, efficiently managing fraud detection, and automating tedious data processing tasks. It can also continuously monitor data for anomalies, spotting issues long before humans might notice. 
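</p><p>To make the anomaly-monitoring idea concrete, here&#8217;s a minimal sketch in Python. The completion times and the <code>flag_anomalies</code> helper are invented for illustration (not a production fraud-detection system): it simply flags survey responses whose completion times deviate from the norm by more than three standard deviations.</p>

```python
# Minimal sketch of automated anomaly monitoring: flag survey responses
# whose completion times sit far outside the norm (z-score rule).
# Data and function name are hypothetical, for illustration only.
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Completion times in seconds; index 5 is an implausibly fast "speeder".
times = [310, 295, 322, 301, 288, 15, 305, 299, 310, 297, 303, 290]
print(flag_anomalies(times))  # [5]
```

<p>In practice the same pattern, learn what &#8220;normal&#8221; looks like and alert on deviations, is what AI-driven monitoring does with far richer models.</p><p>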
While these enhancements stay mostly out of sight, their impact is profound, resulting in <a href="https://www.greymatterunloaded.com/p/ai-is-coming-for-your-job-just-not">savings and enabling teams</a> to spend more time on strategic thinking.</p><p>With these use cases as the backstory, let&#8217;s dig into the models.</p><h2>Big AI Models in a Nutshell</h2><p>Someone will fight me on this, but I&#8217;m going to keep my list to the 10 big technologies I see powering the most interesting advancements in AI. Here they are:</p><ul><li><p><strong>Transformer Models (GPT):</strong> </p><p>Act like a smart assistant that can read a book, write a summary, and tell you what&#8217;s likely to happen in chapter 12. Or better yet, like that best friend who knows you so well they can complete your sentences for you. This is how ChatGPT knows what to say: these models are really good at predicting the structure of sentences.</p><p><em><br>How to use for insights?</em> <br>Excellent for interactive data exploration, allowing users to easily dig through complex datasets via natural conversation rather than complicated interfaces. Effective for simulating realistic consumer personas for deeper qualitative insights. These models can automate reasoning processes, automatically generate robust insights reports, and improve qualitative research engagements. Additionally, they help innovate traditional survey methods and efficiently generate compelling, consumer-targeted advertising copy.<br></p></li><li><p><strong>Causal Inference/Counterfactual AI:</strong> </p><p>Causal inference is like a coach watching a basketball game replay to figure out if the star player&#8217;s three-point shots really helped the team win, or if other things, like great defense or the other team&#8217;s mistakes, were the real reason.
Counterfactual AI is like rewinding the basketball game in your head and imagining what would&#8217;ve happened if the star player had dunked instead of taking a three-point shot. It&#8217;s the computer guessing whether the team would&#8217;ve still won or lost, based on how the game went, to help plan better plays next time. This is how Amazon targets you with advertising that's designed to get the highest conversion rates.<br><br><em>How to use for insights?</em> </p><p>Ideal for strategic, prescriptive insights: not only identifying what&#8217;s happening, but also deeply understanding the reasons behind observed phenomena. These models excel at scenario planning, determining outcomes under hypothetical conditions. Great for measuring strategic marketing effectiveness and optimizing future ad spend (mix modeling). Also useful in isolating the precise impacts of specific product features, targeting consumers with tailored marketing, diagnosing root causes of performance shifts, and creating sophisticated media planning scenarios that drive desired business outcomes.<br></p></li><li><p><strong>Convolutional Neural Networks (CNN):</strong> </p><p>Kinda like someone who can look at a puzzle piece and know exactly what part of a larger image it belongs to. Really great at understanding how the small parts of an image relate to the larger whole. This is how CNNs help Teslas drive themselves.<br><br><em>How to use for insights?</em> <br>Outstanding at interpreting visual data, especially for evaluating and isolating elements within advertising creative to predict performance. They can also efficiently monitor brand presence and context in media content and product reviews, significantly improving marketing responsiveness.
CNNs provide real-time facial and emotional analysis during consumer interactions, accurately capture shopping behaviors via video feeds, and streamline receipt analysis for market insights and tracking consumer purchase behaviors.<br></p></li><li><p><strong>Generative Adversarial Networks (GANs):</strong> </p><p>Imagine one kid trying to draw fake Pok&#233;mon cards while another kid tries to spot which ones are fake. Over time, the drawing kid gets better until even the best Pok&#233;mon expert can&#8217;t tell the difference. This is how GANs create pictures of people <a href="https://this-person-does-not-exist.com/en">that aren't real</a>.<br><br><em>How to use for insights?</em> <br>Great for identifying fraudulent activities by spotting data anomalies that deviate from expected patterns. They can generate realistic synthetic data to boost sample sizes and enable sharing of sensitive datasets without exposing actual consumer data. GANs support predictive modeling in marketing mix and financial analyses, enabling scenario testing of potential media strategies and outcomes. <br></p></li><li><p><strong>Diffusion Models:</strong> </p><p>Like a sculptor. They know what they want and start with a block of stone in front of them. They slowly remove the excess, and eventually what once looked like a formless shape is the Statue of David. This is what a diffusion model does for image generation: it starts with a fuzzy representation and gradually refines it until it generates the image you&#8217;re looking for.</p><p><br><em>How to use for insights?</em> <br>Effective in stable synthetic data creation with greater diversity and realism compared to other methods, useful for augmenting limited or sparse datasets. Ideal for producing hyper-realistic visual product mock-ups and advertisements for early-stage consumer research and testing. These models can create consumer avatars grounded in extensive insights data, allowing for deeper empathy in customer strategy.
Additionally, they can efficiently perform advanced data imputation, accurately filling in missing or incomplete datasets.<br></p></li><li><p><strong>Reinforcement Learning (RL):</strong></p><p>Just like giving your dog a treat when they sit on command and a time-out when they misbehave. The rewards given to the AI eventually get it closer to figuring out how to do something. This is how you teach your Nest Thermostat what temperature to set in your home: every time it gets it right, you ignore it (positive reinforcement), and every time it gets it wrong, you change the temp (negative reinforcement).<br></p><p><em>How to use for insights?</em> <br>Useful for dynamically pairing research respondents with the most relevant survey tasks, improving response quality. RL can create highly personalized respondent experiences, increasing engagement and data accuracy. Ideal for continuously optimized testing scenarios such as pricing strategies and creative effectiveness. RL systems help automate ongoing insights generation, adaptively improving based on feedback, and can provide media allocation recommendations to maximize real-time campaign performance.</p><p></p></li><li><p><strong>Self-Supervised Learning (SSL):</strong> </p><p>Imagine a curious explorer in an unlabeled museum filled with artifacts. Instead of a guide labeling each item (&#8220;this is a vase&#8221;), the explorer studies the objects, grouping similar ones (e.g., noticing that shiny, curved items are often pottery) and predicting what missing pieces might look like. Through this self-guided exploration, they build a deep understanding of the museum&#8217;s patterns. Later, when tasked with identifying a specific artifact or category, they can quickly apply their knowledge with minimal extra help.
This is how Siri is able to understand multiple accents and pronunciations.<br><br><em>How to use for insights?</em> <br>Perfect for unifying diverse, inconsistent survey datasets into a coherent whole without extensive manual effort. Excellent for advanced segmentation, identifying naturally occurring consumer groups and trends within complex raw data. These models enrich raw data by automatically adding labels and context, greatly improving the effectiveness of downstream AI tasks.<br></p></li><li><p><strong>Graph Neural Networks (GNN):</strong> </p><p>Imagine a GNN as a gossip network in a small town. Each person (node) knows a bit about themselves (their features) and talks to their friends (edges). By sharing and combining gossip with their neighbors, everyone learns more about the town&#8217;s social scene (e.g., who&#8217;s popular or who might get along). After a few rounds of chatting, the GNN can predict things like who&#8217;ll be invited to a party (node task) or whether two people will become friends (edge task), based on the town&#8217;s connections. This is how Instagram is able to make eerily accurate recommendations of people to connect with. <br></p><p><em>How to use for insights?</em> <br>Powerful for analyzing interconnected datasets, delivering advanced segmentation by identifying groups defined by behaviors, preferences, and social interactions. GNNs reveal novel insights through deep understanding of subtle yet impactful data relationships, enhancing traditional analysis methods. Ideal for insights with multiple data streams.
Use them to track influence networks and identify key individuals or products driving market trends or purchases.<br></p></li><li><p><strong>Multimodal Models:</strong> </p><p>A single-mode model is like seeing a recipe in a magazine, whereas a multimodal model is like having a talented chef who can combine ingredients like meats/vegetables/spices, innovative cooking techniques, and world-class presentation to create a delicious dish. By tasting and adjusting each part while considering how they work together, the chef produces a meal that&#8217;s perfectly balanced, just like a multimodal model blends different data types to give smart answers or create new content. This is how Google can look at a photo and tell you the breed of a dog in the photo.<br></p><p><em>How to use for insights?</em> <br>Best suited for nuanced sentiment analysis, accurately interpreting complex emotional cues across video/images and text, including sarcasm, irony, memes, and other non-traditional expressions. Multimodal models enable sophisticated contextual ethnographic research, seamlessly integrating qualitative video/photo data with quantitative metrics. These models significantly enhance customer experience tracking, combining analytics data, audio from customer interactions, and user-generated content into comprehensive insights. They also excel in analyzing advertising and product performance by simultaneously considering visuals, audio, and consumer reactions.</p><p></p></li><li><p><strong>Neural Radiance Fields (NeRF):</strong></p><p>Imagine taking a ton of photos of your bedroom and giving them to a computer that uses the photos to build a version you can navigate in VR. This is how Google turns Street View into immersive experiences.
The latest season of Black Mirror on Netflix has an <a href="https://youtu.be/WOONIuRwPPQ?si=Vpo-f1XfFHs0B91_&amp;t=61">interesting take on this tech</a>.</p><p></p><p><em>How to use for insights?</em> <br>Currently a niche application in insights, NeRF models are ideal for creating immersive, realistic 3D product models from simple 2D images, greatly enhancing product engagement in consumer research. Perfect for virtual reality research applications, enabling realistic and highly interactive environments for testing store layouts, shelf arrangements, or product placements. </p><p></p></li></ul><h3>Again, Why Does This Matter?</h3><p>In the grand scheme of things, if you&#8217;re not building insights applications or organizing your company&#8217;s data, this mostly doesn&#8217;t matter, other than the key takeaway that there&#8217;s more to AI systems than &#8220;GenAI&#8221;. </p><p>Worst-case scenario: you now know more than 99.9% of business leaders leaning on their teams to build new AI applications. Best-case scenario: you realize that these new AI models are the tip of the iceberg and that the business of the future will likely take advantage not just of one of these technologies but of many of them. </p><p>There&#8217;s a <em>whole menu</em> of AI tools beyond chat. Some of the most powerful applications aren&#8217;t conversational at all. They're engines working quietly in the background to:</p><ul><li><p>Predict outcomes</p></li><li><p>Simulate possibilities</p></li><li><p>Spot patterns</p></li><li><p>Make better decisions</p></li><li><p>Personalize experiences</p></li><li><p>Reconstruct environments</p></li></ul><p>Understanding these building blocks helps you contribute to an AI strategy that fits your business. Sometimes the best new feature is a shiny experience for customers (e.g. chat with data), but in the long run dollars will flow to solutions that positively impact the P&amp;L (e.g.
more efficient marketing spend). </p><p>So yeah, ChatGPT might be the drive-thru window, but AI&#8217;s real value might just be what&#8217;s happening in the kitchen.</p><p>To dig in a little deeper (still accessible) and learn a little about what&#8217;s on the horizon, <a href="https://open.substack.com/pub/greymatterunloaded/p/an-approachable-deeper-dive-into">check out part 2</a>.</p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[An Approachable Deeper Dive into AI Models]]></title><description><![CDATA[A little more nuanced]]></description><link>https://www.greymatterunloaded.com/p/an-approachable-deeper-dive-into</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/an-approachable-deeper-dive-into</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Thu, 05 Jun 2025 15:16:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GAzd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2
is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GAzd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GAzd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png 424w, https://substackcdn.com/image/fetch/$s_!GAzd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png 848w, https://substackcdn.com/image/fetch/$s_!GAzd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png 1272w, https://substackcdn.com/image/fetch/$s_!GAzd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GAzd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png" width="1080" height="1305" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1305,&quot;width&quot;:1080,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:145665,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/164560430?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GAzd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png 424w, https://substackcdn.com/image/fetch/$s_!GAzd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png 848w, https://substackcdn.com/image/fetch/$s_!GAzd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png 1272w, https://substackcdn.com/image/fetch/$s_!GAzd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4540d3ee-f6d4-40aa-8975-719f2ac535dc_1080x1305.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><a href="https://www.greymatterunloaded.com/p/a-non-technical-guide-to-ai-models">If you haven&#8217;t yet read part one, you can find it at this link. </a><br><br>This document is here to help you wrap your head around the various foundational models discussed in the previous article. As promised, it&#8217;s all here and written to be relatively easy to digest.
However, since we&#8217;re talking about a lot of information you can use the following links to help navigate the document:</p><ul><li><p><a href="https://www.greymatterunloaded.com/i/164560430/how-ai-works">How AI Works</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/the-models-in-a-little-more-detail">The Models in a Little More Detail</a></p><ul><li><p><a href="https://www.greymatterunloaded.com/i/164560430/transformers-not-just-talkers">Transformers (Not Just Talkers)</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/causal-inference-counterfactual-ai">Causal Inference / Counterfactual AI</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/convolutional-neural-networks-cnns">Convolutional Neural Networks (CNNs)</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/generative-adversarial-networks-gans">Generative Adversarial Networks (GANs)</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/diffusion-models">Diffusion Models</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/reinforcement-learning-rl">Reinforcement Learning (RL)</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/self-supervised-learning-ssl">Self-Supervised Learning (SSL)</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/graph-neural-networks-gnns">Graph Neural Networks (GNNs)</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/multimodal-models">Multimodal Models</a></p></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/neural-radiance-fields-nerf">Neural Radiance Fields (NeRF)</a></p></li></ul></li><li><p><a href="https://www.greymatterunloaded.com/i/164560430/coming-soon-whats-next-in-ai-models">Coming Soon: What&#8217;s Next In AI Models</a></p></li></ul><h2>How AI Works</h2><p>Modern AI models are built on some powerful ideas that have been in development for 
decades. One of the most impactful approaches powering a ton of AI systems is the <strong>neural network</strong>. Loosely modeled on the human brain, these are like digital brains made up of connected layers that pass information forward in a thinking process and tweak themselves when mistakes are made. That tweaking process? It&#8217;s called <em>backpropagation</em>: the model learns by adjusting its settings to get better each time. It&#8217;s kinda like a student who fixes wrong answers on a quiz and ends up learning the content better for next time. Over time these AI systems get smarter and more accurate. This idea powers most of what you see today, from image recognition to chatbots like ChatGPT. It's the secret sauce behind models that can talk, draw, and even code. Without this technique, modern AI as we know it wouldn't exist.</p><p>Then there&#8217;s the stats nerd in the family: <strong>probabilistic models</strong>. These models aren&#8217;t just guessing; they&#8217;re weighing possibilities. It&#8217;s like when you think, &#8220;If I get a D in history, there&#8217;s an 80% chance my parents will be mad.&#8221; These models help AI understand relationships and uncertainty, which is super useful in fields like science, finance, and medicine. They&#8217;re also used in <strong>causal reasoning</strong>, helping AI figure out <em>why</em> something happened, not just <em>what</em> happened. And then we&#8217;ve got <strong>reinforcement learning</strong>, which is basically AI learning from experience. This is easy for most people to understand since it&#8217;s the computer&#8217;s version of Pavlov&#8217;s dog. Imagine a robot playing a video game: it gets points for doing well and learns to avoid making bad moves. That reward system helps the robot get better and better over time.</p><p>What&#8217;s cool is that today&#8217;s best AI systems don&#8217;t just use one of these ideas; they bring multiple techniques to the table. 
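</p><p>If you like seeing ideas in code, the &#8220;tweak the settings after each mistake&#8221; loop described above can be sketched in a few lines of toy Python. This is a drastically simplified, single-setting illustration of the learning loop, not a real neural network:</p>

```python
# Toy illustration of learning-by-mistakes (not a real neural network):
# a single "setting" w is nudged after every wrong guess, which is the
# spirit of what backpropagation does across millions of settings.

def train(samples, lr=0.01, epochs=200):
    w = 0.0  # the model's one adjustable setting
    for _ in range(epochs):
        for x, target in samples:
            guess = w * x            # forward pass: make a prediction
            error = guess - target   # how wrong was the guess?
            w -= lr * error * x      # nudge the setting to shrink the error
    return w

# Learn the hidden rule y = 3x from a few examples.
w = train([(1, 3), (2, 6), (3, 9)])
print(round(w, 2))  # settles very close to 3.0
```

<p>Real networks do this across millions of settings at once, but the spirit is the same.</p><p>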
Some combine neural networks with probabilities, others mix prediction with memory, vision, or decision-making. It&#8217;s like giving a robot not just a brain, but instincts, reasoning skills, a good sense of outcomes, and the ability to learn from mistakes. For example, a model like ChatGPT uses neural networks, a dash of information theory, and loads of data to sound human. Others use attention mechanisms and training tricks inspired by how we compress and transmit information efficiently. That combo of different tools and techniques is what makes modern AI models so smart as well as good at solving real-world problems in everything from medicine to music.</p><p>So while I&#8217;ve highlighted below what I believe are the 10 most impactful models of the day, know that these approaches have grown from well-established roots and advancements in a few core fields. As for what&#8217;s next in the space, I&#8217;ve speculated on that near the bottom of this article. </p><h2>The Models in a Little More Detail</h2><h3>Transformers (Not Just Talkers)</h3><p>These are the models we're often referring to when we talk about AI today. Why? Because they're everywhere. It's even hiding in the acronym GPT: Generative Pre-trained <strong>Transformer</strong>. Some of them are big and proprietary (ChatGPT) and some are built on open source tech (LLaMA). Regardless, the AI innovation behind the scenes is a Transformer model. Innovation happens here because it's easy to see the benefit of having an AI system that is fluent in languages that matter (e.g. English, French, JavaScript) and has an incredible understanding of the world's corpus of knowledge. </p><ul><li><p><strong>Best examples:</strong> ChatGPT, Claude, Gemini, Mistral, LLaMA</p></li><li><p><strong>What they do:</strong> These models understand and generate <em>sequences</em>. 
Their superpower is predicting what comes next in a sequence, especially in language, making them pattern recognition pros in speech, writing, music and code. We do this ourselves with a little less math: "Peter Piper Picked a <strong>_</strong>___" is a pretty easy sentence for anyone brought up speaking English. In the same way, a transformer makes those same kinds of predictions for how words go together to make language work.</p></li><li><p><strong>Where they struggle:</strong> They're bad at things that are not predictive. Think math. The answer to 948+276 does not require a prediction; it's a straightforward calculation. When humans solve math problems, we follow a series of steps to get to the answer. Transformer models don't follow steps; they make fast predictions, which is why, as math gets more complicated, they often make incorrect guesses. They're not designed for step-by-step processes. This is also why you'll hear stories of large language models hallucinating answers that are patently untrue. Because they're making predictions, some of those predictions are bound to be wrong.</p></li><li><p><strong>The one sentence use case:</strong> You need a super knowledgeable assistant that can converse with a user to deliver information in a conversational manner.</p></li><li><p><strong>How to use for insights?</strong> This has been explored by a significant number of companies and start-ups both in the Insights space as well as adjacent spaces. To make this work well, you will want to pair a large language model with your data (<a href="https://www.greymatterunloaded.com/p/the-tech-that-could-decide-your-place">MCP</a> or <a href="https://youtu.be/Q6JisCLX2Ws?si=JNxd0bwX6d4BmVRo">RAG</a>). Examples of this applied to insights include:</p><ul><li><p>Chat with data: Most clients hate crosstabs. 
This allows customers to ask an LLM questions about your data without the hassle of knowing how to navigate a confusing crosstabbing tool.</p></li><li><p>AI personas: Tell the AI system to pretend it is a particular type of person in your data. Customers can then chat with the AI as if they are that person. The AI uses your data to answer.</p></li><li><p>Enhanced search: Take advantage of the predictive power of the transformer to better predict what the user is trying to ask.</p></li><li><p>Reasoned insights: Not only can transformer models return data based on a user question, they can also reason about the meaning behind that data. Put this together with automation tools to generate reports and findings without needing an analyst.</p></li><li><p>Qualitative: Chatbots and qual have been around for a long time, but as you can imagine, a guided research conversation with an LLM is a much more scalable and engaging experience for respondents.</p></li><li><p>Revised survey experience: No one has cracked the actual survey experience yet, but many are working to turn traditional quant surveys into better experiences with LLMs.</p></li><li><p>Copy generation: Use LLMs trained on consumer preference data to create advertising copy or scripts that highlight nuances in consumer preferences.</p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Fairly Mature</p></li><li><p>Talent Pool: Abundant</p></li><li><p>Implementation Complexity: Somewhat Difficult</p></li></ul></li><li><p><strong>Want to learn more?</strong> Watch the first 2 chapters of <a href="https://youtu.be/wjZofJX0v4M?si=lQr4ecYVHZ7zJs-h">this video</a> for a great visual representation.</p></li></ul><h3>Causal Inference / Counterfactual AI</h3><p>A lot of the aforementioned models are designed to look at relationships and make predictions. 
Causal Inference and Counterfactual AI are advanced methodologies in AI and statistics focused on understanding cause-and-effect relationships rather than mere correlations. This technique, more so than the others, is focused on enabling more robust decision-making and predictions. Causal inference identifies how changes in one variable (e.g., saw an ad) directly affect another (e.g., made a purchase), while counterfactual AI explores &#8220;what-if&#8221; scenarios by estimating outcomes under hypothetical conditions that didn&#8217;t occur. These approaches are critical in fields like healthcare, economics, policy analysis, and AI-driven decision systems, where understanding true causes is essential for effective operations.</p><ul><li><p><strong>What they do:</strong> Figure out not just what happened, but what <em>would</em> have happened if something else occurred.</p></li><li><p><strong>Where they struggle:</strong> Causal inference and counterfactual AI struggle with unobserved confounders and data limitations, which can bias results and make it hard to isolate true causal effects without strong assumptions. This means that, just like us mere humans, these techniques are susceptible to black swan events that haven't been seen before. If a meteor strikes Manhattan tomorrow and no one saw it coming, the AI model didn't either.</p></li><li><p><strong>The one sentence use case:</strong> You have data about the real world (customers, events, etc.) and you want to be able to understand what drives specific outcomes and plan for a specific result.</p></li></ul><ul><li><p><strong>How to use for insights?</strong> Probably the most interesting models on the list for insights professionals. These models are designed to make insights prescriptive and diagnostic. Causal and counterfactual models are pivotal for generating truly strategic insights. 
They push the insights field towards understanding the "why" behind observed phenomena and predicting the consequences of actions. While challenging to implement correctly, the ability to answer questions and reason about "what ifs" provides a powerful foundation for data-driven decision-making. </p><ul><li><p>Marketing effectiveness: The holy grail of advertising spend. The means through which all ad spend is justified. Use these models to not only understand what&#8217;s working in a campaign but scenario plan for the next campaign with historical data.</p></li><li><p>Product analytics: Figure out what product features are driving what impact. Did the new packaging drive purchase, or was that a coupon event?</p></li><li><p>Consumer targeting: Scenario plan for which intervention will cause a consumer to convert in the purchase funnel. Will it be an ad target, a discount, high frequency, or personalized creative?</p></li><li><p>Root cause analysis: Why did website traffic dip? Was it the new design, the ad creative, or the turnaround on shipping time?</p></li><li><p>Media planning: Scenario plan various media plans designed around curated outcomes.</p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Relatively New</p></li><li><p>Talent Pool: Scarce</p></li><li><p>Implementation Complexity: Somewhat Difficult</p></li></ul></li><li><p><strong>Want to learn more?</strong> This <a href="https://youtu.be/Od6oAz1Op2k?si=mrStTVXhqkVhuDZB">video</a> provides a high-level overview of Causal Inference as a concept, and <a href="https://youtu.be/MFnOYNU5sbk?si=B-KIyev-2iyYcj4g">this video</a> shows how ML is brought to the table. Counterfactual modeling is still evolving, so there aren&#8217;t a ton of approachable sources of content. 
This <a href="https://youtu.be/adTazXyLn38?si=7dNnG5Yr5pWdsdCl">video </a>is probably the best place to start.</p></li></ul><h3>Convolutional Neural Networks (CNNs)</h3><p>When you hear CNN in a conversation with a data scientist they're not talking about cable news, they're referring to one of the most interesting foundational AI techniques that is changing the world. Unlike LLMs this techniques application in your daily life is not immediately apparent, but they've been around for quite some time. CNNs are the technology that is making cell phone photography rival the best high end cameras. The magic eraser that removes an unwanted person from a cell phone photo? That&#8217;s a CNN at work. CNNs are able to break a photo up into multiple parts analyze each part separately, understand what each part is, and understand how it connects to the larger whole. Cars that can drive themselves and see objects in the road? That&#8217;s a CNN. These models are all around us powering computers to see the world in the same way we see the world. </p><ul><li><p><strong>Best examples:</strong> Google Photos Magic Eraser, Tesla Autopilot, Apple's Face ID</p></li><li><p><strong>What they do:</strong> Spot patterns in images. They&#8217;re the original engine of the computer vision revolution.</p></li><li><p><strong>Where they struggle:</strong> CNNs are picky eaters when it comes to image data. They do great with clear, consistent visuals (like cats vs. dogs) but get confused when images are noisy, altered, or come from unusual angles. A small change, like shifting a pixel, can throw them off completely. 
They love deep, rich datasets (<a href="https://www.reuters.com/legal/litigation/google-sued-by-us-artists-over-ai-image-generator-2024-04-29/">which also get them into trouble</a>).</p></li><li><p><strong>The one sentence use case:</strong> You need a computer to be able to see things in images or the physical world and make decisions based on what it sees.</p></li><li><p><strong>How to use for insights?</strong> This class of models is especially good at understanding visual stimuli. Most computer vision applications have this or similar tech inside. Within the insights space, this can make traditional visuals something analyzable:</p><ul><li><p>Ad testing: Use a CNN to look through an ad and isolate the various elements of the advertising. This allows you to turn that ad into data that can be run through models to predict effectiveness. </p></li><li><p>Social media monitoring: Want to know if a product is showing up in social media posts or videos? A CNN can look through those videos and images for things that brands find interesting (logos, product packages). Not only that, the CNN can help spot trends or provide context so the brand knows what they&#8217;re being associated with.</p></li><li><p>Product review monitoring: Are people posting negative reviews showing products broken or dysfunctional? CNNs can spot the brand, help understand the context, and map the issues (both bad and good).</p></li><li><p>Facial analysis: Determine user emotion in response to a stimulus, identify if someone&#8217;s looking at an ad/TV/etc. Analyze human response in focus groups.</p></li><li><p>Pattern analysis: Monitor shopping behavior from video feeds. Where on the shelf do people look, how do they navigate the store, what products do they pick up and investigate, immediately add to cart, or ignore. </p></li><li><p>Receipt processing: Help decipher receipt data for analysis. Determine purchase location, items, prices, etc. 
</p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Fairly Mature</p></li><li><p>Talent Pool: Abundant</p></li><li><p>Implementation Complexity: Easiest</p></li></ul></li><li><p><strong>Want to learn more?</strong> The best high-level overview of how this works is from IBM and can be seen <a href="https://youtu.be/QzY57FaENXg?si=WfJ8s9TDFkIDLGmI">here</a>. To learn more about how it works under the covers, there&#8217;s a great set of examples from <a href="https://adamharley.com/nn_vis/">Adam Harley from Carnegie Mellon</a>.</p></li></ul><h3>Generative Adversarial Networks (GANs)</h3><p>GANs have also been around for some time now. They're the models that powered the first wave of photorealistic AI image generation, famous for producing eerily convincing fake faces. These models are also used in those fancy Snapchat filters that make you look older, younger or like a cat. These types of models have certainly found a niche in AI-generated art. But human art isn't the only thing they're good at copying; they're also good at copying speech patterns and are often deployed in text-to-speech applications.</p><ul><li><p><strong>Best examples:</strong> StyleGAN&#8217;s photorealistic fake faces, GAN-based vocoders in text-to-speech systems.</p></li><li><p><strong>What they do well:</strong> Quite simply, a GAN is a competition between two models. One model tries to fake something (like an image), another tries to spot the fake. They go back and forth and improve by competing. This is great for copying styles from a known set of training data (e.g. human faces).</p></li><li><p><strong>Where they struggle:</strong> GANs are notoriously tricky to train. It&#8217;s like playing tug-of-war: if one side gets too strong, the balance breaks. 
They can also struggle with generating structured, logical outputs (like a coherent spreadsheet), and often mess up fine details (think distorted hands or text).</p></li><li><p><strong>The one sentence use case:</strong> You need an approach that allows you to make a facsimile of something that is as convincing as the source material. </p></li></ul><ul><li><p><strong>How to use for insights?</strong> Think of GANs as dueling models that get better over time at outsmarting each other. In the insights field, GANs have a robust use case in supporting multiple parts of the industry. Here&#8217;s just a sample of how they can be employed:</p><ul><li><p>Fraud detection: Train a GAN on "normal" data; new data points the model struggles to reproduce accurately, or that the discriminator flags as "fake," are likely anomalies worth inspecting further.</p></li><li><p>Synthetic samples: Train the model on existing (limited) data to generate more synthetic samples that follow the same underlying distribution. This will help boost sparse areas of your sample.</p></li><li><p>Open data: Make more underlying research data open to customers and marketers by using a GAN to create samples that mimic the original data and prevent private consumer data from being shared.</p></li><li><p>Anomaly detection: Similar to fraud detection, use a GAN to spot when data differs from expected results to highlight subtle unseen changes that could indicate new trends, etc.</p></li><li><p>Marketing mix/financial modeling: Use a GAN trained on market data to predict the likely outcome based on a hypothetical media allocation in the market. 
Extend the use case with a model trained on company performance data to predict financial outcomes.</p></li><li><p>Product innovation prototyping: Generate synthetic product designs for review and further research.</p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Maturing</p></li><li><p>Talent Pool: Moderate</p></li><li><p>Implementation Complexity: Somewhat difficult</p></li></ul></li><li><p><strong>Want to learn more?</strong> IBM to the rescue again for an <a href="https://youtu.be/TpMIssRdhco?si=_9g_Sc29DBz8ZSBC">overview of GANs</a>. </p></li></ul><h3>Diffusion Models</h3><p>It's fair to say that when it comes to AI generating content (music, images, etc.), GANs are no longer the cool kids in town; now it's all about diffusion. Diffusion models work from a huge training dataset. Imagine if you started to create a picture of a puppy by blending 1000 puppy pictures together - it'd be a mess. But if you can take away the parts that don't fit and put together the parts that do from across all of the photos, you could generate a new puppy picture by pulling from all of the images in your pile. They're great when you have a lot of training data that can be used to create artificial versions based on that data. Diffusion models are similar to GANs in what they accomplish. They can be a little slower, but they are less likely to mess up in the ways a GAN might (rendering fingers).</p><ul><li><p><strong>Best examples:</strong> Snapchat/Instagram filters, music generation, Stable Diffusion</p></li><li><p><strong>What they do well:</strong> Start with noise and reverse it step-by-step to generate stunning outputs. 
Unlike GANs, diffusion models tend to be more creative, able to create content that is an extension of the training data, not just a facsimile of it.</p></li><li><p><strong>Where they struggle:</strong> Diffusion models can be slow and resource-intensive, often taking seconds or minutes to generate outputs and requiring powerful computers. They also struggle with complex prompts, producing artifacts like blurry spots or incorrect details, especially when instructions are vague or involve rare combinations.</p></li><li><p><strong>The one sentence use case:</strong> Same as GANs: you need an approach that allows you to make a creative extension of something that is as good as the source material.</p></li><li><p><strong>How to use for insights?</strong> It would be fair to say that most of the use cases for GANs also work well as use cases for diffusion models - i.e. rinse and repeat. In the AI community, diffusion is the more popular model while GANs are waning in popularity; however, GANs are usually better with insights-style data (for now). Here are some insights use cases:</p><ul><li><p>Synthetic data generation: The training process tends to be more stable than GANs, potentially leading to better data diversity.</p></li><li><p>Mock-up creation: Create hyper-realistic product mock-ups for further research.</p></li><li><p>Advertising generation: Use feedback from consumer data to create sample advertisements, both images and video. </p></li><li><p>Consumer avatar creation: Use to create visual representations of the consumer grounded in insights data, which helps marketers better empathize with the customer.</p></li><li><p>What-if analysis: Use a model trained on typical consumer data and have the model generate a likely outcome based on specific changes in the underlying dataset.</p></li><li><p>Advanced imputation: Diffusion models are especially good at filling in missing details. 
Diffusion can be used as a more sophisticated form of data imputation to fill in missing details in aggregate datasets.</p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Maturing</p></li><li><p>Talent Pool: Moderate</p></li><li><p>Implementation Complexity: Very difficult</p></li></ul></li><li><p><strong>Want to learn more?</strong> This is a relatively approachable video on <a href="https://youtu.be/x2GRE-RzmD8?si=MT1y1VFSLOhMb4H2">Diffusion</a>. </p></li></ul><h3>Reinforcement Learning (RL)</h3><p>The idea of reinforcement learning goes back decades. It is also the most relatable since it's the technique we use to learn how to navigate the world as babies and children: try something over and over until you get it right, guided by positive and negative feedback. Reinforcement Learning is a type of machine learning where an agent learns to make decisions by interacting with an environment to achieve a specific goal. This is about learning through trial and error, guided by rewards or penalties. The agent observes the environment, takes actions, and receives feedback in the form of a reward signal, which it uses to improve its decision-making over time. Because these models run in software, they can be deployed in a lab to attempt a virtual task an unlimited number of times in rapid fashion. </p><ul><li><p><strong>Best examples:</strong> Recommendation engines, AI-controlled NPCs in video games, smart thermostats</p></li><li><p><strong>What they do:</strong> Learn by trial and error. If the AI does something right, it gets a reward and learns to do it more.</p></li><li><p><strong>Where they struggle:</strong> Reinforcement Learning struggles with sample inefficiency, often requiring millions of interactions to learn, which can be slow and impractical in real-world settings. 
It also faces challenges with reward design, as poorly crafted rewards can <a href="https://www.wired.com/story/when-bots-teach-themselves-to-cheat/">lead to unintended behaviors</a>, and training can be unstable in complex environments. Computers don't yet do learning in this manner nearly as well as babies and puppies.</p></li><li><p><strong>The one sentence use case:</strong> You know the outcomes you're looking for and want a computer to be able to learn how to generate those outcomes. </p></li><li><p><strong>How to use for insights?</strong> This approach has a lot of applicability to the insights space. There&#8217;s very little of the insights data creation and interpretation funnel that can&#8217;t benefit from Reinforcement Learning.</p><ul><li><p>Respondent selection: Use this model to pair research tasks with the respondents most likely to be qualified to complete them (e.g. take a survey).</p></li><li><p>Personalization: Create a better experience for panelists or respondents by personalizing the experience of participating in research to their needs. </p></li><li><p>Dynamic testing: Use these models to run sophisticated experiments that continuously alter the experiment parameters until the best outcome is determined. Think dynamic price testing where prices and packages are changed dynamically until the model learns what works best. Can be applied to other use cases like ad testing. Can be implemented as a continuous testing environment&#8230; imagine always-on content A/B testing.</p></li><li><p>Agent-based optimization: Implement a reinforcement learning agent that can help make dynamic suggestions for how to allocate media dollars to capitalize on current trends. 
Think of it as a mix modeler in your pocket, monitoring trends and spend.</p></li><li><p>Adaptive surveys: A survey that monitors respondent engagement and learns to ask the right questions to maintain high levels of attention.</p></li><li><p>Insights automation: Use to generate insights outputs for commonly generated reports. Use humans to &#8220;train&#8221; the model by providing positive feedback when the insights are good and negative feedback when the output is bad.</p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Maturing</p></li><li><p>Talent Pool: Moderate</p></li><li><p>Implementation Complexity: Somewhat difficult</p></li></ul></li><li><p><strong>Want to learn more?</strong> Reinforcement Learning is described well in <a href="https://youtu.be/T_X4XFwKX8k?si=bsWF1ErRRjPUinL8">this video</a>. </p></li></ul><h3>Self-Supervised Learning (SSL)</h3><p>This technique is the evolution of supervised learning. Supervised learning happens when you give a computer a problem with easy-to-understand data and an example of how that data can be used to solve a problem. This is the same way we learn math in school. We use well-"labeled" inputs (e.g. cost of apples=$2, apples sold per hour=4) and teach the student how to use those inputs to solve a problem (e.g. money earned in an hour). </p><p>Self-supervised learning (SSL) is a type of machine learning where a model learns to make predictions or understand data without explicit labeled outputs, unlike supervised learning, which relies on input-output pairs (examples). Instead, SSL creates its own &#8220;labels&#8221; from the data itself by designing pretext tasks: artificial problems the model solves to learn useful representations. These learned representations can then be used for downstream tasks, like classification or generation. 
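</p><p>If it helps to see the trick in code, here&#8217;s a toy Python sketch of a pretext task. The sentences are made up and the &#8220;model&#8221; is just a word-count table, but it shows how the data manufactures its own labels with no human labeling involved:</p>

```python
# Toy pretext task (illustrative only): the training "label" is
# manufactured from the data itself -- hide a word and predict it
# from the word that comes before it. No human labeling involved.
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the cat slept on the mat",
    "the dog sat on the rug",
]

# Count which word tends to follow each word in the raw, unlabeled text.
follows = {}
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows.setdefault(prev, Counter())[nxt] += 1

def fill_blank(prev_word):
    """Pretext task: guess the hidden word from the word before the blank."""
    return follows[prev_word].most_common(1)[0][0]

print(fill_blank("on"))  # the raw text itself supplies the answer: "the"
```

<p>Real SSL systems do this at a vastly larger scale with neural networks instead of count tables, but the label-manufacturing idea is the same.</p><p>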
SSL is the next evolution of learning (imagine learning math without being told ahead of time that the price of apples is important), offering a powerful way to leverage vast amounts of unlabeled data, which is abundant in the real world.</p><ul><li><p><strong>Best known examples:</strong> Siri responding to your specific voice or accent when you say "Hey Siri", TikTok being able to design a <em>For You</em> page based on your preferences for certain types of videos.</p></li><li><p><strong>What they do:</strong> Learn from data by making up puzzles. The more puzzles they make up, the more they understand the data and the more they can do with it. It&#8217;s how many modern models get smart.</p></li><li><p><strong>Where they struggle:</strong> Self-supervised learning struggles with high computational costs, requiring powerful hardware and significant energy to pre-train on large datasets. Until you do the pre-training, the models don't work very well. It also depends on carefully designed pretext tasks, as poorly chosen tasks can lead to weak or irrelevant representations that don&#8217;t transfer well to practical applications.</p></li><li><p><strong>The one sentence use case:</strong> This is the perfect technique for legacy businesses that didn't invest in structured data warehouses; if you have a lot of data that's not necessarily well described but could be used to create value, SSL is for you.</p></li><li><p><strong>How to use for insights?</strong> SSL is a great boon to the insights field for one particular reason - most real-world data is not well structured for use in AI applications. SSL algorithms create their own understanding of the data from the data itself. 
No humans needed, just data, which is great considering how much insights info is locked in proprietary systems or a million surveys with slightly different wording.</p><ul><li><p>Reconciling surveys: Take 500 custom brand tracking studies and create a unified understanding of how brands grow without needing to completely standardize.</p></li><li><p>Semantic search: Improve insights data retrieval by using an SSL model, which understands the context of the data and doesn&#8217;t need to match directly on keywords.</p></li><li><p>Segmentation on steroids: Create a new approach to customer segmentation which relies on an SSL model to figure out segments that fit together naturally through an understanding of the underlying data. Can be used to uncover new insights and findings.</p></li><li><p>Enhancing models: Take messy real-world data (e.g. thousands of focus groups, surveys, transcripts) and use an SSL model to add the context (labels) needed to make that data work for other AI applications (e.g. any other model on this list).</p></li><li><p>Anomaly detection: An SSL model learns the normal parameters of a dataset, so when things don&#8217;t fit those parameters, you have an anomaly to investigate.</p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Maturing</p></li><li><p>Talent Pool: Moderate</p></li><li><p>Implementation Complexity: Somewhat difficult</p></li></ul></li><li><p><strong>Want to learn more?</strong> This <a href="https://www.youtube.com/watch?v=iGJ1XSkCyU0">video</a> on Self-Supervised Learning provides a high-level overview. </p></li></ul><h3>Graph Neural Networks (GNNs)</h3><p>Graph Neural Networks (GNNs) are a class of machine learning models designed to process and analyze data structured as complex relationship graphs (think networks). These consist of nodes (representing entities, like people or molecules) and edges (representing relationships, like friendships or chemical bonds). 
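</p><p>For the curious, the core trick inside a GNN, called message passing, can be sketched in a few lines of toy Python. The graph and numbers below are made up; the point is simply that each node updates itself using its neighbors:</p>

```python
# Toy message-passing step (the heart of a GNN, heavily simplified):
# each node updates its value by averaging its own value with its
# neighbors', so information flows along the edges of the graph.

graph = {            # adjacency list: node -> neighbors (made-up example)
    "alice": ["bob"],
    "bob":   ["alice", "carol"],
    "carol": ["bob"],
}
signal = {"alice": 1.0, "bob": 0.0, "carol": 0.0}

def message_pass(graph, values):
    """One round: every node averages itself with its neighbors."""
    return {
        node: (values[node] + sum(values[n] for n in neighbors)) / (1 + len(neighbors))
        for node, neighbors in graph.items()
    }

signal = message_pass(graph, signal)
# After one round, bob has picked up some of alice's signal,
# but carol (two hops away) has not yet.
```

<p>Stack several of these rounds and every node ends up summarizing its whole neighborhood, which is what lets a GNN learn from connections.</p><p>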
GNNs excel at capturing the most complex types of relationships in graph-structured data, making them ideal for tasks like social network analysis, recommendation systems, molecular chemistry, and traffic prediction. GNNs are all about understanding connections, so they leverage that connectivity to learn.</p><ul><li><p><strong>Best examples:</strong> Drug discovery that is able to uncover new molecules to improve health. That friend algorithm in your social media app that does an uncanny job finding people you know. Google Maps&#8217; ability to predict the best route from origin to destination while taking into account traffic along the way.</p></li><li><p><strong>What they do:</strong> Understand relationships and connections in data, like how people or things are linked.</p></li><li><p><strong>Where they struggle:</strong> Graph Neural Networks struggle with scalability, as processing large graphs with millions of nodes is computationally expensive and often requires approximations. They also face issues deep in the graph, where node representations become too similar (a problem known as over-smoothing), and with handling dynamic graphs that change over time.</p></li><li><p><strong>The one sentence use case:</strong> You have data that's connected in some way and you want to leverage those connections to make predictions. </p></li><li><p><strong>How to use for insights?</strong> GNNs are a cornerstone model for deriving insights from connected data. They enable you to move from analyzing isolated data points to understanding the rich relationships that often drive key outcomes. Through learning the structure of connections, a GNN can unlock a deeper, more contextual understanding of complex relationships. Much of the insights field revolves around understanding relationships between various data points, so GNNs are a great application to be aware of.</p><ul><li><p>Advanced segmentation: Identify groups of customers with similar behaviors, preferences, or social connections. 
This gives you a more nuanced segmentation than attribute-based methods. Use it to understand what truly defines a consumer base.</p></li><li><p>Insights generation: Create novel insights based on difficult-to-spot relationships between various cohorts of data. When paired with LLMs, this allows for reasoning that can substantially complement the typical data analyst. </p></li><li><p>Influence tracking: Deploy in marketing activation across touchpoints to determine influential customers or products within the network that drive trends or purchases. </p></li><li><p>Fraud networks: Fraudulent behavior often isn&#8217;t just one bad actor or respondent; it can be one person puppeting multiple accounts. Use GNNs to identify networks of larger-scale fraudulent behavior (e.g. botnets).</p></li><li><p>Criticality analysis: Understand the parts of the marketing or supply ecosystem that are critical to the whole network functioning. When combined with other approaches, it can be used to scenario-plan for changes in strategy or unforeseen circumstances.</p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Maturing</p></li><li><p>Talent Pool: Scarce</p></li><li><p>Implementation Complexity: Somewhat difficult</p></li></ul></li><li><p><strong>Want to learn more?</strong> This <a href="https://youtu.be/epVW0_iVBX8?si=qFwXxzGFzP9oSDWf">video</a> is a good overview of GNNs with a particular focus on drug discovery.</p></li></ul><h3>Multimodal Models</h3><p>When large language models were first created, people loved them, but soon realized they wanted to interact with the models in the same way we interact with other people: not just through text but with photos, articles, music, spoken language, etc. From this demand came multimodal models. 
Multimodal models are advanced machine learning models designed to process, understand, and generate outputs from multiple types of data, such as text, images, audio, video, or even sensor data, within a single framework. Unlike traditional models that handle one modality (e.g., text-only language models or image-only convolutional neural networks), multimodal models integrate and reason across diverse data types, capturing richer context and enabling more human-like intelligence. These models are a cornerstone of modern AI, powering applications like virtual assistants, content generation, and autonomous systems.</p><ul><li><p><strong>Best examples:</strong> The newest version of Google Gemini: take a photo and the model tells you what is in the image.</p></li><li><p><strong>What they do:</strong> Mix different types of data&#8212;text, images, audio&#8212;and reason across them.</p></li><li><p><strong>Where they struggle:</strong> Multimodal models struggle with high computational demands, requiring significant resources to process and align multiple data types simultaneously. They also face challenges with data quality and cross-modal interference, where noisy or misaligned datasets and modality imbalances can lead to suboptimal performance. </p></li><li><p><strong>The one sentence use case:</strong> You want to build a solution that considers multiple forms of data and reasons about how to make use of that data to deliver value.</p></li></ul><ul><li><p><strong>How to use for insights?</strong> A natural evolution of the AI modeling space, multimodal models are relatively cutting-edge and offer a lot of value to the insights community. Understanding consumers requires models that can not only read text but also interpret tonality in speech and facial expressions. 
</p><ul><li><p>True sentiment analysis: Complete sentiment analysis of large-scale consumer data and better interpret edge cases such as irony, sarcasm, puns, memes, and satire.</p></li><li><p>Data contextualized ethnography: Combined analysis of consumer videos/photos of a use case together with quantitative data from consumer studies. E.g. people buy a lot of baking soda, and half the time it ends up in the fridge.</p></li><li><p>Next level CX: Combine data from site analytics, audio from call centers, user-generated content, and online posts/reviews to create real-time tracking of customer experience.</p></li><li><p>Multi-modal analysis: Analyze performance of products/advertising/media using visuals, audio, and consumer reaction data to disentangle the elements that have the greatest impact.</p></li><li><p>Advertising creative generation: Create new video and audio advertising built on consumer data and historical brand assets.</p></li><li><p>Enhanced visual search: Create a more engaging search experience that extends beyond text to images. E.g. find all advertising for &#8220;Coke&#8221; regardless of whether Coke is mentioned in the ad copy or just shown in an image or video. </p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Maturing</p></li><li><p>Talent Pool: Moderate</p></li><li><p>Implementation Complexity: Very difficult</p></li></ul></li><li><p><strong>Want to learn more?</strong> Most of us have seen multimodal AI, but here&#8217;s a more <a href="https://youtu.be/97n1u66Shgg?si=g6Oj6HtM6RprgG9a">nuanced overview</a> of what makes AI multimodal.</p></li></ul><h3>Neural Radiance Fields (NeRF)</h3><p>One of the coolest and newest arrivals on the scene, NeRFs are a cutting-edge class of machine learning models designed to reconstruct and render highly detailed 3D scenes from a collection of 2D images. 
When you look out your window, you can imagine what the other side of the tree you're looking at looks like; with NeRF and some training data, so can computers. NeRFs model the geometry and appearance of a scene by learning how light interacts with objects in 3D space. </p><ul><li><p><strong>Best examples:</strong> Rendering 3D versions of spaces from photos, computer gaming, image-to-video.</p></li><li><p><strong>What they do:</strong> Turn 2D images into 3D scenes.</p></li><li><p><strong>Where they struggle:</strong> NeRF models struggle with high computational costs and slow rendering, requiring powerful hardware and limiting real-time applications. They also need many high-quality images and falter with moving objects or changing lighting, lacking flexibility for dynamic or sparse-data scenarios.</p></li><li><p><strong>The one sentence use case:</strong> You have a lot of images and need to render them into interactive (virtual) content or videos.</p></li></ul><ul><li><p><strong>How to use for insights?</strong> NeRF models are likely the least directly applicable to the insights field. Currently, they are the darling of the game/video production and VR fields, but over time we&#8217;ll see more applications for the insights industry. For now there are only a couple of key applications.</p><ul><li><p>Rendering products: Take 2D versions of product images and create 3D models that enable consumers to interact with them more immersively.</p></li><li><p>VR/AR research: Create 3D environments that customers can immersively explore in VR or AR. Virtual store testing, shelf layout analysis, etc.</p></li><li><p>Immersive placement testing: Create 3D versions of products to place dynamically into video content to test novel placement options. 
For example: place a soda can virtually into 50 different video podcasts to test which generates the best consumer response.</p></li></ul></li><li><p><strong>Accessibility:</strong></p><ul><li><p>Model Maturity: Relatively New</p></li><li><p>Talent Pool: Scarce</p></li><li><p>Implementation Complexity: Very difficult</p></li></ul></li><li><p><strong>Want to learn more?</strong> One of the most visually interesting models. Check out this quick <a href="https://youtu.be/0eADSpAI_VM?si=QKSy054FrZFWXQpy">2-minute overview</a> of the tech.</p></li></ul><h2>Coming Soon: What&#8217;s Next In AI Models</h2><p>It&#8217;s hard to say what will be the next big tech in AI because the large AI innovators love to surprise us with mind-blowing new models that have been hidden in the lab for a while (looking at you, Veo 3). However, if we look at the insights industry and what&#8217;s starting to scale, there are a couple of key technologies to keep an eye on:</p><h3>Web-Scale Retrieval-Augmented Generation (RAG)</h3><p>When you go down the rabbit hole on AI systems, you quickly learn about the concept of the <em><a href="https://youtu.be/MmSMAYooRas?si=KSSp1z-rvWWZPych">context window</a></em>. So while RAG has been around for a while, web-scale RAG is the promised land. It&#8217;s like chatting with an AI that&#8217;s allowed to read <em>the entire internet</em> while it talks to you, and can remember <em>everything</em> it just read, even if it&#8217;s super long.</p><p><strong>Analogy:</strong> Imagine a librarian who can instantly scan <em>every</em> book, website, and document in the world, find just the right paragraphs, and hand them to a super-intelligent editor that crafts a perfect response on the fly.</p><p><strong>Why it matters:</strong> Most AI tools still rely on memory and pre-training. Web-scale RAG lets models pull in <em>fresh, relevant</em> knowledge in real time. 
This is the future of on-demand intelligence: think AI analysts that monitor markets, media, or consumers as things happen, and generate insight reports <em>instantly</em>.</p><p><strong>Where it&#8217;s happening:</strong> <a href="https://you.com/articles/you.com-launches-api-for-web-search-and-retrieval-augmented-generation-giving-ai-chatbots-and-large-language-models-real-time-web-access-and-unparalleled-accuracy">You.com</a>, <a href="https://vespa.ai/perplexity/">Perplexity</a>, <a href="https://openai.com/index/new-models-and-developer-products-announced-at-devday/">OpenAI&#8217;s GPT-4 Turbo</a> with long context, and efforts from Anthropic and Meta are racing to push the boundaries of large-scale retrieval and generation. Academic work is coming out of <a href="https://cs.stanford.edu/~myasu/blog/racm3/">Stanford</a>, the <a href="https://www.semanticscholar.org/paper/Scaling-Retrieval-Based-Language-Models-with-a-Shao-He/f25cc8cfb985d1bad1f1a070d74dd6171d2d028c">Allen Institute for AI</a>, and the <a href="https://arxiv.org/abs/2412.15235">University of Washington</a>.</p><h3>Neurosymbolic AI</h3><p>The challenge with a lot of AI today is that the models provide great answers but <a href="https://youtu.be/nMwiQE8Nsjc?si=_WydDPWntW2Hm58e">don&#8217;t always tell you how they got there</a>. This AI model mixes brain-like learning (neural networks) with rule-following logic (symbols). It can both learn from data <em>and</em> follow instructions, like doing your math homework and explaining each step. </p><p><strong>Analogy:</strong> Imagine a detective team where one partner is a brilliant gut-instinct type who notices patterns others miss (the neural network), and the other is a strict rule-follower who builds logical timelines and checks alibis (the symbolic logic engine). Separately, they&#8217;re good. 
Together, they solve cases with both creativity <em>and</em> clarity.</p><p><strong>Why it matters:</strong> Insights teams want both <em>smart automation</em> and <em>traceable logic</em>. Neurosymbolic AI offers more transparency, which is key for generating explainable results in regulated or high-stakes research contexts. This will be critical for automating insights generation.</p><p><strong>Where it&#8217;s happening:</strong> <a href="https://mitibmwatsonailab.mit.edu/category/neuro-symbolic-ai/">MIT-IBM Watson Lab</a>, Stanford's Human-Centered AI Institute, and <a href="https://www.darpa.mil/research/programs/assured-neuro-symbolic-learning-and-reasoning">DARPA's Explainable AI program</a> are major players here.</p><h3>Meta-Learning / Few-Shot Learning</h3><p>Looking through the examples above, you&#8217;ll quickly find that the big hurdle for implementation is almost always the training or tuning of a model. Most AI needs thousands of examples to learn. Meta-learning teaches AI how to learn quickly: show it just a few examples and it gets the idea. Being able to learn more quickly will open the door for companies with smaller datasets or only partially labeled data to take advantage of AI.</p><p><strong>Analogy:</strong> It&#8217;s like watching someone tie a shoelace once, and then figuring out how to do it yourself.</p><p><strong>Why it matters:</strong> This is game-changing for niche or low-data segments: a new product line, an emerging customer trend, or a small panel. 
It lets you deploy AI where you don&#8217;t have mountains of data.</p><p><strong>Where it&#8217;s happening:</strong> Research is booming at <a href="https://research.google/blog/announcing-meta-dataset-a-dataset-of-datasets-for-few-shot-learning/">Google Brain</a>, <a href="https://openreview.net/forum?id=b-ny3x071E5">DeepMind</a>, <a href="https://openai.com/index/reptile/">OpenAI</a>, and the <a href="https://www.cs.toronto.edu/~zemel/documents/prototypical_networks_nips_2017.pdf">University of Toronto</a>.</p><h3>Federated Learning at Scale</h3><p>These days, privacy is part of the public consciousness. <a href="https://iapp.org/news/a/data-protection-and-privacy-laws-now-in-effect-in-144-countries">IAPP estimates</a> suggest that 82% of the global population is covered by some form of privacy regulation. If AI models are more accurate with more data, how can you train them in a privacy-centered world? That&#8217;s where federated learning comes in. Instead of sending your data to one big server, the AI learns right on your device and shares just what it learned, not your personal info. It&#8217;s smart and private at the same time.</p><p><strong>Analogy:</strong> Picture a bunch of students studying for the same test, but they&#8217;re not allowed to share their notes. Instead, each one studies on their own, figures out what tricks work (like &#8220;draw a diagram for this part&#8221; or &#8220;use a rhyme for that list&#8221;), and sends <em>just those tips</em> to a group chat. Everyone gets smarter together without ever sharing their private notebooks.</p><p><strong>Why it matters:</strong> The insights field today is already working to manage privacy barriers. For customer research and behavioral data, federated learning lets you tap into edge data (like phones or surveys on apps) without violating privacy. It supports compliance while expanding your reach. 
</p><p><strong>Where it&#8217;s happening:</strong> <a href="https://research.google/blog/federated-learning-collaborative-machine-learning-without-centralized-training-data/">Google</a> (for Android), <a href="https://machinelearning.apple.com/research/learning-with-privacy-at-scale">Apple</a> (on-device AI), and <a href="https://blog.ml.cmu.edu/2019/11/12/federated-learning-challenges-methods-and-future-directions/">Carnegie Mellon University</a> are pioneering large-scale federated systems.</p><h3>Intent-Based Modeling</h3><p>Insights are best consumed when they feed into the decision-making engine of a business. Historically, this has been a difficult process because insights practitioners often focus only on the facts of the data and try not to over-interpret findings. This kind of AI opens new doors by guessing what someone&#8217;s really trying to do, even if their actions are messy. It&#8217;s less about &#8220;what they clicked&#8221; and more about &#8220;what they wanted.&#8221;</p><p><strong>Analogy:</strong> Imagine getting in your car at 8:00 a.m. on a Tuesday. You don&#8217;t type anything in, but your GPS already suggests the fastest route to your office. Why? Because based on your past patterns, it <em>knows</em> where you&#8217;re probably headed even if you haven&#8217;t told it. That&#8217;s intent-based modeling: predicting your goal from your behavior, not just waiting for your interaction.</p><p><strong>Why it matters:</strong> This is gold for customer journey analysis, churn prediction, and targeting. 
It helps brands serve real needs instead of just reacting to facts.</p><p><strong>Where it&#8217;s happening:</strong> <a href="https://business.adobe.com/blog/the-latest/adobe-announces-new-ai-capabilities-to-personalize-digital-experiences-in-adobe-experience-cloud">Adobe Research</a>, <a href="https://youtu.be/Gvf8oQDcLsQ?si=slPTprTidOWe0p_n">Stanford HCI</a>, and <a href="https://arxiv.org/abs/2505.18943">Meta&#8217;s behavioral science</a> labs are pushing this work, often tied to personalization and UX modeling.</p>]]></content:encoded></item><item><title><![CDATA[AI Is Coming For Your Job: Just Not In The Way You Think]]></title><description><![CDATA[Why Creativity Is the Last Superpower]]></description><link>https://www.greymatterunloaded.com/p/ai-is-coming-for-your-job-just-not</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/ai-is-coming-for-your-job-just-not</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Wed, 28 May 2025 13:30:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ywa8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ywa8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ywa8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!Ywa8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Ywa8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Ywa8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ywa8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3582360,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/164015975?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Ywa8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Ywa8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Ywa8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Ywa8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc25e2eda-eec8-4311-bd95-6734a153295e_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I recently went through a thought experiment on what defines talent. It's an interesting thing to ponder in this age of AI.</p><p>When someone said &#8220;that person&#8217;s talented,&#8221; I think what they really meant was: <em>this person has both skill and creativity</em>. A killer combo. You didn&#8217;t just know how to do the thing; you knew how to do it with flair, with edge, with insight.</p><p>Take photography. A truly talented person didn&#8217;t just &#8220;point and shoot,&#8221; but &#8220;framed the moment, delivered mood with lighting, or told a story.&#8221; The same holds true even in the legal profession: sure, you can memorize the statutes, but talent was the ability to bend them (ethically!) into clever arguments that swayed a courtroom. Rinse and repeat for Wall Street traders, coders, architects. Talent was knowing the craft (skill) <em>and</em> seeing around the corners (creativity).</p><p>Fast forward to now. The last three years have seen AI go from novelty to necessity. And the next three? If it doesn&#8217;t scare you a little, you&#8217;re not paying attention. We&#8217;re living through a seismic shift in how skill is valued and how fast it can be replicated.</p><p>But as I rounded out my thought experiment, I developed a different take: AI hasn&#8217;t destroyed talent. It&#8217;s just reshaped it.</p><h3>The AI Skill Explosion</h3><p>Today&#8217;s AI is a skill vending machine. Want to generate a moody photo of a robot smoking a cigar on Mars? <a href="https://chatgpt.com/share/682ca8f0-4a1c-800a-adb9-7019a4265a3a">Cool, two prompts and a click</a>. 
Need a chunk of <a href="https://x.com/AndrewYNg/status/1915421117998874899">JavaScript</a>? AI's got you. <a href="https://x.com/heykahn/status/1645436462144581637">Legal summaries? Financial modeling?</a> <a href="https://x.com/rohanpaul_ai/status/1924215605638615300">Radiology scans</a>? All on tap.</p><p>If it feels like everything you spent years mastering is suddenly available to anyone with an API key... you&#8217;re not wrong.</p><p>And yes, sometimes AI doesn&#8217;t just keep up, it outruns you. It can be faster, cheaper, maybe even more <a href="https://www.nature.com/articles/s41586-019-1799-6">accurate</a>. I&#8217;ve seen firsthand how AI can blow past bottlenecks and put seasoned experts on the defensive. Especially in fields where the job is mainly &#8220;do (insert task) efficiently, repeat.&#8221;</p><h3>AI's Creativity Blindspot?</h3><p>Here&#8217;s the kicker: AI might <em>do</em> the task, but it still doesn&#8217;t <em><a href="https://arxiv.org/abs/2503.23327">imagine</a></em> the outcome.</p><p>Case in point: image generation. These tools are powerful, but drop them in front of someone with no visual instinct, and you&#8217;ll get a mess of aesthetic spaghetti. It takes creative vision, a strong sense of taste, clear intent, and storytelling ability to turn AI from a parlor trick into a tool of expression.</p><p>Same goes for software. A vibe coder with a great idea can now spin up a prototype without a technical cofounder. But without a sense of design, utility, or user psychology? It still flops.</p><h3>The Great Talent Flip</h3><p>So what&#8217;s actually happening?</p><ol><li><p><strong>Talented people are getting supercharged.</strong> The best creatives, engineers, analysts: these folks are <em>faster and more powerful</em> now. 
Their ability to translate ideas into execution just leveled up.</p></li><li><p><strong>Creatives without skills are breaking in.</strong> Suddenly, you don&#8217;t need a CS degree to build an app or an MFA to make compelling visuals. AI closes the skill gap for anyone with a clear vision and creativity.</p></li></ol><p>That&#8217;s wild. We&#8217;re watching the barriers crash down in real time. And depending on which side of the barrier you&#8217;re standing on, it&#8217;s either thrilling or terrifying.</p><h3>So Who Should Be Nervous?</h3><p>Here&#8217;s the uncomfortable truth.</p><p>There&#8217;s a group of professionals out there who&#8217;ve built careers on deep domain knowledge and tool-specific mastery, with little room or need for creativity. Think: Salesforce admins who can click the right boxes, X-ray techs who follow protocol, spreadsheet warriors in middle management.</p><p>These roles, high on precision, low on innovation, are squarely in AI&#8217;s sights.</p><p>And look, I&#8217;m not saying these folks are talentless. But I <em>am</em> saying that if your edge at work is all skill and no creativity... now&#8217;s a good time to pivot.</p><h3>Time for a Talent Check-In</h3><p>Here's my recipe for the future. Ask yourself:</p><ul><li><p><strong>Does my job rely on creative problem-solving?</strong></p></li><li><p><strong>Do I regularly imagine new ways to get better outcomes?</strong></p></li><li><p><strong>Am I valued for my ideas, or just my execution?</strong></p></li></ul><p>If you&#8217;re checking "yes" to the first two, you&#8217;re golden. Lean in on AI augmentation, find ways to supercharge what you do with AI.</p><p>If you are answering "yes" to the third item here, it&#8217;s time to double down on the one thing AI still can&#8217;t fake: human imagination. Find ways to be creative. 
Even better, become the leader in using AI to help take over executing your daily tasks, and make yourself invaluable to the organization as a leader in AI-driven transformation.</p><p>The key takeaway is this: AI knows the past way better than you do. It&#8217;s trained on history, patterns, precedent. But <em>we</em> write the future. And the folks who can dream up something new, who can see around corners? They're still the ones setting the direction, not just pushing the buttons.</p><p>So yeah, skill still matters. But in this new world, <strong>creativity is the last superpower</strong>.</p>]]></content:encoded></item><item><title><![CDATA[Insights is Ripe for Consolidation]]></title><description><![CDATA[Joining forces is the smartest plan to wait out the economy]]></description><link>https://www.greymatterunloaded.com/p/insights-is-ripe-for-consolidation</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/insights-is-ripe-for-consolidation</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Fri, 23 May 2025 13:30:33 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!Taxg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Taxg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Taxg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Taxg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Taxg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Taxg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Taxg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2137704,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/163401277?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Taxg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Taxg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Taxg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Taxg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03f9fe33-77ef-442b-8e39-01c2857d491a_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>"It was the best of times, it was the worst of times." I've always wanted to use this quote, and there could be no better quote to describe the state of the insights industry in early 2025. The market is full of optimism about how AI can transform the insights and data space for the better. New companies are popping up every day, solving once-impossible problems using AI. At the same time, the traditional insights community is facing headwinds from multiple directions. This article focuses on the latter, offering a point of view on where things stand, what options companies have, and where we might go next.</p><p>First, let's build a simple mental framework to make this easier to digest. 
At a high level, I'm breaking companies down based on size and market position:</p><ul><li><p><strong>Multinational Strategics</strong>: Think IPSOS, Kantar, Nielsen, and NIQ/GfK. These are the big players with deep customer relationships and a sizable global footprint. Most are over the $1 billion revenue mark, and they&#8217;ve been around forever.</p></li><li><p><strong>Mid-Tier Incumbents</strong>: These are companies that have grown past start-up mode and now pull in over $100 million. They&#8217;re chipping away at the multinationals with more agile offers. Examples include Dynata, Toluna Group, YouGov, Sago, and Cint.</p></li><li><p><strong>Boutiques &amp; Specialists</strong>: These are sub-$100 million companies. Some are boutique research firms focused on client service. Others are specialists with niche products in areas like ad testing or B2B research. Think UpWave, MFour, Suzy, and many more.</p></li></ul><p>These categories aren&#8217;t unique to insights, but how capital flows between them has changed dramatically in the past five years.</p><p>It used to be that a start-up with a cool idea would grow, threaten the bigger players, and eventually get acquired. This exit path, powered by cheap capital and enthusiastic acquirers, was pretty standard. Private equity followed close behind, buying into everything from GfK to Dynata. The money was easy, and deals were everywhere.</p><h2>The Slow Big Co's</h2><p>Fast forward to today. Capital markets are tight. Interest rates are the highest they&#8217;ve been in over a decade. The big players are sitting on debt, and IPO dreams are fading. Kantar left WPP in 2019, tried spinning off Kantar Media, and has now shelved IPO plans in favor of selling off units. 
Bain, its backer, has walked away from a public exit (the <a href="https://www.linkedin.com/posts/eric-weinberg-40536912_what-kantars-recent-news-means-for-the-sector-activity-7309692929294368768-VgP9?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAAAfKcByTa6QcvLfDpub_5IsLnlPcAuDZQ">Stage18 team</a> has an excellent analysis of this). Meanwhile, NielsenIQ, backed by Advent, is optimistically prepping for a $10 billion IPO this summer. IPSOS, still publicly traded, has seen a 30% drop in stock price over the last year, despite strong fundamentals. They&#8217;ll unveil new plans in their Horizon 2030 roadmap.</p><p>So that&#8217;s one player looking to exit, one chasing an IPO, and one trying to raise its stock price. None are out shopping. Maybe things change after summer, depending on how the market responds, but for companies looking to exit soon, that&#8217;s a long wait.</p><h2>Mid-tier Pressure</h2><p>In the mid-tier, we see a different story. Dynata pushed out a debt maturity with a pre-pack deal, buying some time. Sago (formerly Schlesinger) is backed by Gauge Capital, which has held them since 2015. Toluna has been with Verlinvest since 2011 and continues to integrate acquisitions. Cint, once a rising star, has lost 87% of its value since going public. It&#8217;s more focused on fundamentals than on buying.</p><p>These firms might want to acquire, but they&#8217;ll focus on tech and AI assets that can give them leverage against the multinationals. Every acquisition now has to prove it can deliver ROI fast.</p><h2>Start-up Malaise</h2><p>Which brings us to the boutiques and specialists. Let&#8217;s start with boutiques.</p><p>These firms live on their relationships and consultative services. They&#8217;re not chasing scale or platforms; they&#8217;re focused on service. While their business models don&#8217;t scream &#8220;venture scale,&#8221; some still attract private equity money. 
Historically, they exited by becoming part of a larger firm, offering expertise or geographic expansion. That&#8217;s not happening right now. Top and mid-tier buyers are either distracted or doubling down on AI.</p><p>Now for the specialists. Many were born between 2016 and 2021, during a boom in digitized research and automation. They&#8217;re the cool kids, doing clever things with metering, sampling, brand tracking, and ad analytics. But here&#8217;s the problem: they each solve one piece of the puzzle. Brands love them, but their addressable market is limited. Growth eventually slows. And even when their product is better, big brands often go with a familiar name because procurement likes safe bets with the established vendors.</p><p>Left alone, these firms might disrupt the industry. But they don&#8217;t have the luxury of time. Investors, especially those with venture or growth equity stakes, are looking for the exit.</p><h2>So...</h2><p>What now? The top end of the market is distracted, the middle tier is cautious, and the little guys are left hanging. There are so many small firms where you think, &#8220;Surely they&#8217;ve been trying to sell for years.&#8221; Yes, their boards are tired of waiting. Meanwhile, new AI-native companies are rocketing past them.</p><p>Investors are also out of time. The typical hold period for VC is 5 to 7 years. For PE, it&#8217;s 7 to 10. That window has closed for many firms. Despite client wins and product advances, most haven&#8217;t scaled enough for a traditional exit.</p><h2>The Strategic Buyers Aren&#8217;t Coming to the Rescue</h2><p>Strategic acquirers (big agencies, research giants, and platform players) are cautious. Capital is expensive. Integration is hard. Most ResTech firms are too narrow or too small to be compelling on their own. Even the promising ones end up in a weird middle ground: too good for an acqui-hire, too small for a scale deal. And when deals happen, they favor the buyer. 
Investors don&#8217;t usually get the payday they hoped for.</p><h2>The Case for Cashless Mergers</h2><p>Instead of waiting for lifeboats or trying to raise another round, it&#8217;s time to consider cashless mergers. Combining forces could:</p><ul><li><p>Achieve the revenue scale needed to attract new investors</p></li><li><p>Combine talent and reduce duplicate costs</p></li><li><p>Offer more complete solutions to clients</p></li><li><p>Strengthen negotiating power with data vendors and platforms</p></li></ul><p>These deals could also give tired investors a graceful exit. They can roll equity into a new entity or structure partial liquidity tied to future milestones. Let's face it: most of the players in the ResTech or boutique space are duplicating efforts trying to create new Generative AI solutions. That's the last thing the market needs now. Combining forces through talent and equity consolidation will allow fewer solutions to be better scoped, developed, and pushed to a larger group of potential clients. A win for everyone involved.</p><h2>From Fragmentation to Federation</h2><p>This won&#8217;t be easy. Merging two subscale firms doesn&#8217;t guarantee success. You need cultural fit, strong leadership, and solid execution. But the alternative is slow decline, and that&#8217;s already happening.</p><p>Founders should be reaching out to peers. Pair a sampling platform with an analytics layer. Combine data collection with delivery tools. Build federated entities. Share back offices. Merge product roadmaps.</p><h2>Scale Matters More than Innovation</h2><p>Innovation and boutiques in Market Research aren&#8217;t dying. But the market they operate in is evolving. The AI-powered future needs platforms that blend research chops with data smarts, compliance, and real decision-making tools.</p><p>The winners won&#8217;t be the ones with the biggest Series B, or the ones with the coolest innovation. 
They&#8217;ll be the ones who reshape the map, the ones who achieve enough scale to start getting noticed by big clients. Cashless mergers aren&#8217;t desperate moves. Right now they&#8217;re smart, strategic, and overdue.</p>]]></content:encoded></item><item><title><![CDATA[Déjà Vu in Tech: The Agentic AI Trap]]></title><description><![CDATA[How to Break the Loop and Build Something That Lasts]]></description><link>https://www.greymatterunloaded.com/p/deja-vu-in-tech-the-agentic-ai-trap</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/deja-vu-in-tech-the-agentic-ai-trap</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Tue, 20 May 2025 13:30:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MTi-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!MTi-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MTi-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!MTi-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!MTi-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!MTi-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MTi-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png" width="728" height="485.5" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:3523388,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/163931615?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MTi-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!MTi-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!MTi-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!MTi-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2c0b84d-fb4d-4523-90b6-679e5b3653bd_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Ever feel like tech trends are on a loop? </p><p>Remember when everyone and their grandma wanted to be a portal? You'd visit some homepage jam-packed with weather, news, stock tickers, and a cute puppy of the day. That is, until Google came along with a single search bar and ate everyone's lunch. Portals were supposed to be the ultimate digital destination, the homepage you woke up to and the last page you closed at night. AOL, Yahoo!, MSN: those were the cool kids, and everyone wanted to be them. But then, simplicity won. A clean interface, no clutter, and quick results. Google understood something fundamental: convenience beats clutter every single time.</p><p>Fast forward a few years, and the buzz shifted from portals to platforms. 
Everything became a platform overnight: advertising platforms, consumer platforms, even "marketplace platforms" that were basically fancy portals with a login page and some APIs. Platforms promised to be the central hub connecting businesses, customers, and third-party developers. Suddenly everyone was building APIs, integrating third-party solutions, and proclaiming themselves indispensable. Yet, much like portals before them, success was elusive. Why? Because a platform isn't just something you decide to be overnight. It has to evolve naturally. It has to solve real problems better than anyone else before it earns the right to become indispensable.</p><p>Look at the platform champions today. Salesforce didn't start by saying, "Hey, we're a platform!" They were busy becoming the best CRM tool around. Facebook was just a way to connect with classmates before it evolved into the backbone of social media advertising. Amazon didn't wake up and declare itself the king of cloud computing; they first built the most convenient way to buy books online. In each case, the company started by solving a specific, critical problem exceptionally well. Becoming a platform was just the next logical step.</p><p>And now, we're in the era of Agentic AI. Companies are tripping over themselves to build AI Agents that do everything from scheduling meetings and managing your emails to acting as therapists. The air is thick with ambition and venture capital cash. But here's the kicker: this Agentic race feels suspiciously familiar. It's like I've seen this movie before. My POV&#8230; just like portals and platforms before them, most Agentic hopefuls are destined for the tech scrap heap.</p><p>Here's the thing: if you're thinking of jumping into the Agentic AI game in 2025, pause. Think. Reflect. Because the odds aren't exactly stacked in your favor. 
There's already a handful of billion-dollar companies throwing around enough cash and hype to make your modest startup look like a lemonade stand. Google, Microsoft, OpenAI, Amazon: they've all got their agents out in the wild, backed by the kind of budgets that could buy small islands. The landscape is already crowded, noisy, and competitive. If you're jumping into the space, what's your edge?</p><p>I'm sure most businesses would agree with the odds but recognize there's a penalty for not doing anything, and I agree. So don't chuck your AI dreams out the window just yet. History gives us a pretty clear blueprint on how to win, or at least survive, in this latest tech frenzy. Platforms didn't win by screaming, "I'm a platform." Instead, they quietly dominated the core functionality first. AI Agents will follow the same path. Your best bet isn't competing head-to-head with Manus or OpenAI's Operator. Instead, create a specialized, undeniably awesome AI Agent that makes users' lives noticeably better. Focus on specific tasks or niches, things that big, generalized AI agents don't handle perfectly. Maybe you're perfecting an agent for healthcare appointment scheduling or an AI expert in analyzing specialized financial data. Specificity and mastery of a niche are your competitive advantages.</p><p>Then, and only then, can you think about the broader integrations. Realize that unless you've been building AI systems for the last decade, your opportunity to take on the behemoths is slim. Instead, leverage new tools and standards, like Google's A2A protocol or MCP servers, to smoothly integrate your solutions where users already are. MCP servers (<a href="https://modelcontextprotocol.io/introduction">Model Context Protocol</a>) give agents and AI applications a standardized, secure way to access data and tools from different systems, enabling interoperability without one-off integrations. 
Google's A2A (<a href="https://google.github.io/A2A/">Agent-to-Agent</a>) protocol, on the other hand, facilitates standardized communication between different AI agents, allowing them to interact efficiently regardless of their proprietary frameworks.</p><p>This is how you avoid becoming just another failed agent chasing unattainable dreams. Interoperability is key: your specialized agent should integrate easily into the larger ecosystems that will inevitably dominate corporate desktops and smartphones.</p><p>This integration is critical. Think about it: users are creatures of habit. They're not going to switch agents just because your marketing pitch is slick. They&#8217;ll stick with what's already integrated into their workflow unless you offer something undeniably better, simpler, or significantly more valuable. And even then, the path of least resistance usually wins.</p><p>Meeting customers exactly where they hang out is always a smarter play than trying to drag them kicking and screaming to your shiny new Agentic platform. It's not about being flashy; it's about being seamlessly useful. Having an agent is cool, but being useful? That's timeless.</p><p>And speaking of timelessness, remember that technology is fundamentally about improving human workflows. Don't get caught chasing the hype; instead, build solutions deeply rooted in real-world utility. Solve genuine, specific problems. Your users don't care if you're powered by the latest LLM or quantum computing; they just want things to work better than before. They want fewer headaches, less complexity, and more results.</p><p>History suggests the race will have plenty of casualties. 
But it also tells us clearly: focus on the user, perfect the niche, integrate smartly, and maybe, just maybe, you'll find yourself riding the next wave rather than being swept away by it.</p>]]></content:encoded></item><item><title><![CDATA[Let's Stop Talking About ResTech. 
]]></title><description><![CDATA[Time to stop apologizing for selling services.]]></description><link>https://www.greymatterunloaded.com/p/restech-has-hit-its-saturation-point</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/restech-has-hit-its-saturation-point</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Wed, 14 May 2025 13:15:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!S50L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!S50L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!S50L!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!S50L!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!S50L!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!S50L!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!S50L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2359762,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/163146125?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!S50L!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!S50L!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!S50L!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!S50L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2070f3ef-e04f-45ec-b53d-dbb568cb7b6f_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>No one has yet tried fitting a jet engine onto a tricycle; it might look fast, but it&#8217;s not built for that kind of ride. That&#8217;s what the research industry keeps doing with ResTech. We keep dressing up what is, at its core, a services business and trying to sell it as a SaaS rocket ship. 
Don&#8217;t get me wrong, technology has a huge role to play in our industry, but the current obsession with positioning ourselves as "tech-first" is leading us down a path that ignores what clients actually want and how this industry really operates.</p><p>Let&#8217;s start with the good: ResTech <em>sounds</em> amazing to investors. They hear "tech" and immediately see scalable margins, recurring revenue, and some mythical path to a unicorn valuation. The pitch goes something like: "We&#8217;re like Salesforce, but for surveys." But here&#8217;s the dirty little secret: most of the so-called ResTech companies? They&#8217;re still getting around half their revenue from services.</p><h2>The ResTech Frenzy</h2><p>It's not hard to see when the concept of ResTech became interesting to investors. It started with SurveyMonkey. This little DIY survey platform had a dozen employees, cared little about selling to big Fortune 500 clients, and was still clearing millions a month. They were the epitome of ResTech, a pure-play SaaS platform for research with the discipline to focus on just building the tech.</p><p>Next was Qualtrics, which did the same thing but bigger, with more enterprise features and a massive inside sales operation. They shot to the moon on the back of double-digit growth. With an IPO in sight, SAP decided they needed the company as part of their portfolio and bought it for $8B in 2018. Three years later, they spun it off to complete that IPO with a new $15B market cap.</p><p>ResTech was validated as something to pursue, and it began drawing the eye of venture and private equity investors. In the wake of the Qualtrics acquisition, we saw Cint jump from privately funded to an IPO in 2021 on the Swedish stock exchange that pegged its market cap at $1.2B, definitely a tech valuation. Then in late December 2021, Cint bought out their biggest competitor Lucid for around $1.1B in cash and stock. 
This took their market cap up to around $1.6B.</p><p>Clearly, market research was fertile ground for investment, and over the past 20 years we've seen some interesting companies launched on the back of this unlocking of capital (Suzy, Zappi, etc.). However, just like any good hero's journey, our hero (ResTech) was hit with a series of trials.</p><h2>Clients Love DIY Pricing, Hate Doing the Work</h2><p>Procurement and finance teams loved the new round of SaaS vendors. They were easier to buy, more cost-effective, the accounting was predictable (in theory), and felt more turnkey. That said, it turns out that when you give clients the tools to do the job themselves, a surprising number of them say, "Thanks, but can you actually just do it for me?" We&#8217;ve seen this again and again. Self-serve tools drive down price expectations, and then the same clients turn around and ask for managed services on top. It's a weird hybrid model where the margin math doesn&#8217;t really work unless you&#8217;ve got tight controls and stellar execution.</p><p>The only ResTech vendor I've ever seen knock the SaaS model out of the park was SurveyMonkey back in the day. However, they too were not immune to the call of easy money. In 2009, they took money from Bain and Spectrum Equity, went public in 2018 at a valuation of $2B, and renamed themselves Momentive in 2021. By January of the year following IPO, they'd fallen to a $1.7B market cap and eventually were taken private again in 2023 by STG at a $1.5B valuation. Rinse and repeat for all the high fliers. All things considered, SurveyMonkey and Qualtrics are successful businesses, but that&#8217;s because they pivoted to become MarTech vendors. </p><p>And yet, we keep building for the mythical SaaS client. We keep staffing up customer success teams to do what was once called "account service." 
We&#8217;re pretending this is product-led growth, when it&#8217;s really people-led delivery, just with prettier dashboards.</p><h2>What Clients Actually Want: Outcomes, Not Interfaces</h2><p>Instead of obsessing over how much of our business is tech vs. services, we should be asking a different question: <em>What problem is the client trying to solve?</em> Not, "How do we get them to adopt our platform," but, "What outcome are they desperate for, and how can we get them there faster?"</p><p>This is where a use-case-first approach makes way more sense. Clients don&#8217;t care if it&#8217;s a fancy dashboard or a scrappy team of analysts behind the scenes. What they do care about is getting help from their insights teams:</p><ul><li><p>Understanding what their consumers think</p></li><li><p>Getting to an insight faster than their competitor</p></li><li><p>Making smart decisions before the next board meeting</p></li></ul><p>Client-side insights teams have to be jacks of all trades, and they're unlikely to be experts in how best to run ad tests, concept tests, or segmentations, or in the methodological nuances of a syndicated service. This doesn't mean they're not smart; in fact, the opposite is often true. Their roles, however, are such that they need support from specialty research vendors to help make them smarter. If the answer involves a survey tool, great. If it involves a strategic workshop and a follow-up call, also great, but the core of what they want is a trusted team backing them up and making their workload lighter.</p><h2>The Services Core Is Not a Bug, It&#8217;s the Feature.</h2><p>Given this, we need to stop apologizing for and hiding the services side of this industry. The best firms have strong human capital because that&#8217;s what clients are really buying: expertise.
Sure, we can't thrive on outdated technology, and our platforms should make us more efficient and create some leverage, but most of the meaningful differentiation still happens in how smart people solve tough problems. The danger is in trying to build like a tech company while still selling like a consultancy. </p><p>I've been part of many companies that struggle with where to put the next dollar: into tech for a payoff in two years that isn't guaranteed, or into another person selling an existing service to the big clients. Doing the former requires conviction from the entire management team and buy-in from investors. Doing the latter drives short-term growth. What do you think always happens? </p><p>So instead of asking, "How do we increase ARR," let&#8217;s ask, "How do we deliver scalable services that align with what clients are actually willing to pay for?"</p><h2>A Better Way to Think About the Industry</h2><p>What if we flipped the script? What if we stopped categorizing firms by whether they are "tech" or "traditional," and started mapping them to the use cases they serve? Think of categories like:</p><ul><li><p>Companies that help you <em>find and target audiences</em></p></li><li><p>Companies that help you <em>ask better questions and analyze responses</em></p></li><li><p>Companies that help you <em>make decisions and predict outcomes</em></p></li></ul><p>Then group firms by the client problems they solve, not by whether they call themselves a platform or research technology.</p><h2>Let Investors Keep Us Honest</h2><p>This isn&#8217;t an argument to ignore business model innovation or take a lower multiple on valuation. Staffing-based businesses have growth ceilings, and investors have every right to push for more scalable models. But scalability doesn&#8217;t mean everything has to be SaaS. Every founder loves the idea of building something once and selling it to millions of customers, but that's not a great fit for the insights space.
</p><p>There&#8217;s such a thing as scalable services, and now with Agentic AI we're on the doorstep of a huge boom in AI-powered managed services. This new business model isn't chasing annual recurring revenue (ARR), but average customer value (ACV). Traditionally, this model might be considered less attractive, but with Agentic AI powering the back end, the cost model scales similarly to SaaS. </p><p>Now I can hear some of you calling me out here: "Yeah, but an agentic AI-powered insights firm is still technically ResTech, isn't it?" Short answer: yes. An automated managed services business can't do what it does without technology. However, the fundamental distinction I'm making with my argument is that with ResTech we're assuming ARR is king and adding staff is bad. That's not true for a services business. Most ResTech founders looking at an exit will do everything in their power to describe their business as a tech business (higher multiple) vs. a services business (lower multiple). I think we need to re-engineer this narrative and make services attractive again, and yes, we'll need tech to get there. </p><p>A great step forward for the industry would be to stop chasing the mirage of being SaaS and embrace the underlying need by building businesses that actually help make clients smarter, not just give them a fancy new dashboard.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.greymatterunloaded.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Grey Matter Unloaded!
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Redefining Brand Lift ]]></title><description><![CDATA[A New Approach to Always-On, Syndicated, Hybrid-AI Measurement]]></description><link>https://www.greymatterunloaded.com/p/redefining-brand-lift</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/redefining-brand-lift</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Wed, 07 May 2025 15:30:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!DYBS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DYBS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DYBS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!DYBS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!DYBS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!DYBS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DYBS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2811817,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/160615341?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!DYBS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!DYBS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!DYBS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!DYBS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0854a4-7786-4cb4-9e7d-245e6bdda113_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Brand lift measurement, while not a new category of research, became commonplace in the 1990s with the rise of online advertising. It began in 1997, when the fledgling IAB and Millward Brown (now Kantar) teamed up on a landmark research project to validate the effectiveness of internet advertising for brand marketers. The study was ambitious in its scope and methodology: it involved over 17,000 participants and was the first to apply a rigorous experimental design at scale in digital advertising. The final report (<a href="https://personal.utdallas.edu/~liebowit/knowledge_goods/iabstudy.pdf">link</a>) confirmed a link between exposure and impact, leading to the widespread adoption of brand lift as a method to assess advertising effectiveness.</p><p>I was fortunate to be part of that early wave. Back then, running a study meant hard-coding HTML ads with page redirects, primitive Web 1.0 tactics. Netscape was still king. A few months later, we were using JavaScript and ad servers, and I was telling friends to ditch Altavista and check out this cool little search engine called Google. I recall a presentation Rex Briggs (who was running Millward Brown Interactive at the time) gave in which he used a coffin to symbolize the death of the click-through as a metric. But despite the advances in tech and the rapid pace of change, brand measurement always struggled to compete with metrics like click-through rates (then still in the double digits) because it just didn&#8217;t scale.</p><p>Today, brand lift remains a specialized and costly tool, constrained by its reliance on survey-based methods and limited in its scalability.
While the industry has matured into a $500&#8211;$700M TAM, it lags behind more ubiquitous, programmatic metrics such as ROAS, CPC, and CPV, primarily because those metrics are available across all impressions. By contrast, brand lift remains a premium metric, limited to major campaigns due to its reliance on human participation and survey sampling. Even the most recent campaign measurement technologies, such as engagement, are scaling faster and more ubiquitously than brand lift.</p><p>As digital advertising has become more fragmented and always-on, traditional approaches to brand measurement have failed to keep up. They&#8217;re costly, slow, unscalable, and out of step with real-time marketing operations. I&#8217;ve already written here about the failure of brand measurement, but don&#8217;t take that as a suggestion that we discard traditional brand lift entirely; it is, however, time for another step change.</p><p>What Brand Lift needs, first and foremost, to become part of the media ecosystem is scale. Those of you who have read my piece on consumer panels know that I believe the future of measurement will be hybrid solutions, and I think this is the perfect place to propose such an approach. </p><p>Enter a new model: <strong>hybrid-AI, syndicated, always-on brand lift measurement</strong>. You can read my proposal <a href="https://www.greymatterunloaded.com/p/syndicated-brand-lift-measurement">here</a>.
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.greymatterunloaded.com/p/syndicated-brand-lift-measurement&quot;,&quot;text&quot;:&quot;Read My Proposed Approach To Brand Lift&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.greymatterunloaded.com/p/syndicated-brand-lift-measurement"><span>Read My Proposed Approach To Brand Lift</span></a></p><p>Powered by AI, behavioral data, and predictive modeling, this approach offers marketers a real-time, scalable alternative to gauge brand performance. What you&#8217;ll find in the document is a proposal for a measurement protocol&#8212;intended to provoke conversation, test assumptions, and ultimately evolve how the industry thinks about upper-funnel metrics. </p><p>It&#8217;s likely this is far from what the market will get, and it&#8217;s assuredly very imperfect, but I&#8217;m casting this into the wind hoping to spark some much needed change. </p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.greymatterunloaded.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Grey Matter Unloaded! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Syndicated Brand Lift Measurement Protocol]]></title><description><![CDATA[A Proposal For a New Measurement Approach]]></description><link>https://www.greymatterunloaded.com/p/syndicated-brand-lift-measurement</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/syndicated-brand-lift-measurement</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Tue, 06 May 2025 17:42:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65a45860-7741-4117-b0cf-08a6172b4649_600x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Executive Summary</strong></h2><p>This document proposes a Syndicated Brand Lift Measurement Protocol based on a hybrid-AI (synthetic + human), impression level measurement solution. By leveraging machine learned models applied to campaign data (ad exposure, creative quality, media context), this approach predicts brand lift without relying on traditional, slow, and costly survey methods. The core objective is to provide an always on, scalable, and cost effective way to evaluate brand impact, delivering standardized metrics integrable with media optimization platforms. The protocol involves three key data sources (Exposure &amp; Audience, Creative Effectiveness, Media Engagement &amp; Quality) feeding a central predictive model, all operating within privacy preserving clean room environments. 
While acknowledging significant operational and technical challenges, this framework offers a path toward more timely, granular, and actionable brand measurement, ultimately enabling optimization for brand outcomes and establishing a potential industry standard.</p><div><hr></div><h2><strong>Vision</strong></h2><p>A synthetic, impression level brand measurement solution that uses machine learned models to predict lift from campaign data. This eliminates the need for direct survey based input and enables always-on, scalable, and cost effective evaluation of brand impact.</p><div><hr></div><h2><strong>Objectives</strong></h2><p>The objectives of this initiative are fourfold. </p><ol><li><p>To eliminate dependence on human survey data for real-time brand lift. </p></li><li><p>To provide standardized, impression level metrics that can integrate with media optimization platforms. </p></li><li><p>To establish scalable data sources that leverage ad server logs, creative scoring, and normative engagement models. </p></li><li><p>To build a shared syndicated/human model architecture that evolves with industry collaboration.</p></li></ol><div><hr></div><h2><strong>Challenges with Current Brand Measurement Methods</strong></h2><p>Despite decades of investment and iteration, traditional brand measurement methods continue to face significant limitations.</p><p>Surveys remain the dominant methodology, but they introduce latency and cost. Recruiting participants, designing and fielding surveys, and analyzing results can take weeks, far too slow for the pace of modern, always-on campaigns. Survey responses are also prone to bias and sampling error, particularly when targeting niche or high value segments.</p><p>Ad tracking, typically reliant on cookies or pixels, has become increasingly fragile. Cross device identity is fragmented, signal loss is growing in privacy centric environments, and platform walled gardens restrict data sharing. 
These factors collectively reduce the fidelity of exposure data, making attribution and effect modeling more speculative.</p><p>Reporting infrastructures, often built on siloed or batch processed data, fail to provide marketers with actionable insight in real time. Legacy dashboards surface metrics that are lagging, aggregated, and often disconnected from brand performance. Meanwhile, campaign managers are forced to interpret success through proxies like viewability, engagement, or reach, and these proxies don't reveal whether brand perception has actually improved.</p><p>Together, these issues contribute to a system that is slow, opaque, and disconnected from the realities of media execution today. A more adaptive, data rich, and scalable alternative is urgently needed.</p><div><hr></div><h2><strong>Foundations of Brand Lift Measurement</strong></h2><p>At its core, brand lift is driven by two primary forces: the effectiveness of the creative and the quality of the media environment in which it runs. These two components form the foundation of any serious brand impact assessment and should be treated as interdependent, yet distinctly measurable, pillars.</p><p>Creative effectiveness is widely theorized, and supported by a broad base of empirical evidence, to account for 70&#8211;80% of overall brand performance in advertising. This includes how well the creative captures attention, conveys the brand message, and evokes the desired emotional or cognitive response. Elements such as brand cues, storytelling, pacing, and audio visual quality all play a significant role. Creative quality may be perceived differently depending on the audience, cultural context, or format, but its core attributes remain consistent over time.
Yet when well executed, strong creative has the ability to significantly drive brand metrics such as recall, favorability, and purchase intent.</p><p>The second factor is media, which acts as a moderator of creative effectiveness. Media determines whether the right audiences are reached, at the right time, and in the right mindset. A compelling creative that appears in a low attention or cluttered environment may see its potential squandered. Conversely, a moderately effective ad placed in a high attention, premium context may outperform expectations. Metrics such as viewability, attention scores, dwell time, and engagement are all indicators of the quality of the media context. Furthermore, media also influences frequency and sequencing, two important dimensions that affect cumulative brand lift over time. Unlike creative, however, media quality is in constant flux. It is subject to an array of variables that can shift quickly and unpredictably, from programming choices and content adjacency to changes in platform management, ad policy, or even broader cultural and news cycles. The same publisher or placement may perform very differently from one week to the next, influenced by changes in user behavior, media narratives, or trending content. This volatility makes continuous media quality evaluation not just important but essential for accurate brand measurement.</p><p>Understanding brand lift, therefore, requires an approach that considers not just exposure but the dynamic interplay between what the consumer sees and where they see it. Any model that omits either side of this equation risks misattributing performance and failing to deliver actionable insights.</p><div><hr></div><h2><strong>Connecting Foundations to Function: Summary</strong></h2><p>The prior section established that brand lift is primarily driven by the dual forces of creative quality and media context. 
Building on that foundation, the three core data sources introduced here, Exposure &amp; Audience, Creative Effectiveness, and Media Engagement &amp; Quality, form the basis for modeling and predicting brand lift in a scalable, synthetic framework. Each dataset represents a critical vector of influence and contributes to a holistic view of how brand perception is shaped across digital campaigns.</p><p>The Exposure &amp; Audience dataset provides the who, what, where, and when of advertising delivery. It captures the essential metadata that anchors every impression and ensures the model can reconstruct campaign delivery patterns. The Creative Effectiveness dataset scores the intrinsic quality of the ad content, how well it is likely to perform in driving attention and emotional resonance, based on its structure, messaging, and design. The Media Engagement &amp; Quality dataset adds critical context by evaluating the environment in which the creative appears, helping the model understand whether and how that environment enhances or suppresses effectiveness.</p><p>These datasets are not isolated. They operate in concert to reflect the real world conditions in which advertising succeeds or fails. When structured and integrated correctly, they provide the resolution, nuance, and predictive power needed to model brand lift without relying on surveys. They are the analytical scaffolding that supports an always-on measurement approach attuned to the realities of today&#8217;s media landscape.</p><div><hr></div><h2><strong>Core Product Components</strong></h2><h4><strong>1. Exposure &amp; Audience Data Source</strong></h4><p>The goal of this source is to capture impression level data that identifies where ads were delivered, to whom, and in what context. 
Inputs include ad server logs detailing publisher, placement, and creative IDs; impression metadata such as viewability, duration, fraud filtering; and estimated audience reach and frequency drawn from ad servers or clean room sources.</p><p>To be most effective, this pipeline can and should operate within clean room environments such as Google ADH, Amazon Marketing Cloud, or other privacy compliant data sharing infrastructures. It should normalize formats and data schemas across a variety of DSPs, SSPs, and publishers, which often differ in log structure, taxonomy, and granularity. Furthermore, it should support deterministic or probabilistic mapping of creative assets to impressions in near real time to preserve campaign fidelity.</p><p>Challenges in executing this source pipeline are multifaceted. First, access to ad server logs is increasingly restricted due to privacy regulations, data governance policies, and proprietary constraints from platforms. Second, creative ID resolution is inconsistent, many environments strip or obscure these identifiers, making linkage to creative scoring or metadata more difficult. Third, data fragmentation across programmatic, social, and direct buy environments leads to incomplete or duplicated exposure records. Fourth, time synchronization issues across logs (e.g., impression timestamps from different platforms) introduce noise into sequencing and frequency capping analysis. Finally, data refresh intervals and latency in clean room query environments may limit near real-time performance.</p><p>Fortunately, there is growing industry momentum to address many of these issues. Brands and agencies are already investing in infrastructure and standardization efforts aimed at improving data interoperability, asset tracking, and clean room practices. These initiatives, including centralized creative repositories, enhanced ad server integrations, and clean room schema alignment, can serve as valuable building blocks. 
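To make the normalization requirement described above concrete, here is a minimal, purely illustrative sketch of mapping two differently structured impression logs onto one shared schema. All platform names, field names, and thresholds here are my own assumptions for illustration, not any real DSP/SSP log format:

```python
# Hypothetical sketch: normalizing impression logs from two ad platforms
# into one shared schema. "dsp_a" / "ssp_b" and all field names are
# invented for illustration only.
from dataclasses import dataclass


@dataclass
class Impression:
    platform: str
    placement_id: str
    creative_id: str
    timestamp_utc: int  # epoch seconds, to align sequencing across platforms
    viewable: bool


def normalize(platform: str, record: dict) -> Impression:
    """Map a platform-specific log row onto the shared schema."""
    if platform == "dsp_a":
        return Impression(
            platform="dsp_a",
            placement_id=record["plc"],
            creative_id=record["crid"],
            timestamp_utc=record["ts"],
            # assume this platform reports a 0-1 viewability ratio
            viewable=record["viewability"] >= 0.5,
        )
    if platform == "ssp_b":
        return Impression(
            platform="ssp_b",
            placement_id=record["placement"],
            creative_id=record["creative"],
            timestamp_utc=record["event_time_ms"] // 1000,  # ms -> s
            viewable=record["in_view"],
        )
    raise ValueError(f"unknown platform: {platform}")


imp = normalize(
    "ssp_b",
    {"placement": "p9", "creative": "c42",
     "event_time_ms": 1700000000000, "in_view": True},
)
```

In a real deployment this mapping would run as a clean-room query rather than local Python, but the core task is the same: resolving each platform's taxonomy onto one schema so that impressions become comparable.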
Rather than reinventing the wheel, this measurement protocol should aim to align with and build upon the best practices emerging from those investments, accelerating adoption while reducing implementation friction.</p><p>In addition to aligning with these ongoing initiatives, it is ideal that the core brand lift measurement application runs natively within clean room environments themselves. By embedding the model directly within platforms such as Snowflake, leveraging Snowflake Native Apps or similar constructs, the solution can operate on sensitive, row level impression data without requiring data to be extracted or moved. This approach helps mitigate privacy and compliance challenges, reduces latency, and ensures compatibility with existing data governance protocols adopted by major brands, agencies, and media owners.</p><p>While this proposal is primarily framed with agency and brand use cases in mind, it is equally applicable to media sellers. In fact, media sellers may find implementation to be substantially more straightforward due to their direct ownership of the necessary impression, creative, and contextual data. This access reduces the technical and legal friction typically associated with integrating disparate data sources across multiple platforms. Because the internal data environments of media owners are often already structured for measurement, the complexities addressed in this document, such as data normalization and asset resolution, are less severe. That said, for the sake of broader applicability and to tackle the more complex scenario, this proposal focuses on the agency side implementation. 
Still, the framework and methodology are fully extensible to media sellers and can be readily adapted to support publisher led brand measurement solutions.</p><p>Addressing each of these technical, legal, and operational constraints is essential to ensure the Exposure &amp; Audience data source is not only robust and scalable, but also compliant and interoperable across a fragmented ecosystem. These complexities cannot be solved in isolation; they require strategic alignment with the broader industry and the systems already in place within clean rooms and marketing infrastructure. Doing so will establish a strong foundation for consistent, impression level data capture that underpins reliable and scalable brand lift modeling.</p><p></p><h4><strong>2. Creative Effectiveness Data Source</strong></h4><p>The goal of this source is to assign creative quality scores using validated machine learned models to estimate expected brand effect. It draws from creative assets, video, display, native, synthetic scoring models such as Kantar LinkAI, RealEyes, and other emerging AI driven systems, as well as historical benchmarks derived from human tested creative studies.</p><p>In recent years, the efficacy of synthetic creative evaluation has improved dramatically. Modern models can accurately predict attention, recall, and even brand favorability lift using only the content and structural elements of the ad. These systems leverage deep neural networks trained on thousands of past campaigns, enabling them to recognize features and patterns strongly associated with brand outcomes. Many creative scoring solutions now demonstrate strong correlation with traditional human panels, often delivering comparable rankings with greater speed, scale, and consistency.</p><p>However, the biggest challenge is no longer methodological, it is operational. The infrastructure for applying these models across live campaigns in a standardized, scalable fashion is still lacking. 
There are inconsistencies in how creative assets are stored, identified, and shared across platforms and organizations. Real-time access to final creative versions is rare, and creative metadata is frequently missing, unstructured, or disconnected from impression level data. As a result, many campaigns are still evaluated using incomplete or outdated creative inputs.</p><p>Solving this problem will require executional discipline and collaboration. It is likely that meaningful progress will depend on deep partnerships with Agency Holding Companies and Creative Management Platforms. These entities play a central role in creative production and distribution workflows and are best positioned to ensure access to the right creative assets at the right time. Standardizing asset ID systems, an area where the IAB has recently made promising strides, can help address this fragmentation. Through initiatives like the IAB Tech Lab&#8217;s efforts to define a universal creative identifier, the industry is moving toward a shared framework for tracking, referencing, and scoring creative assets consistently across platforms. These developments create an encouraging foundation for interoperability and will be critical in enabling seamless integration of synthetic scoring into campaign workflows, ensuring timely ingestion of assets into scoring models, and feeding creative quality scores back into planning and measurement environments. Without this operational foundation, the predictive power of synthetic creative evaluation cannot be fully realized.</p><p>Despite these challenges, the opportunity is significant. Synthetic creative quality scoring is not a theoretical capability; it is a proven asset waiting to be fully integrated into modern brand measurement. Doing so will close a major gap in understanding what drives brand performance and unlock new opportunities to optimize creative at scale.</p><p></p><h4><strong>3. 
Media Engagement &amp; Quality Data Source</strong></h4><p>The Media Engagement &amp; Quality data source is designed to quantify the environmental context in which advertising is delivered and assess how that context amplifies, or diminishes, the performance of creative assets. This dimension of the model is critical to understanding brand lift holistically, as it captures the nuances of platform dynamics, audience behavior, and content adjacencies that directly influence campaign effectiveness.</p><p>This is also where the most novel and high leverage synthetic modeling is likely to occur. While creative quality scoring has already achieved a high level of maturity, the variability and unpredictability of media environments present a far more complex challenge for prediction. Media engagement is deeply influenced by context, platform behavior, user mood, and cultural timing, all of which are fluid and rarely standardized. As such, modeling media quality will require innovative techniques in causal inference, time series analysis, and potentially even reinforcement learning to identify the latent structures that modulate performance over time. Success in this area will mark a significant leap forward in making synthetic brand lift measurement more adaptive and precise.</p><p>Inputs into this dataset typically include historical brand lift studies that can be matched to specific media environments, publisher and platform level engagement scores, and contextual metadata such as page content, ad clutter, screen size, and scroll velocity. 
Third party data providers such as IAS, TVision, Adelaide, and others have made it possible to normalize and standardize many of these variables across environments, offering a more stable and comparative foundation for measurement.</p><p>If we return to the foundational model of ad effectiveness, where brand lift is the result of creative quality moderated by media context, then the implications of accurate synthetic scoring become even more powerful. If we can reliably estimate a creative quality score, and if we accept that creative quality is stable over time, then any observed variation in performance across media environments can be attributed to the media context itself. In other words, once the creative signal is isolated and held constant, media quality scores can be derived by subtracting the predicted creative effect from the total observed brand impact. This approach unlocks the potential for empirical, impression level benchmarking of media effectiveness using only modeled data, provided the inputs are accurate and clean. Importantly, the resulting media quality score may be either positive or negative, meaning it can either enhance or detract from the overall brand performance of a campaign. In this way, media is not just a passive vessel for creative, it is an active participant that can amplify or dilute impact depending on the audience's engagement and the environment&#8217;s attentional dynamics.</p><p>To function properly, this data source must support normalization of creative impact across historical benchmarks to isolate media quality as a distinct driver. This enables the development of environment specific multipliers that can be used to tune brand lift projections based on where and how ads are delivered. For example, a 15 second video in a high attention CTV environment may carry more brand impact weight than the same ad in a skippable mobile pre-roll context. 
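</p><p>To make the arithmetic above concrete, here is a minimal, illustrative sketch of how a media quality score falls out as the residual once the predicted creative effect is held constant. This is a toy example with hypothetical numbers, not an implementation of any particular vendor model:</p>

```python
# Illustrative sketch: deriving a media quality score as the residual
# brand lift after removing the predicted creative contribution.
# All values below are hypothetical lift points, not real benchmarks.

def media_quality_score(observed_lift: float, predicted_creative_effect: float) -> float:
    """Residual lift attributed to the media environment.

    Positive values mean the environment amplified the creative;
    negative values mean it diluted it.
    """
    return observed_lift - predicted_creative_effect

# Same creative (predicted effect of 3.0 lift points) run in two environments:
ctv = media_quality_score(observed_lift=4.2, predicted_creative_effect=3.0)             # positive residual
mobile_preroll = media_quality_score(observed_lift=2.1, predicted_creative_effect=3.0)  # negative residual
```

<p>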
Additionally, this dataset must support the modeling of diminishing returns curves and optimal frequency thresholds, helping determine when repeated exposures transition from effective reinforcement to waste.</p><p>Key challenges stem from the inherently dynamic nature of media environments. Publisher layouts, ad loads, user behavior, and platform policies are constantly shifting. Media quality is not static; it is a moving target that can vary week to week based on everything from editorial changes and algorithm updates to shifts in audience sentiment and cultural trends. Normative benchmarks, therefore, must be continuously updated and contextualized.</p><p>There is also a legitimate concern about the long term viability of the new human based brand lift studies needed to fuel these models. If synthetic models succeed, one would expect budgets to shift and fewer campaigns to be measured traditionally, shrinking the pool of fresh training data. To mitigate this, strategic partnerships with third party verification vendors and publishers will be essential, not only to gain ongoing access to proprietary engagement scores and attention signals, but to align on standards for how those signals are interpreted and applied.</p><p>As synthetic approaches become more reliable, the role of human data collection is likely to evolve rather than disappear. Rather than relying on broad, panel based studies to power measurement systems, human based research will increasingly shift toward more precise, targeted forms of data collection. This "thinner" signal, collected directly from media sellers through attention panels, experimental designs, or in-platform instrumentation, will be better suited for model training and validation. These targeted studies can serve as truth sets for calibration, improving the accuracy of synthetic models without requiring the scale or cost of traditional panels. 
As publishers grow more sophisticated in their measurement capabilities, they are well positioned to take on a more active role in providing this type of high resolution signal, shifting the source of validation from third party survey panels to the environments where campaigns are actually running.</p><p>At the same time, it is important to acknowledge a foundational challenge facing all synthetic measurement approaches: the data used to train models is not collected at census scale. Most normative datasets are built from a limited set of panel studies, opt-in user groups, or historical campaign records that reflect only a fraction of the full advertising landscape. As a result, the training data used to calibrate media quality scores may not fully represent the diversity of audiences, platforms, or creative executions currently in market. This creates a fundamental tension: models are being asked to predict census level outcomes using non-census data.</p><p>Compounding this issue is the age of much of the normative data in circulation. The media ecosystem is in constant flux, shaped by shifts in user behavior, platform algorithms, ad load policies, and cultural relevance. Older studies, even those just a few years old, can reflect outdated assumptions about media engagement or platform value. If used without contextualization or adjustment, these datasets risk introducing structural bias into the model.</p><p>To address this, the measurement framework must include mechanisms for dynamic updating, retraining, and validation. Synthetic models must be regularly recalibrated against fresh empirical signals, whether from controlled experiments, platform instrumentation, or human verified performance benchmarks. 
Doing so ensures the system evolves with the media environment it seeks to measure and maintains relevance over time.</p><p>There are a range of modeling techniques that can be applied to isolate media quality while still accounting for the compounding effects of reach and multi platform exposure. These include hierarchical modeling approaches, such as multilevel regression with post-stratification (MRP), which allow media level variables to be contextualized within broader audience level and creative level effects. Uplift modeling techniques and synthetic control methods can also be used to isolate marginal media impacts when multiple variables are in play. Furthermore, causal inference frameworks, especially those that use temporal ordering and counterfactual estimation, can help parse out whether brand lift is accumulating due to media quality or merely repeated exposure.</p><p>Importantly, these methods must reflect how media actually works in the real world. Consumers rarely see a campaign in a single channel or on a single platform. Therefore, the model must account for the additive, or even multiplicative, effects of sequential or simultaneous exposure across multiple sites. This means controlling for both intra-channel and inter-channel frequency effects and attributing lift appropriately. Sophisticated path modeling or media mix modeling logic, when layered with synthetic data inputs, can help make these estimates more precise and actionable.</p><p>Despite these challenges, the opportunity is substantial. Media context is the most under leveraged variable in the brand measurement stack. With the right data structure and modeling logic, this dataset can transform passive exposure logs into actionable, predictive inputs. 
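</p><p>The diminishing returns curves and optimal frequency thresholds referenced earlier can be illustrated with a toy saturating exposure-response model. The exponential functional form and every parameter below are assumptions chosen for illustration, not fitted values from any real campaign:</p>

```python
import math

# Illustrative sketch of a diminishing-returns frequency model.
# Parameters (max_lift, k, threshold) are hypothetical, not calibrated.

def cumulative_lift(frequency: int, max_lift: float = 5.0, k: float = 0.6) -> float:
    """Total brand lift (in points) after `frequency` exposures, saturating at max_lift."""
    return max_lift * (1.0 - math.exp(-k * frequency))

def optimal_frequency(threshold: float = 0.2, **curve_params) -> int:
    """First exposure count whose marginal lift falls below `threshold` points,
    i.e. where reinforcement starts transitioning into waste."""
    f = 1
    while cumulative_lift(f, **curve_params) - cumulative_lift(f - 1, **curve_params) >= threshold:
        f += 1
    return f
```

<p>Under these assumed parameters, each additional exposure adds progressively less lift, and the marginal gain of a sixth exposure drops below the 0.2 point threshold, which is exactly the kind of cutoff a planner would use to cap frequency.</p><p>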
By integrating these insights directly into the core model, marketers can begin to understand not just <em>if</em> an ad worked, but <em>why</em> it worked in a specific environment, enabling smarter planning, buying, and optimization decisions at scale.</p><p>While much of the value in media quality modeling stems from novel methods and diverse inputs, its effectiveness depends heavily on the ability to structure this data with precision. This data source is designed to generate normalized media quality indices that measure how different publishers and platforms moderate creative performance. It draws from a diverse range of inputs: historical brand lift studies matched with media data, publisher level engagement scores, contextual metadata such as page content and ad clutter, and supplemental attention or quality scores from third-party sources like IAS, TVision, and Adelaide.</p><p>However, turning this data into a stable, reliable model of media quality requires more than aggregation. The creative impact must be normalized across these historical studies to isolate the distinct influence of the media environment. Only by stripping out the creative contribution can media specific effects be accurately observed. From there, modeling must account for the complexity of cross platform combinations, frequency effects, and evolving audience behavior, calculating dynamic multipliers that reflect the unique ways media environments shape ad effectiveness over time.</p><p>This task is not without challenges. Publisher environments are inherently dynamic, influenced by rapid shifts in layout, ad density, content strategy, and user experience. Normative data is perishable: what reflected platform quality a year ago may be outdated today. Additionally, as synthetic models gain traction, the flow of new human based brand lift studies may dwindle, making it harder to replenish and recalibrate these benchmarks over time. 
Addressing these issues will be essential to maintaining accuracy and relevance in media quality scoring models.</p><div><hr></div><h3><strong>Core Model Design</strong></h3><p>At the heart of the synthetic brand lift approach lies a compositional model that treats advertising impact as the result of creative strength, modulated by the quality of the media context and influenced by campaign specific delivery conditions. The central formula is:</p><div class="pullquote"><p>Brand Lift Effect = Creative Quality Score &#215; (Media Quality Score &#215; Campaign Specific Multipliers).</p></div><p>This formulation enables impression level prediction of brand outcomes by breaking down the constituent parts of what makes an impression effective. The creative score captures the intrinsic potential of the ad to drive attention and recall; the media quality score adjusts that potential based on how conducive the environment is to brand engagement; and the campaign specific multipliers introduce granularity around timing, targeting, and specific audience effects/saturation.</p><p>To operationalize this model, each impression is scored in real time based on its associated creative asset, the media environment in which it was delivered, and context specific variables such as format, frequency, and sequencing. The system aggregates these impression level estimates upward to compute placement level, publisher level, and campaign level brand lift scores. These results can then be surfaced to marketers through dashboards, reporting tools, or optimization platforms.</p><p>A key feature of the model is its ability to perform impression level attribution, assigning incremental brand impact back to specific tactics or placements. 
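</p><p>The compositional formula above, scored per impression and rolled up to the placement level, can be sketched as follows. The field names, scores, and multipliers here are hypothetical placeholders to show the shape of the computation, not a real schema:</p>

```python
from dataclasses import dataclass
from collections import defaultdict

# Illustrative sketch of the core model:
# Brand Lift Effect = Creative Quality Score x (Media Quality Score x Campaign Multipliers)
# All names and values are hypothetical.

@dataclass
class Impression:
    placement: str
    creative_quality: float     # intrinsic potential of the ad
    media_quality: float        # environment multiplier (>1 amplifies, <1 dilutes)
    campaign_multiplier: float  # timing / targeting / frequency adjustments

def impression_lift(imp: Impression) -> float:
    return imp.creative_quality * (imp.media_quality * imp.campaign_multiplier)

def placement_lift(impressions: list[Impression]) -> dict[str, float]:
    """Aggregate impression level estimates up to the placement level."""
    totals: dict[str, float] = defaultdict(float)
    for imp in impressions:
        totals[imp.placement] += impression_lift(imp)
    return dict(totals)

imps = [
    Impression("ctv_preroll", 3.0, 1.4, 1.0),    # high-attention CTV environment
    Impression("mobile_banner", 3.0, 0.7, 0.9),  # lower-attention mobile environment
]
scores = placement_lift(imps)  # same creative, different modeled outcomes per placement
```

<p>The same aggregation step can be repeated from placement to publisher to campaign level, which is what makes impression level attribution of incremental brand impact possible.</p><p>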
This enables more precise budgeting decisions and unlocks a pathway to optimize not just for efficiency or viewability, but for actual brand outcomes.</p><p>The model must be modular enough to support a wide range of media types or formats, and flexible enough to evolve as new data becomes available. For example, in high attention environments like connected TV, the media quality score may carry more weight. In contrast, for lower attention environments like mobile banner ads, frequency multipliers or sequencing effects may be more significant. This adaptability is crucial to reflect the non-linear nature of brand building across platforms.</p><p>Tuning and calibration will be critical. Initial model weights may be set using historical brand lift studies and human evaluated creative scores, but these must be regularly recalibrated using ongoing validation signals from experimental designs, panel surveys, or observed market lift. Maintaining alignment between model predictions and observed outcomes ensures long term trust and utility.</p><p>The modeling methodology itself can vary based on available data and use case complexity. In simple contexts, a regression based scoring framework may suffice. In more complex, multi-touch environments, ensemble models, causal forests, or structural causal models may be more appropriate. What&#8217;s essential is not the specific technique but the transparency and interpretability of results: marketers must be able to understand and act on what the model tells them.</p><p>Ultimately, the core model is both a prediction engine and a decision support system. It transforms fragmented, high volume campaign data into clear signals of what&#8217;s working and why, serving as the foundation for a modern, scalable, and fully synthetic brand measurement solution.</p><blockquote><p>If you&#8217;ve made it this far into this document - thanks! 
What follows below is a more technical and governance overview which you can skip if you&#8217;re not technically inclined. Feel free to jump to the section called <strong><a href="https://www.greymatterunloaded.com/i/160614923/strategic-impact">Strategic Impact</a></strong>.</p></blockquote><div><hr></div><h3><strong>Technical Infrastructure</strong></h3><p>The successful deployment of synthetic brand lift measurement depends heavily on a robust and interoperable technical infrastructure, one that can handle the scale, sensitivity, and velocity of modern advertising data. This infrastructure ideally resides within clean room environments to ensure privacy safe computation. Platforms such as Snowflake, BigQuery, and Databricks are already widely used for data warehousing and analytics, and they are increasingly being adopted as environments where modeling applications can be natively deployed. Snowflake Native Apps, in particular, offer the ability to run code and inference directly on customer data without ever moving it, a critical capability in a privacy first landscape.</p><p>Model management must be equipped to support a continuous cycle of training, validation, and deployment. Historical brand lift studies and creative test datasets serve as the foundation for initial model training, while campaign data streamed in near real time allows for active refinement. MLOps frameworks should be embedded into the architecture, supporting model versioning, testing, deployment, and rollback. Monitoring infrastructure should be in place to track drift, anomalies, and divergence from expected outcomes.</p><p>Output delivery needs to be tightly integrated into the activation and reporting workflows marketers already use. This includes API endpoints that return impression level brand lift scores, batch pipelines that export aggregated lift metrics by publisher, placement, or tactic, and data feeds that can be merged into DSPs, CDPs, or BI tools like Looker and Tableau. 
Realtime integrations with tools such as The Trade Desk or Google's Display &amp; Video 360 (DV360) would allow for brand outcomes to be optimized mid flight.</p><p>Moreover, the technical stack must be built to interface with the existing measurement ecosystem. This includes alignment with standard identity frameworks (such as UID2.0, ID5, or RampID), and seamless interoperability with attention providers and verification platforms. Companies like TVision, Adelaide, DoubleVerify, and Integral Ad Science are already supplying impression level engagement and quality metrics that can enrich brand lift modeling. Similarly, platforms such as VideoAmp, Samba TV, and iSpot offer exposure level TV and cross-screen data that can be used to validate or supplement synthetic estimates.</p><p>Finally, the system should be cloud native, horizontally scalable, and built to operate efficiently at massive data volumes. Ingesting and scoring billions of impressions daily requires a highly performant data pipeline that supports parallel processing, query optimization, and data partitioning across large scale environments, something not often done by market research companies. Without this scalability, the promise of synthetic, always on brand lift measurement cannot be realized. </p><div><hr></div><h3><strong>Validation and Governance</strong></h3><p>While this proposal outlines a scalable, synthetic solution that could materially increase the value of brand measurement across the advertising ecosystem, it is not intended to replace traditional methods outright. Rather, it should be seen as a complementary system, one that sits on top of existing measurement approaches to expand their reach, frequency, and granularity. Synthetic brand lift offers a way to estimate brand outcomes at the impression level and in near real time, but its credibility and utility are deeply dependent on ongoing human validation.</p><p>Validation should therefore be handled with care and rigor. 
One effective strategy is to run parallel human survey measurements, which can be used to calibrate synthetic estimates and provide a benchmark for model accuracy. These parallel studies can be designed to focus on key campaign types, audience segments, or creative formats to ensure the model performs well across varied contexts. In addition, validation should include comparisons against holdout test/control designs wherever feasible, especially in environments with well structured media delivery.</p><p>Over time, a network of trusted calibration studies can serve as an evolving &#8220;truth layer&#8221; for the model, used to adjust weights, tune sensitivity, and track model drift. These studies don&#8217;t need to be frequent or large scale, but they do need to be smartly designed and regularly refreshed to ensure alignment with current market realities.</p><p>Governance must support transparency in how the model is built, trained, and maintained. This includes documentation of modeling assumptions, openness around training data sources, and third-party review where appropriate. Industry collaboration will be essential, especially if the goal is to move toward a syndicated model architecture that benefits the broader ecosystem. Participating stakeholders, brands, agencies, media sellers, and vendors, must align on how brand lift is defined, validated, and operationalized.</p><p>In this way, the synthetic measurement framework becomes not just a technical solution, but a collaborative effort to advance the science of brand effectiveness without compromising on rigor or trust.</p><div><hr></div><h2><strong>Strategic Impact</strong></h2><p>This proposal directly addresses the long standing limitations of traditional brand lift measurement described earlier: high costs, latency, limited scalability, and an over reliance on survey based methods. 
By shifting to a synthetic, impression level framework, this approach fundamentally redefines what is possible in brand effectiveness measurement.</p><p>The immediate impact of this model is to unlock visibility into brand performance at a scale and speed that legacy systems cannot match. Instead of waiting weeks for post-campaign survey results, marketers can now access continuous, real time insight into what creative is working, where, and why. Costly and isolated research efforts can be replaced with a common infrastructure that is syndicated across brands, publishers, and platforms, turning what was once a bespoke measurement effort into a shared, adaptive utility.</p><p>This also enables true optimization. With granular brand lift scores tied to individual impressions, tactics, and media placements, marketers can start to optimize for brand outcomes just as they do for performance metrics like clicks and conversions. Brand building can move from strategic guesswork to operational precision.</p><p>Over the long term, this approach creates the foundation for an industry standard protocol, enabling consistent benchmarking across categories and establishing a shared outcome currency for evaluating media value. Brand lift can finally become a core KPI, integrated into media planning, buying, and attribution models alongside performance metrics.</p><p>This evolution won't eliminate the need for human validation, nor is it designed to. Instead, it raises the ceiling on what brand measurement can do, extending its reach, increasing its frequency, and grounding it in the same programmatic infrastructure that powers modern advertising. In doing so, it bridges the gap between what brand marketers want and what measurement systems have historically been able to deliver.</p><div><hr></div><h2><strong>Secondary Applications</strong></h2><p>The data pipelines proposed herein offer the opportunity for secondary monetization by measurement firms. 
Potential applications include forecasting brand lift using planned media and creative inputs before campaigns launch, benchmarking publisher performance to support outcome based deal structuring, and identifying underperforming creatives in pre-launch QA. One of the most promising implications of this pre-campaign forecasting capability is the opportunity to create a new, predictive currency metric, one that reflects the expected brand impact of media environments before spend is committed. By modeling projected lift based on the combination of media context and creative quality, marketers and media sellers can align on performance expectations upfront.</p><p>This shifts the value conversation in fundamental ways. Instead of media being judged post hoc through outdated proxy metrics or isolated survey results, it can be evaluated in advance using standardized brand outcome forecasts. That allows media sellers to be held accountable for the one thing they truly control: the attentional and contextual environment in which ads appear. It also creates a more equitable measurement landscape, ensuring that a poor performing creative execution doesn&#8217;t unfairly penalize a high quality media placement.</p><p>Over time, this predictive currency could evolve into a foundational planning tool, shaping how media is priced, sold, and optimized. It opens the door for publishers to compete not just on reach or CPM, but on expected brand impact per impression, bringing brand outcomes into the core economic logic of digital media transactions.</p><div><hr></div><h2><strong>Getting Started: A Blueprint for Implementation</strong></h2><p>While the long term vision of syndicated, always on synthetic brand lift measurement is ambitious, there are low friction ways to begin demonstrating value today. 
The simplest path forward involves partnering with a single publisher and a known advertiser, creating a controlled environment to validate the core model and operational workflow.</p><p>Start with a media partner that has strong engagement metrics and robust clean room infrastructure, such as a premium news publisher or connected TV platform. Choose an advertiser with a consistent creative strategy and a history of brand lift measurement. Ideally, the campaign should feature creative assets that have already been scored using AI based quality tools and be delivered across formats that are well instrumented for media attention signals.</p><p>Use this initial engagement to map impressions to creative IDs, extract attention and engagement signals from the publisher environment, and model brand lift synthetically using the proposed framework. To validate outcomes, run a parallel human based brand lift survey to benchmark the synthetic estimates.</p><p>This first test will create a microcosm of the larger ecosystem and help identify operational gaps, calibration opportunities, and integration challenges. From there, the framework can be expanded to additional publishers, advertisers, and platforms, scaling with confidence and a track record of proof.</p><p>This kind of pilot approach not only accelerates learning but helps socialize the methodology and build stakeholder trust through real world performance.</p><div><hr></div><h2><strong>Final Thought</strong></h2><p>Synthetic brand lift measurement has the potential to unlock a new era of upper funnel accountability. 
By fusing creative diagnostics, media quality data, and programmatic infrastructure, we can create a system that is not only more efficient, but better aligned with how modern marketing actually works.</p><p>It&#8217;s time to build it.</p>]]></content:encoded></item><item><title><![CDATA[Consumer Panels are a s**tshow, is there hope?]]></title><description><![CDATA[Time to Evolve or Die Trying]]></description><link>https://www.greymatterunloaded.com/p/consumer-panels-are-a-stshow-is-there</link><guid isPermaLink="false">https://www.greymatterunloaded.com/p/consumer-panels-are-a-stshow-is-there</guid><dc:creator><![CDATA[Marc Ryan]]></dc:creator><pubDate>Thu, 01 May 2025 12:32:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!i7i3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!i7i3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!i7i3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png 424w, https://substackcdn.com/image/fetch/$s_!i7i3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png 848w, 
https://substackcdn.com/image/fetch/$s_!i7i3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png 1272w, https://substackcdn.com/image/fetch/$s_!i7i3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!i7i3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png" width="1456" height="1097" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1097,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6503079,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.greymatterunloaded.com/i/162561405?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!i7i3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png 424w, 
https://substackcdn.com/image/fetch/$s_!i7i3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png 848w, https://substackcdn.com/image/fetch/$s_!i7i3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png 1272w, https://substackcdn.com/image/fetch/$s_!i7i3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa25e4e21-0c69-4f90-b69b-c1c171e53a03_2464x1856.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>They say that you never really know you&#8217;re in a recession until you can look backwards and see what happened. Likewise, I think we&#8217;ll never really know when the consumer panel industry died until we can look back and see what happened. Don&#8217;t get me wrong, I think we have passed the starting line, but the decline has the potential to be long lived. We won&#8217;t be able to say anything with authority until 5-8 years down the line. In the meantime, I&#8217;m curious to see who has the ability to evolve and who&#8217;ll end up on the trash heap of history.</p><p>Before jumping in it's only fair to be clear on what we&#8217;re talking about here: the survey access panel industry. Not every panel. Not the special-purpose, high-commitment panels like Nielsen&#8217;s people meters or Circana&#8217;s receipt scanning armies. Those panels have their own drama to deal with, sure. This is about the access panel world, where people get pinged for online surveys, earn a buck or some points, and the industry turns that into "insights." </p><p>The survey access panel industry is massive. Global numbers are tricky to pin down because most players are private or folded inside holding companies, but conservative estimates peg the market at several billion dollars a year. In the U.S. alone, major players like Dynata, Toluna, IPSOS, Kantar, Cint, PureSpectrum, and dozens of niche networks churn out millions of completes a day. 
And that&#8217;s not even counting the exchanges, aggregators, and ghost suppliers that make up the long tail.</p><p>By all measures, that's a big industry, and in case you haven&#8217;t noticed, it's in trouble.</p><h3><strong>A Dumpster Fire of Issues</strong></h3><p>If the access panel industry were a house, the foundation would be cracking, the roof leaking, and someone just lit a match in the living room.</p><p>Let&#8217;s walk through the problem list:</p><ul><li><p><strong>Opaque Supply Chains:</strong> Who's taking your survey? Good luck finding out. Panels source from panels who source from exchanges who source from... no one knows. And at the end of that chain? You'll often find panel companies that exist solely to supply exchanges, with zero direct relationship with the brands buying the data. That lack of connection means they face no accountability for the quality or authenticity of their respondents; they're just slinging completes into the void, hoping no one asks too many questions.</p></li><li><p><strong>Self-Selection Bias:</strong> Panels are made up of people who choose to be in them. That means your "average consumer" is often anything but. And even panels that boast about their "high quality" respondents are usually finding those people through affiliate partnerships, referral programs, or recruitment schemes that tend to attract a very specific type of participant: people already predisposed to engaging with surveys. A 2009 study in Public Opinion Quarterly found that long-term panel participants scored significantly higher on measures of conformity and conscientiousness than the general population. So, while you might think you know your panel, you can be pretty sure it doesn&#8217;t reflect the general population in a meaningful way.</p></li><li><p><strong>Price Pressure:</strong> Buyers want cheap sample. Sellers want margin. The result? Corner cutting, lazy targeting, and dropping quality standards.
And here's where it gets performative: everyone talks about wanting quality, but data is invisible; it doesn't come in a pretty box you can inspect. It's easy to say you value accuracy, but when the price tag drops and the data still 'looks fine' in the dashboard, the temptation to trade down is strong. Quality is a materials cost, not a visible feature of the final product. It's the same logic behind choosing store-brand Fruity O's over Froot Loops. You know the ingredients aren't identical, but hey, it's cereal, it tastes close enough, and it costs half as much. Out of sight, out of mind.</p></li><li><p><strong>Boring Surveys:</strong> Most surveys are still designed like it's 1999. Long. Ugly. Clunky. No wonder dropout rates are sky-high. Back in the day, surveys were run through RDD (random digit dialing) on the phone. And because there was a real human voice on the line, respondents felt guilty hanging up, which meant you could get away with 30- or 40-minute monstrosities. But when those same surveys got ported online, nobody stopped to ask, "Wait, will people actually <em>want</em> to do this?" Spoiler: they don&#8217;t. Especially Gen Z, who'd rather scroll TikTok than wade through a soul-crushing battery of scale questions.</p><p>And thanks to research productization, the same cookie-cutter surveys are sent to the same people again and again, just swapping out the brand or product. That trains professional respondents to game the system. Say I know five yogurt brands? Cool, now I'm on the hook for 10 follow-up questions on each one. Next time? I might just "remember" one brand so I can get to my reward faster. It's Pavlov meets capitalism, and it's killing data quality.</p></li><li><p><strong>Demographic Skew:</strong> Panels tend to overrepresent older, female respondents, the folks most likely to have time, patience, and an opinion. Likewise, they underrepresent men, younger populations, and massive swaths of higher-income earners.
But it goes deeper than age or gender. As previously mentioned, panels attract "joiners": people with personality traits inclined toward participation, routine, and structured activity. So even if the demographics look balanced, the psychology of your sample is still skewed. You're not just getting average people; you're getting the people who love being asked questions.</p></li><li><p><strong>Fraud &amp; The Incentive Trap:</strong> And don&#8217;t even get me started on incentives. Micro-payments and points-for-prizes systems might seem efficient, but they create short-term thinking. Respondents chase quantity, not quality. Fraud becomes lucrative, and worse, it's growing. According to the Insights Association, up to 30% of responses in some survey panels are estimated to be fraudulent. A 2022 report by CloudResearch found that 95% of surveys conducted through online panels contained at least some level of fraudulent activity, ranging from bot participation to identity misrepresentation. Fraudulent respondents are increasingly sophisticated, using tools like local proxies and AI-based click farms to bypass basic quality checks.</p><p>But fraud isn&#8217;t just about bots or AI. Sometimes it's about survival economics. In Venezuela, where the average monthly income is around $230, earning $50 a week slogging through boring U.S.-based surveys becomes an attractive side hustle. Respondents in lower-income regions may misrepresent location, identity, or behavior just to qualify. And can you really blame them? The system incentivizes it. This isn&#8217;t just a minor annoyance. It fundamentally breaks the trust model of survey research. And when you pair that with boring, repetitive surveys and an incentive model that rewards speed over accuracy, you're just begging people to cheat.</p></li></ul><h3><strong>The Structural Problem</strong></h3><p>Here&#8217;s the kicker: non-panel sampling methods have been shown to outperform access panels.
Studies comparing panel recruits to methods like random intercepts, social media recruitment, or even survey walls often show that non-panel respondents look more like the general population and give higher-quality data. The data is fresher. The answers are more thoughtful. The bias is smaller.</p><p>If access panels are so riddled with problems, and if validated alternatives like random intercepts and social recruitment routinely deliver better, more balanced results, why hasn&#8217;t the market evolved? That&#8217;s the frustrating paradox. We know what&#8217;s broken. We know how to fix it. And yet the system stays stuck, mostly because inertia is easier than change. The dysfunction is visible, but the data still lands on a dashboard, and as long as the charts keep moving in the right direction, too many buyers are happy to look the other way.</p><p>Before we get into client behavior, there's a big, structural reason the market hasn't shifted: scale. The more innovative approaches (random intercepts, survey walls, social recruitment) often exist as part of specific research methodologies or tools, not as stand-alone sample sources. That makes them harder to scale up or swap into a traditional research study. Meanwhile, the big research firms (think IPSOS, Kantar, GfK) have sunk massive costs into building and maintaining access panels. These companies <em>need</em> to monetize those panels to maintain margins. So, what do they do? They package them inside broader research solutions, push them to brands, and defend their quality and applicability even when better options exist.</p><h3><strong>The Client Conundrum</strong></h3><p>This ties back to something I wrote in a previous piece, "All Market Research is About Choices," where I laid out the three fundamental types of research: to understand, to adjudicate, and to track. Of those, tracking is where the real money is.
It's the annuity product, a predictable, recurring source of revenue that's been funding research departments and agency P&amp;Ls for decades. But here&#8217;s the rub: the clients paying those annuities hate it when the data changes in those trackers. Especially when those changes are driven by the sample source.</p><p>Many clients are stuck. They&#8217;re using trackers with fixed source blends set in stone years ago. They&#8217;re terrified of breaking a trend line, even if that line is already warped beyond reason. Everyone in this business has a story of a client who <em>knows</em> their sample is garbage but refuses to change it.</p><p>It&#8217;s not just inertia. It&#8217;s fear. "What if the data moves and I have to explain it to the CMO?" Especially when the CMO's variable compensation is tied directly to those tracker results. A sudden dip, even if it's caused by a cleaner, more representative sample, can set off alarms across the org chart.</p><p>There&#8217;s also a skill and priority gap at play. Many buyers of insights work aren&#8217;t digging into sampling methods or evaluating representativeness. They&#8217;re often project administrators, managing requests from internal teams and just trying to get answers quickly and cheaply. What they want is a proposal that sounds credible, not a debate over methodological nuance. Vendors know this. And the very vendors clients look to for research leadership, many of whom are sitting on massive sunk costs in panel infrastructure, have every incentive to keep the broken model alive and kicking.</p><h3><strong>From Craft to Copy-Paste</strong></h3><p>Sample design isn&#8217;t the only thing clients turn to vendors for; survey design is also on the menu. But that, too, has fallen off. Survey design used to be a craft. You learned from mentors. You argued over question wording; articles surfaced in academic journals debating the validity of scales with and without a midpoint.
You thought about respondent cognitive load and preventing instrument bias. Now? It&#8217;s a copy-paste from the last tracker. Slap on a five-point scale and call it a day.</p><p>Part of the reason for this decay in quality is structural. The massive global expansion and consolidation of MR firms through the &#8217;80s and &#8217;90s turned the vendor account management function into something far more administrative than strategic. Today&#8217;s research professionals at a vendor are more likely to be juggling fielding schedules and managing quotas across 80 markets than sweating over the cognitive load of a 30-question grid. There&#8217;s barely time to optimize survey design when your job is to keep the engine running, on time, and on budget.</p><p>How does this relate to access panels? Bad design alienates good respondents and encourages fast-click fraud. It&#8217;s a vicious cycle: bad surveys attract bad respondents, and bad respondents teach us the wrong things.</p><h3><strong>Exchanges: Blessing or Curse?</strong></h3><p>I know, I can hear some of you asking... what about the exchanges? Exchanges ushered in a wave of technical innovation that the market desperately needed. They modernized an industry that was still limping along on voicemails and email invites, replacing it with APIs, real-time routing, and programmatic infrastructure. For the first time, survey supply could be automated, standardized, and scaled, giving rise to the now-ubiquitous concept of "sample liquidity." This shift enabled dynamic targeting, real-time feasibility checks, and instant project setup across geographies. In short, they brought the tech stack up to speed with the rest of the digital world.</p><p>But with that modernization came an unintended consequence: a race to the bottom. Exchanges function like marketplaces, and in marketplaces, price often wins. The default trading logic rewards lower-cost suppliers with a bigger share of completes.
If you're a supplier willing to undercut your competitors by a few cents per complete, congratulations, you just got more business. Quality? That&#8217;s someone else&#8217;s problem downstream. The result is a perverse incentive system where vendors who cut corners or flood their panels with low-quality traffic are rewarded, not penalized. J.D. Deitch has made a similar point in his ebook, the <a href="https://getitfrom.jddeitch.com/">Enshitification of Programmatic Sampling</a>. Programmatic sample. RTB for humans. What could go wrong? Well, it turns out, quite a bit.</p><h3><strong>This Thing Is Broken</strong></h3><p>The access panel industry is cracked at every level. From sampling to design, from incentives to buyer behavior, it's a system that limps along only because everyone's afraid to stop.</p><p>One might think that the <a href="https://www.justice.gov/usao-nh/pr/eight-defendants-indicted-international-conspiracy-bill-10-million-fraudulent-market">DOJ indictment</a> is a wake-up call and turning point for the industry, but I doubt it'll be more than a blip. Fraud has been in the headlines of MR conferences for years, and the only thing we'll see in the wake of the DOJ trial will be panel and market research companies flocking to LinkedIn to espouse how their approach to panel is better. We're bound to see distancing, not differentiating; after all, it's not like the sunk costs or corporate incentives have changed.</p><p>Fraud, fatigue, and fake data aren&#8217;t temporary glitches. They&#8217;re symptoms of a model past its sell-by date. The panel industry as we know it isn&#8217;t evolving. It&#8217;s unraveling.</p><h3><strong>So, what&#8217;s next?</strong></h3><p>The unraveling of the traditional access panel model isn&#8217;t just a slow fade; it&#8217;s happening against the backdrop of an existential threat: the rise of synthetic research firms and alternative collection ecosystems.
These aren't side projects or experimental labs; they're full-blown, investor-backed companies built to sidestep the entire panel paradigm.</p><p>Companies like Evidenza, Quilt.ai, Delphi, RealEyes, and others offer research solutions that don&#8217;t rely on fielding surveys to real humans in the traditional sense. They use synthetic personas, behavioral modeling, and AI-generated respondents to simulate market reactions at scale. Meanwhile, players like n-Infinite and Fairgen are reinventing how data is collected by using enhanced AI systems to boost samples or impute missing data for sparse datasets. Leveraging these systems means no longer chasing the last 25 respondents for a survey, or in some cases even worrying about a survey at all.</p><p>These companies don&#8217;t just offer a different tool; they offer a fundamentally different cost and scale model. And because they&#8217;re not dragging around the same panel-based overhead, they&#8217;re far nimbler. They don&#8217;t have to protect an outdated asset; they&#8217;re building from scratch. That&#8217;s a major advantage in a market ready for change.</p><h3><strong>The Middle Way</strong></h3><p>On one hand, we have the establishment players: the panel and big MR complex. On the other, we have the synthetic players: smaller but nimbler in execution. Who's going to win the day?</p><p>I'd argue that neither should win outright, as both have merits. That's the case with most methodological debates. The ideal path is often right down the middle, a hybrid approach that takes the best of what both solutions offer. We've done this before in research with things like hybrid audience measurement and data fusion, and there's no reason we can't do it again.</p><p>Synthetic solutions scale phenomenally. They are generative in nature; they easily construct answers to questions they receive, but those answers need to be grounded in a robust training dataset.
Specifically, a training dataset based on human data; the higher the quality, the better. Panels don't scale like synthetic solutions do, but as we've seen with many specialty panels, with money, time, and effort you can create high-quality human feedback panels that could generate the data needed for synthetic solutions to succeed.</p><p>Today, panels are trying to compete on scale; it's the &#8220;I can fill your n=500&#8221; game. I believe this way of competing is headed for decline. Sure, the traditional MR survey will always have a role, but for the big solutions, access panels will never be able to keep up with the scale and speed of synthetic solutions for insights generation. Chasing sample size is a lost cause. On the other hand, synthetic is generative: it needs to be validated, ideally by humans, and to generate insights it needs training data, ideally high-quality human training data.</p><p>There's a future version of the panel industry where panels are fixed-cost assets used not on a project-by-project basis but as an R&amp;D cost to validate synthetic outcomes and create the training data to feed the models. In that version of the future, panel size still matters, given that bigger panels create more training data and better validations. But as an R&amp;D asset, the incentive for creating a panel changes, and suddenly two things come to the forefront as critically important: quality and panelist experience.</p><p>Companies that treat a panel as an R&amp;D cost are going to want to make sure that investment is paying off with great data. This is the opposite of the current access panel transactional model, which treats panels as COGS on a specific client study; in that model, it&#8217;s all about monetizing panelists as fast as possible.
These panels will be monetized across the full line of products, and they will be more representative and more valuable, and participants will feel they're receiving better value for their time.</p><p>An R&amp;D panel is not there to deliver the n=500 for a client study; it's there to create the data that makes the models work and to validate those models. Panelists, for their part, reap the rewards of researchers designing more engaging and tailored data collection experiences, and they are no longer presented with a sea of survey opportunities and routers to navigate. They'll receive surveys they qualify for and are interested in, and they'll feel fairly compensated for the experience. Without the pressure of being front and center in the client deliverable, these panels will become true assets, treated and managed well and used to drive the future of insights.</p><p>In this post-access-panel world, owning a panel will still be a status symbol and a barrier to entry. The real question is: who'll get there first? Will the synthetic suppliers secure funding or partnerships to invest in global R&amp;D panels, or will the big MR firms use their panels to fuel the creation of their own synthetic offers? The jury is out, but, while it might not yet be obvious, the race is on.</p>]]></content:encoded></item></channel></rss>