Demand validation: testing whether the scientists you think need your tool actually do

Reading time: 9 minutes

The preceding articles in this Pillar 3 series established two things. The first was that positioning starts with identifying which specific workflow steps your product changes, for which users, under which conditions. The second was that the competitive landscape includes not just direct competitors but adjacent technologies and the status quo — the habituated set of manual steps, workarounds, and compromises that currently produces an acceptable outcome. This article takes up the question that must be answered once that groundwork is done: will the people running those workflows actually pay to change them?

This is demand validation. It is the question most frequently answered by assumption rather than evidence in TechBio companies serving stem cell science. The reason is structural. The founders are often deep technical experts who experienced the problem themselves in an academic setting. Their conviction that the problem is real is usually correct. Their assumption that the problem is prioritised, funded, and purchase-ready in the workflows they hope to serve is often wrong.

Across the wider startup landscape, the most commonly reported cause of failure is the absence of sufficient market demand. A large-scale analysis of post-mortem reports from over four hundred VC-backed companies that shut down since 2023 found that poor product-market fit was the underlying cause in roughly 43% of failures, ahead of bad timing, unsustainable unit economics, and team breakdown [1]. Capital exhaustion was the proximate cause in 70% of cases, but the analysis found this to be the terminal symptom rather than the root condition [1]. In ancillary TechBio for stem cell science, where development cycles are long, validation is slow, and the buyer population is small, these numbers are not abstract. A positioning hypothesis that is wrong about demand burns years, not months.

Why TechBio demand validation is harder than in software

The difficulties specific to demand validation in this field deserve explicit statement, because they explain why otherwise rigorous teams skip the step.

Biology takes time. A software product can be tested against user behaviour in days or weeks. A cell culture tool or a bioprocess reagent cannot. Demonstrating that a product works in a specific workflow requires experiments that take weeks to months, and the result depends on variables — cell line, passage number, medium lot, operator skill — that the company does not control. Testing demand therefore takes longer and costs more than in technology sectors where feedback loops are fast.

The user population is small. The number of laboratories, manufacturing facilities, and CDMOs running workflows that your product serves may be in the hundreds globally, sometimes fewer. This means that demand validation cannot rely on large-sample surveys or A/B testing. Each conversation, each pilot, each piece of evidence carries disproportionate weight, and the margin for misinterpretation is wide.

The expert-founder problem. Many TechBio founders are trained scientists who lived with the problem their tool addresses. They developed the tool because they needed it in their own work. This creates a specific blind spot: the founder's experience of the problem is taken as representative of the market's experience of the problem. But the founder's laboratory, protocols, scale, quality standards, and incentive structure may be different from those of the buyer population. A problem that consumes hours of effort in an academic research context may be a tolerated five-minute inconvenience in a manufacturing context where different steps are the binding constraint. The problem is real; its priority relative to other problems in the target workflow may not be what the founder assumes.

Politeness is a false signal. Scientists are, on the whole, polite and intellectually curious. When shown a new tool, they ask thoughtful questions, express interest, and often suggest follow-up conversations. These are social signals, not purchasing signals. The gap between "that is interesting" at a conference and "we will pay this amount, on this timeline, against these success criteria" is enormous. Teams that use conference interest as demand evidence learn this late and expensively.

What demand validation actually tests

A demand validation exercise for an ancillary technology in stem cell science tests four things, and all four need affirmative answers before the positioning hypothesis is confirmed.

Problem recognition. When you describe the problem your product addresses, using the language of the user's own workflow rather than the language of your technology, does the user confirm that this problem is one they experience, in the workflow step you have identified? Problem recognition is the minimum threshold. It tests whether the problem exists in the user's context, not just in yours.

Problem priority. Even if the problem is recognised, it may not be among the problems the user would choose to solve next. Workflows contain many friction points, and teams with limited budgets and limited willingness to change established protocols allocate attention to the most urgent or most costly ones. A product that addresses the fourth-priority problem competes against the first three for attention, budget, and adoption risk. Priority is tested by asking the user to rank their workflow problems, not by asking whether your specific problem matters.

Willingness to change. A user who recognises the problem and ranks it highly may still be unwilling to change the workflow step your product addresses. Change carries risk: revalidation of upstream and downstream steps, retraining of operators, documentation updates, and the possibility that the new tool introduces problems of its own. A 2024 survey of FACT-accredited cell processing facilities in the United States found that while most facilities were broadly interested in adopting new manufacturing models, the most significant constraints were financial and human resources rather than technical performance [2]. The barrier was not that better tools did not exist; it was that adopting them required investment that competed with other priorities. Willingness to change is tested by describing what adoption would involve — not just what the product does, but what the user would have to do differently — and asking whether that is realistic given their current constraints.

Willingness to pay. The final test is whether the user will commit resources to the change. This is tested not by asking "would you pay for this?" (the answer is almost always "yes" because it costs nothing to say so) but by proposing a specific engagement: a paid pilot with defined success criteria, a letter of intent, a purchase order for a defined quantity at a defined price. The distinction between stated preference and revealed preference is the most important distinction in demand validation. Only actions that cost something — time, money, reputation — constitute evidence of demand.

Three methods that generate evidence before full development

The Pillar 2 article on product-market fit introduced three approaches to early testing. This article expands each into a method that TechBio teams can run before committing to the full development path.

Problem-first interviews. The interview is structured around the user's workflow, not around the product. The interviewer asks the user to describe the last time the relevant workflow step caused a problem: what happened, what the consequence was, what the user did about it, and what it cost in time, material, or quality. The product is not mentioned until the user has described their experience in their own terms. Only then is the concept introduced, and the user's reaction is tested against the four criteria above.

The discipline of the problem-first interview is that it separates problem existence from product fit. A user may describe a real and costly problem that your product does not address, or that your product addresses in a way the user would not adopt. Both findings are valuable. The first suggests that a different product might serve this user better. The second suggests that the product's form, rather than its function, is the obstacle.

A minimum sample for TechBio demand validation interviews is typically fifteen to twenty conversations with people who actually run the workflow, spread across at least three distinct organisations. Fewer than fifteen produces anecdotes; more than twenty yields diminishing returns unless the workflow varies substantially across sites.

Workflow shadowing with structured debrief. The workflow identification article described process shadowing as a method for mapping the real workflow. For demand validation, the same method is extended with a structured debrief in which the observer describes what they saw, asks the operator whether the friction points observed are representative, and proposes a specific change at the step the product addresses. The operator's response to the proposed change — enthusiasm, concern, indifference, specific objections — is the evidence. Shadowing is more expensive than interviews but produces higher-quality evidence because it is grounded in what the user does rather than what the user says they do.

Paid pilot evaluations with agreed success criteria. The strongest evidence of demand is a pilot in which the user pays for the evaluation. Payment need not be full commercial price; it can be a subsidised pilot fee, cost-of-materials contribution, or a commitment of staff time that has an internal cost. The point is that the user has committed something of value, which distinguishes an evaluation from a courtesy. The success criteria are agreed before the pilot begins: what will be measured, against what baseline, over what period, by whom. If the criteria are met, the path to purchase is defined. If they are not met, the positioning hypothesis is refined or abandoned.

The reproducibility article in the Pillar 2 series described how pharmaceutical partners evaluate ancillary technologies: controlled comparisons against defined acceptance criteria, with documentation sufficient for regulatory review. Running a version of that evaluation process internally, before the partner does, generates evidence that either confirms or contradicts the positioning hypothesis. It is cheaper to discover a misfit during a self-managed pilot than during a customer evaluation that determines the future of the commercial relationship.

What happens when demand validation contradicts the hypothesis

The most valuable outcome of demand validation is a clear negative: the users do not recognise the problem, or recognise it but do not prioritise it, or prioritise it but are unwilling to change the workflow, or are willing to change but not willing to pay. Each of these outcomes points to a specific adjustment.

If problem recognition fails, the workflow identification is wrong. The product addresses a step or a problem that the target users do not experience. Return to the workflow identification analysis and reassess.

If priority fails, the product may address a real problem in a market segment that is not yet ready to solve it. This is a timing question rather than a product question. The segment may be ready in two years, or a different segment may already be ready. The applications overview in the Pillar 1 series maps the maturity of different application domains and can help identify where readiness is higher.

If willingness to change fails, the product's integration cost is too high relative to its benefit. The product works, and the user wants what it provides, but the surrounding workflow cannot accommodate it without disruption that the user will not accept. This is a design problem. The solution is to redesign the product's interface with the rest of the workflow — its inputs, outputs, data formats, physical form factor, or operational procedure — to reduce the cost of change. The toolchain fragmentation article in the Pillar 2 series examines this class of problem.

If willingness to pay fails while the other three tests pass, the pricing or commercial model is wrong. The user wants the product, would adopt it, but cannot justify the expenditure against other claims on their budget. This is sometimes a pricing problem and sometimes a problem of who pays: the budget holder may not be the person who experiences the benefit. The next article in this series, on defining the buyer when the decision maker trained in a different discipline, addresses this directly.

The cost of skipping demand validation

The alternative to demand validation is assumption. A team assumes demand because the founders experienced the problem, because conferences showed interest, because a competitor raised funding for a similar product, or because the technology is objectively impressive. None of these constitute evidence that the specific users the company plans to sell to will change their workflows and pay for the product.

The cost of assumption in TechBio is magnified by the long development timelines that biology imposes. A software company that skips demand validation may waste six months of engineering time. A TechBio company that skips demand validation may spend two to four years developing a product, validating it in their own laboratory, building manufacturing capability, and preparing regulatory documentation — only to discover at the first serious sales engagement that the target user does not experience the problem as described, or prioritises a different problem, or cannot integrate the product into their workflow, or cannot justify the price against their alternatives.

The positioning framework this pillar describes is designed to make each of those discoveries before the development investment is made, not after. Demand validation is the step that converts the positioning hypothesis from an internal assumption into an externally tested proposition. The subsequent articles in this series — on defining the buyer, framing technical value, and learning from prior art in other species — build on that tested proposition rather than on the assumption it replaces.

References

[1] CB Insights. Why Startups Fail: Top 9 Reasons. 2026. Available from: https://www.cbinsights.com/research/report/startup-failure-reasons-top/

[2] Elsallab M, Bourgeois F, Maus MV. National Survey of FACT-Accredited Cell Processing Facilities: Assessing Preparedness for Local Manufacturing of Immune Effector Cells. Transplant Cell Ther. 2024;30(6):626.e1-626.e11. DOI: 10.1016/j.jtct.2024.03.016

About StemCells.Help

StemCells.Help is an advisory consultancy that supports the innovation and real-world impact of life science applications built on developmental and stem cell biology. Founded by Dr Paul De Sousa, it draws on over four decades of experience spanning early embryo development, animal cloning, pluripotent stem cell manufacturing, and technology commercialisation. If you build tools for these domains or work in an emerging application where the biology is the enabling technology, StemCells.Help can provide experienced scientific counsel to ground your decisions. To discuss your needs, talk to Paul.

ORCID: 0000-0003-0745-2504

Web: stemcells.help

Previous

Defining your buyer when the decision maker trained in a different discipline

Next

Mapping competitive landscape when your category barely exists yet