Framing technical value in language that resonates with biologists and engineers
Reading time: 9 minutes
The preceding article in this series described the buyer: a decision maker who often trained in a discipline other than developmental and stem cell biology, supported by an internal evaluation chain whose members bring distinct concerns. This article addresses what to say to those buyers — and, more importantly, what not to say.
The challenge is not a shortage of things to claim. It is a surplus. The technology works. The data is real. The performance improvement over the prior method is measurable. The temptation is to say all of this, loudly, using the vocabulary of the academic paper in which the technology was first reported. The problem is that this vocabulary does not land with the people making the purchase decision, and in many cases it actively triggers scepticism. Technical buyers in this field have been exposed to more overclaimed, under-evidenced product descriptions than they can count. Their default posture is mistrust, and their attention is earned not by enthusiasm but by specificity, honesty, and evidence they can evaluate in their own terms.
The Pillar 3 anchor made the argument that positioning is a product decision rather than a marketing output. Framing technical value is where that argument becomes visible. The language you use to describe what your tool does is itself a positioning statement. If it reads like marketing, it signals that the company does not understand its buyer. If it reads like an academic abstract, it signals that the company has not translated its technology into a product. The narrow band between those two failure modes is where credible technical communication sits.
The two languages the buyer does not speak
Most ancillary TechBio companies default to one of two communication registers. Both fail.
The first is academic register: the language of the journal paper. Mechanism-of-action detail, statistical significance at a defined p-value threshold, comparison with a baseline under controlled conditions, explicit limitations, and hedged conclusions. This register is excellent for peer review. It is poor for procurement decisions, because it answers questions the buyer is not asking. The buyer does not need to know the molecular pathway by which the tool operates. The buyer needs to know what the tool does to their workflow, how reliably, at what cost, and with what evidence that the improvement is real under their conditions. Academic register buries this information inside a presentation designed for a different audience.
The second is marketing register: the language of the product brochure. Performance claims stated as absolute advantages ("transforms cell yield", "eliminates batch variability", "enables seamless integration"), illustrated with a single best-case data point and supported by testimonials rather than controlled data. This register works in consumer markets where the buyer's evaluation is impressionistic. It does not work in a market where the buyer is a trained scientist or engineer who reads the claim, asks "under what conditions, with what cell type, at what passage, measured against what standard?", and disqualifies the vendor when those questions are not answered.
The narrow band between these two registers is what the rest of this article describes.
Say what the tool does, measured against the user's standard
The first principle of credible technical communication in this field is specificity about what was measured, under what conditions, against what comparator, and with what result.
This sounds obvious but is routinely violated. A claim that a product "improves cell viability" is meaningless without specifying: viability of which cells (iPSC-derived cardiomyocytes? primary MSCs? thawed cryopreserved products?), measured by which assay (trypan blue exclusion? flow cytometric annexin V? functional readout?), after which step in the workflow (post-thaw? post-expansion? post-sort?), compared to which baseline (manual process? competitor tool? no-treatment control?), and across how many independent replicates.
The reason this level of specificity matters is not pedantic rigour. It is that the technical buyer is trying to predict whether the product will perform in their specific context. Every unspecified condition is a variable the buyer cannot evaluate. Every unstated baseline is a comparison the buyer cannot make. The more conditions are left implicit, the less confidence the buyer has that the claimed performance is relevant to their workflow.
The articles in the Pillar 2 series documented why this matters in practice. The reproducibility article showed that batch-to-batch variability and operator-dependent outcomes are the leading operational failure modes. The characterisation gaps article showed that the measurements currently used in the field do not reliably predict functional performance downstream. A product claim that does not specify the measurement conditions is making a prediction it cannot support.
Be honest about what the evidence does not show
The stem cell field has a well-documented problem with the gap between what is measured and what matters. A 2023 Nature Methods editorial observed that the standards for stem cell characterisation remain insufficient for the clinical and manufacturing applications the field is pursuing [1]. A 2023 review of potency assays for cell therapy products noted that demonstrating potency — the regulatory requirement that a product's biological effect can be measured and predicted — is particularly difficult when the mechanism of action is not fully understood, which is the case for the majority of cell-based therapies currently in development [2].
For the ancillary TechBio company, these field-level uncertainties have a direct communication consequence. If the field itself cannot yet fully define what constitutes functional quality in a stem cell product, then a tool vendor who claims to "ensure quality" or "guarantee potency" is making a claim the field cannot validate. The honest version of the claim is more limited and more credible: the tool measures a specific attribute, under defined conditions, with quantified precision, and there is evidence that this attribute correlates with a specific downstream outcome in a defined system. What the tool does not measure, and what the correlation does not cover, is stated explicitly.
This kind of honesty is uncomfortable for companies trained in marketing conventions that reward confidence over precision. But it is the language that technical buyers recognise as trustworthy. A product description that says "our assay measures X with a coefficient of variation of Y% across Z replicates, and published data show that X correlates with functional outcome W in cell type V under conditions U [reference]" is both less exciting and more convincing than "our assay ensures cell quality". The first statement can be evaluated. The second cannot.
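To make the evaluable claim concrete: the coefficient of variation in the example above is a simple, checkable statistic. The sketch below (illustrative only; the replicate values are invented for the example) shows how a vendor would derive the figure they report:

```python
import statistics

# Hypothetical viability measurements (%) across independent replicates
replicates = [87.2, 85.9, 88.4, 86.7, 87.8, 86.1]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)    # sample standard deviation
cv_percent = 100 * sd / mean         # coefficient of variation, as a percentage

print(f"n={len(replicates)}, mean={mean:.1f}%, CV={cv_percent:.2f}%")
```

A buyer handed the underlying replicate data can reproduce this number themselves, which is precisely what makes the quantified claim trustworthy where the unquantified one is not.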
Provide evidence in the buyer's native format
The preceding article on the cross-disciplinary buyer argued that different decision makers read evidence in different forms. The implication for technical communication is that the same underlying data should be presented in multiple formats, matched to the audience evaluating it.
For the biological evaluator: performance data against the specific cell types, passage ranges, and culture conditions relevant to their workflow. This is the closest to the academic register, but reorganised around the evaluator's use case rather than the scientist's publication narrative.
For the engineering or operations evaluator: quantified specifications — throughput, footprint, consumable cost per run, integration requirements (data formats, physical connections, workflow dependencies), maintenance schedule, and failure modes with documented frequencies.
For the regulatory or quality evaluator: material traceability documentation, lot-release testing data, qualification and validation summaries, and a clear statement of what regulatory standards the product has been tested against and what gaps remain. The Pillar 1 article on GMP and Quality by Design and the Pillar 2 article on the regulatory cliff describe what this evaluator needs to see.
For the strategic decision maker: a one-page summary that states the workflow step the product addresses, the before-and-after performance in quantified terms, the competitive position (including against the status quo), the commercial terms, and the evidence basis for the claims made. This is the document that travels through the internal decision path described in the buyer article.
The effort of producing these materials feels disproportionate for an early-stage company. It is not. Each format is a translation of the same evidence, not new evidence. And the alternative — a single set of materials that does not match what any evaluator needs — produces longer sales cycles, more stalled evaluations, and more pilots that fail to convert.
Avoid the credibility traps
Three specific patterns destroy credibility with technical buyers in this field, and all three are common.
Claiming uniqueness without defining the comparison set. The claim "our technology is the only one that does X" is almost always false or trivially true. It is false if competitors or adjacent technologies do something closely similar. It is trivially true if "X" has been defined so narrowly that no one else would bother to claim it. Either way, the claim invites the buyer to identify the counter-example, and once they do, credibility is lost on a wider front than the original claim covered. The competitive landscape article in this series described how to read the competitive field honestly; the communication should reflect that reading.
Using undefined quality language. Terms like "high quality", "premium grade", "best-in-class", and "gold standard" convey nothing unless the quality attribute is named and the standard is referenced. In a field where quality attributes are themselves contested — where a 2021 review documented multiple competing strategies to address MSC heterogeneity and noted that no shared definition of reduced heterogeneity has emerged [3] — undefined quality language is not just imprecise; it claims authority the field does not yet provide.
Confusing a measurement with an outcome. A tool that measures cell surface marker expression is not a tool that predicts therapeutic efficacy, even if marker expression correlates with efficacy in published studies. A tool that monitors bioreactor parameters is not a tool that guarantees product consistency, even if parameter control improves consistency in controlled trials. The distinction between measuring something and predicting something, and between predicting something and guaranteeing something, is the distinction that careful buyers enforce. Claims that blur these boundaries are the claims that technical buyers reject.
Writing that earns trust
The practical test for any piece of product communication is whether a scientist or engineer reading it would forward it to a colleague with the note "this is worth looking at" rather than "this is marketing". The difference lies in a small set of writing practices that apply whether the format is a website, a product brief, a conference poster, or a pilot proposal.
State the claim, then state the conditions under which it holds. State what was measured, then state what was not. Give the comparator — whether that is the manual baseline, a competitor tool, or a no-treatment control — and show the data in a format the buyer can interrogate. Name the cell types, passage ranges, and culture systems used. Identify the limitations: which workflows were not tested, which cell types are not yet validated, which performance characteristics are inferred from published work rather than demonstrated with this product. Close with a statement of what the evidence supports and what it does not, and let the buyer draw their own conclusion about fit.
What this leads to next
The final article in this series, on prior art in laboratory and domesticated species, extends the framing question to applications where the developmental biology is itself the enabling technology — species preservation, cultured food, human ageing. In those contexts, the evidence base is thinner, the workflows are less mature, and the challenge of framing value honestly is correspondingly harder. What transfers from established species systems and what must be discovered anew is a question that affects both the product and the language used to describe it.
For readers working through the series in sequence, the positioning framework is now nearly complete: workflow identification, competitive landscape, demand validation, buyer definition, and this article on framing the value. Together, these articles address the constructive side of the question the Pillar 2 series raised: not just why TechBio products fail, but how to position one that does not.
References
[1] Setting standards for stem cells. Nat Methods. 2023;20:1267. DOI: 10.1038/s41592-023-02016-5
[2] Torrents S, Grau-Vorster M, Vives J. Potency Assays: The 'Bugaboo' of Stem Cell Therapy. Adv Exp Med Biol. 2023;1420:29-38. DOI: 10.1007/978-3-031-30040-0_3
[3] Dunn CM, Kameishi S, Grainger DW, Okano T. Strategies to address mesenchymal stem/stromal cell heterogeneity in immunomodulatory profiles to improve cell-based therapies. Acta Biomater. 2021;133:114-125. DOI: 10.1016/j.actbio.2021.03.069
About StemCells.Help
StemCells.Help is an advisory consultancy that supports the innovation and real-world impact of life science applications built on developmental and stem cell biology. Founded by Dr Paul De Sousa, it draws on over four decades of experience spanning early embryo development, animal cloning, pluripotent stem cell manufacturing, and technology commercialisation. If you build tools for these domains or work in an emerging application where the biology is the enabling technology, StemCells.Help can provide experienced scientific counsel to ground your decisions. To discuss your needs, talk to Paul.
ORCID: 0000-0003-0745-2504
Web: stemcells.help