Reproducibility as a commercial problem, not just a scientific one
Reading time: 8 minutes
A cell line that performs reliably in one laboratory may behave differently in another, even when the protocol is identical. The same operator, following the same steps on consecutive days, can obtain results that diverge in ways that matter. This is not just a failure of technique. It is also a property of the material.
Living cells respond to variables that protocols cannot fully specify: the lot-to-lot composition of the culture medium, the precise timing of a reagent addition, ambient temperature during handling, the mechanical history of the passage before this one. In academic research, this variability is managed by running replicates and applying statistics. In a commercial product, it can be a deal-breaker.
This article examines reproducibility not as a scientific inconvenience but as a commercial bottleneck, one that can block partnerships, stall regulatory submissions, and cause customers to abandon products that perform well in demonstrations but not in practice. It is the first cluster article in the Pillar 2 series on why ancillary technologies for stem cell science fail.
The scale of the problem
The cost of irreproducible research across the life sciences is staggering. In 2015, Freedman, Cockburn and Simcoe estimated that over half of all preclinical research in the United States could not be reliably replicated, representing roughly $28 billion per year in wasted expenditure [1]. The principal sources of failure were, in order: faulty biological reagents and reference materials, flawed study design, errors in data analysis and reporting, and poor laboratory protocols [1]. Contaminated or misidentified cell lines featured prominently among the reagent failures, a problem that persists despite the availability of inexpensive authentication assays.
These figures apply to preclinical research broadly. Within stem cell science specifically, the challenge is amplified by the biological sensitivity of the material. In my view, lack of reproducibility is the first and most commercially consequential failure mode for ancillary TechBio: small changes in conditions, and variability between laboratories, operators, and even batches, undermine standardisation and prompt prospective customers to lose confidence in the technology and product on offer.
Where variability enters
The sources of inconsistency in stem cell workflows are well documented, even if they remain poorly controlled. A 2020 study by Volpato and Webber outlined categories of variability that affect induced pluripotent stem cell (iPSC) research: donor genetics, epigenetic memory, reprogramming method, passage number, culture conditions, and differentiation protocols [2]. Each of these operates independently, and they interact in ways that are difficult to predict.
Organoid systems provide a revealing case study. Brain organoids derived from iPSCs are increasingly used to model neurodevelopmental conditions, but concerns about their reproducibility have limited confidence in cross-laboratory comparisons. In 2024, Glass and colleagues addressed this directly by testing whether a harmonised cortical organoid protocol could produce consistent results across three independent research sites. Using a single iPSC line and a standardised miniaturised spinning bioreactor method, they found that cell type proportions and cortical wall-like structural organisation were reproducible across sites. Differences did emerge in organoid size and in the expression of genes related to metabolism and cellular stress, and these correlated with the quality of the stem cells before differentiation and with technical factors during seeding [3]. The implication is that even with a harmonised protocol, upstream cell quality and handling precision remain decisive variables.
Sandoval and colleagues, reviewing the broader landscape of brain organoid reproducibility in the same year, reinforced this finding. They noted that the quality of the pluripotent stem cell lines used to generate organoids and their cultivation conditions significantly affect downstream outcomes. Their recommendations called for stringent adherence to stem cell maintenance standards, including regular testing for pluripotency, pathogen contamination, and genetic integrity, alongside standardised quantitative methods for organoid analysis [4]. These are not new suggestions, but the fact that a major cross-consortium review found it necessary to restate them in 2024 indicates how far the field remains from routine practice.
For the ancillary technology company, this has a specific consequence. If your product is a culture substrate, an imaging system, a media formulation, or a quality control assay, its performance is entangled with all of these variables simultaneously. Validating the product means demonstrating that it works across a meaningful range of cell lines, operators, and conditions, not just the ones you used during development. For a broader introduction to these methods and their limitations, see the Pillar 1 articles on key methods and characterisation and quality control.
Many companies underestimate this requirement. The temptation is to validate against a single well-characterised cell line in your own facilities, declare success, and move to market. That approach produces data that looks convincing internally but fails the first external test. The pharmaceutical partner who evaluates your product will use their own cell lines, their own media, their own operators, and their own assessment criteria. If performance does not translate, the evaluation ends.
The cost of inconsistency
Batch-to-batch variability has financial consequences that compound through the product development cycle. A manufacturing process that yields inconsistent results requires more quality control testing per batch, more rejected material, longer production timelines, and more expensive documentation to satisfy regulators. Each of these adds cost, and in cell therapy manufacturing, where margins are already thin, the cumulative effect can make a product economically unviable.
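To see how this compounds, consider a toy cost model. Every figure below is an illustrative assumption, not industry data, but the arithmetic holds generally: if each batch passes quality control with probability p, the expected spend per accepted batch scales with 1/p, before counting the extra testing and documentation that failures trigger.

```python
# Toy model of how batch failure rates compound manufacturing cost.
# All numbers below are illustrative assumptions, not industry figures.

def cost_per_accepted_batch(batch_cost, qc_cost, pass_rate):
    """Expected total spend to obtain one batch that passes QC.

    Every attempt incurs production and QC cost; on average
    1 / pass_rate attempts are needed per accepted batch.
    """
    attempts_needed = 1 / pass_rate
    return (batch_cost + qc_cost) * attempts_needed

# Compare a process passing 95% of batches with one passing 70%.
reliable = cost_per_accepted_batch(batch_cost=10_000, qc_cost=1_000, pass_rate=0.95)
inconsistent = cost_per_accepted_batch(batch_cost=10_000, qc_cost=1_000, pass_rate=0.70)

print(f"95% pass rate: ~{reliable:,.0f} per accepted batch")
print(f"70% pass rate: ~{inconsistent:,.0f} per accepted batch")
print(f"Cost penalty of inconsistency: {inconsistent / reliable - 1:.0%}")
```

Dropping the pass rate from 95% to 70% raises the cost per accepted batch by roughly a third in this sketch, and the real penalty is larger once delayed timelines and regulatory documentation for each deviation are included.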
The International Society for Stem Cell Research (ISSCR) recognised this urgency when it published new standards for stem cell characterisation in 2023 [5]. These guidelines establish minimum requirements for cell line authentication, pluripotency verification, and genomic stability testing. They also introduced reporting checklists intended to make experimental conditions explicit enough that results can be meaningfully compared between laboratories. Nature Methods endorsed these standards in an editorial the same year, noting that the journal now requires reproducibility to be demonstrated across three or more iPSC lines for publication [6].
These are welcome steps, but they address the problem from the academic side. For the commercial developer, the bar is higher. Regulatory agencies including the European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA) expect manufacturing processes to be validated under Good Manufacturing Practice (GMP) conditions, which means demonstrating consistency not across three cell lines but across every production batch, with documentation that traces every input and every deviation [7]. For more on what GMP compliance requires in practice, see the companion article on GMP and Quality by Design.
Why pharma partners walk away
The most visible commercial consequence of poor reproducibility is failed partnership evaluations. Large pharmaceutical and biotechnology companies evaluate ancillary technologies through structured technical assessments that typically run for three to six months. During this period, the product is tested by the partner's own scientists, using the partner's protocols, cell lines, and quality criteria.
The most common mode of failure in these evaluations is not that the product does not work. It is that the product does not work consistently. A substrate that supports attachment in seven out of ten tests is not useful for manufacturing. An assay that gives different readings depending on which operator runs it is not useful for quality control. The partner needs a product that performs within specification every time, because their regulatory obligations require them to demonstrate process control, and a tool that introduces variability is the opposite of control.
The typical failure trajectory for TechBio companies follows a pattern: strong initial breakthrough, promising early data, failure to reproduce at scale, integration or regulatory barriers, then capital exhaustion. The reproducibility problem is therefore not something to address after achieving product-market fit. It is a precondition for achieving it.
Designing for consistency
The practical responses to the reproducibility challenge fall into three categories: reducing biological input variability, standardising process controls, and implementing measurement systems that detect inconsistency early.
Biological input control. The cell is the primary source of variability, and it cannot be engineered away entirely. What can be controlled is the state in which cells enter a workflow. Master cell banks, maintained under defined conditions with regular quality testing, provide a more consistent starting point than ad hoc thawing of frozen stocks. Documenting passage number, culture history, and karyotype status at the point of use is not bureaucracy. It is the minimum required to interpret downstream results [5]. Glass et al. showed that stem cell gene expression prior to differentiation correlated with downstream organoid phenotypic variation, reinforcing that input cell quality is not just a quality control checkbox but a predictive variable for outcomes [3].
Process standardisation. Manual cell culture involves dozens of micro-decisions per passage: pipetting force, aspiration timing, incubator door opening frequency, reagent warming duration. Each of these introduces operator-dependent variation. Automation addresses some of this, but automated systems bring their own variability in calibration, consumable fit, and software version. The objective is not to eliminate human handling but to identify which process steps are most sensitive to variation and to control those specifically. The article on scale-up challenges examines how these sensitivities multiply at manufacturing volumes.
Early detection. If variability is inevitable, the next line of defence is to detect it before it propagates through a workflow. In-process monitoring, whether by imaging, metabolite measurement, or electrical impedance, can flag batches that are drifting outside specification before they reach the final quality control step. For the ancillary technology company, this is both a product opportunity and a validation requirement: if your tool is part of the workflow, it needs to detect problems as well as perform its primary function. The Pillar 2 article on characterisation gaps examines the measurement side of this challenge in more detail.
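As an illustration of the early-detection idea, a minimal in-process check can be sketched as a Shewhart-style control rule: flag a batch when a monitored reading drifts more than three standard deviations from the baseline established during validation. The monitored variable, baseline values, and readings below are hypothetical, chosen only to show the mechanism.

```python
# Minimal sketch of in-process drift detection using a 3-sigma control rule.
# Baseline statistics and readings are hypothetical, for illustration only.
from statistics import mean, stdev

def control_limits(baseline_readings, n_sigma=3.0):
    """Derive lower/upper control limits from in-spec validation runs."""
    centre = mean(baseline_readings)
    spread = stdev(baseline_readings)
    return centre - n_sigma * spread, centre + n_sigma * spread

def flag_drift(reading, limits):
    """Return True if a single in-process reading falls outside the limits."""
    lower, upper = limits
    return not (lower <= reading <= upper)

# Baseline: a hypothetical metabolite measurement from validated batches.
baseline = [2.1, 2.0, 2.2, 2.1, 1.9, 2.0, 2.1, 2.2]
limits = control_limits(baseline)

# Monitor a running batch and flag readings drifting out of specification.
for day, reading in enumerate([2.0, 2.1, 2.6], start=1):
    if flag_drift(reading, limits):
        print(f"Day {day}: reading {reading} outside control limits")
```

The point of the sketch is that the flag fires on day three, before the batch reaches final quality control. Real implementations would use validated acceptance criteria rather than a generic 3-sigma rule, but the principle of catching drift mid-process is the same.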
What this means for TechBio founders
The reproducibility problem is often framed as a scientific challenge, and it is. But for the company building tools for this field, it is foremost a commercial and engineering problem. The science tells you that variability exists and where it comes from. The engineering challenge is to design products that function within that variability rather than despite it. The commercial challenge is to produce evidence of that consistency in a form that regulators, partners, and customers can trust.
Companies that treat reproducibility as a quality assurance issue to address in late-stage development are making a strategic error. The earlier you design for variability, validate across conditions, and build a documentation trail, the shorter your path to a product that the market will adopt.
References
[1] Freedman LP, Cockburn IM, Simcoe TS. The economics of reproducibility in preclinical research. PLoS Biol. 2015;13(6):e1002165. DOI: 10.1371/journal.pbio.1002165
[2] Volpato V, Webber C. Addressing variability in iPSC-derived models of human disease: guidelines to promote reproducibility. Dis Model Mech. 2020;13(1):dmm042317. DOI: 10.1242/dmm.042317
[3] Glass MR, Waxman EA, Yamashita S, et al. Cross-site reproducibility of human cortical organoids reveals consistent cell type composition and architecture. Stem Cell Reports. 2024;19(9):1351-1367. DOI: 10.1016/j.stemcr.2024.07.008
[4] Sandoval SO, Cappuccio G, Kruth K, et al. Rigor and reproducibility in human brain organoid research: where we are and where we need to go. Stem Cell Reports. 2024;19(6):796-816. DOI: 10.1016/j.stemcr.2024.04.008
[5] International Society for Stem Cell Research. Standards for Human Stem Cell Use in Research. 2023. Available from: https://www.isscr.org/basic-research-standards
[6] Setting standards for stem cells. Nat Methods. 2023;20:1267. DOI: 10.1038/s41592-023-02016-5
[7] Borys BS, Dang T, Worden H, et al. Robust bioprocess design and evaluation of commercial media for the serial expansion of human induced pluripotent stem cell aggregate cultures in vertical-wheel bioreactors. Stem Cell Res Ther. 2024;15(1):232. DOI: 10.1186/s13287-024-03819-9
About StemCells.Help
StemCells.Help is an advisory consultancy that supports the innovation and real-world impact of life science applications built on developmental and stem cell biology. Founded by Dr Paul De Sousa, it draws on over four decades of experience spanning early embryo development, animal cloning, pluripotent stem cell manufacturing, and technology commercialisation. If you build tools for these domains or work in an emerging application where the biology is the enabling technology, StemCells.Help can provide experienced scientific counsel to ground your decisions. To discuss your needs, talk to Paul.
ORCID: 0000-0003-0745-2504
Web: stemcells.help