Mapping the competitive landscape when your category barely exists yet

Reading time: 9 minutes

In our Pillar 3 anchor we put five positioning questions in sequence. The first, covered in the article on workflow identification, asks which steps in which workflows your product actually serves. Once that view is built, the next question follows naturally: who else is operating at the same steps, and how does your product differ from them on substance?

This is the question of competitive landscape. In established markets, it has a familiar shape. Direct competitors exist. Their products are visible, their prices are roughly known, their customer bases overlap with yours. Differentiation is mapped along features, performance, price, and service. The analysis is well-practised.

In ancillary TechBio for developmental and stem cell science, the familiar shape rarely applies. Categories are emergent rather than established. A team assessing their competitive position often discovers that the firms they would naturally list as competitors are pursuing different workflows, different customers, or different applications — and that the real competition comes from somewhere else entirely. The classic mistake is to declare "we have no competitors" and treat that as a strength. In this field, having no direct competitors is almost always a sign that the competitive field has been read too narrowly, not that the field is empty.

The three forms of competition that actually matter

The competitive landscape for an ancillary technology serving stem cell science has three layers, and all three need to be mapped before any claim about differentiation can be defended.

Direct competitors are firms or academic groups building products that address the same workflow step for the same user population with the same performance parameters. In emerging categories these are frequently few, sometimes none. Their absence is informative but not conclusive.

Adjacent competitors are firms building products that address a different workflow step but whose output or presence shapes what your product has to do. A company building characterisation assays is not a direct competitor to a company building a cell expansion platform, but the characterisation methods that are credible in the market define what the expansion platform has to demonstrate it preserves. Adjacent technologies set the evidence bar your product will be held to.

The status quo is the workflow a potential buyer currently runs without your product. It is not a company, not a product, and not a marketing construct — it is the sequence of manual steps, in-house software, adapted protocols, and habituated compromises that produce the outcome your product is supposed to improve. The status quo is almost always the most powerful competitor an emerging category faces, because switching cost is paid against it whether or not it is named.

Most failed competitive-landscape exercises in TechBio treat only the first layer. The result is a picture that is flattering (few direct competitors, clear differentiation on features) and inaccurate (strong status quo, mis-read adjacent market). The teams that hold themselves to mapping all three layers make fewer positioning errors downstream.
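The three layers are concrete enough to record as structured data, and doing so makes the failure mode above mechanical to catch. Here is a minimal sketch in Python, with invented names (Layer, Competitor, Landscape) rather than any established tool; the rule it encodes is simply that a map is incomplete until every layer has either an entry or a recorded reason for its absence.

```python
from dataclasses import dataclass, field
from enum import Enum

class Layer(Enum):
    """The three layers of competition described above."""
    DIRECT = "direct competitor"
    ADJACENT = "adjacent technology"
    STATUS_QUO = "status quo workflow"

@dataclass
class Competitor:
    name: str       # a firm, a method, or a description of current practice
    layer: Layer
    note: str = ""  # e.g. the evidence bar this entry sets

@dataclass
class Landscape:
    workflow_step: str
    entries: list = field(default_factory=list)

    def missing_layers(self):
        """Layers with no entry at all: each is an unanswered
        question, not evidence of an empty field."""
        present = {e.layer for e in self.entries}
        return set(Layer) - present

# Hypothetical example: an automated hiPSC expansion platform.
m = Landscape("hiPSC expansion")
m.entries.append(Competitor("manual culture, in-house protocol", Layer.STATUS_QUO))
m.entries.append(Competitor("characterisation assay vendors", Layer.ADJACENT,
                            "define what 'comparable quality' must mean"))
print(m.missing_layers())  # {Layer.DIRECT} -- record why, don't claim a moat
```

Run against a real product, the check turns "we have no competitors" from a claim into a to-do.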

Reading the status quo as a competitor

The status quo earns more attention than it usually gets. It has a property that named competitors lack: it is already installed, already trusted, and already producing acceptable-if-imperfect outcomes. A potential buyer who keeps the status quo pays no procurement cost, no validation cost, no training cost, no documentation cost, and no integration cost. A product that aims to displace the status quo has to justify all of those costs against whatever improvement it offers.

A concrete example helps. Automated cell culture systems for human induced pluripotent stem cells compete, in practice, against manual cell culture. A 2021 study in Stem Cell Reports by Tristan and colleagues at the NIH reported a robotic platform that produced billions of hiPSCs in parallel from up to 90 patient-specific cell lines under chemically defined conditions, with comparable performance to manual operations across molecular and cellular characterisation [1]. The study itself is a careful benchmarking exercise. What it reveals about positioning is that the comparator is manual culture, not other robots. For a company bringing a similar automation platform to market, the competitive argument is not primarily "our robot is better than theirs". It is "our system replaces a manual workflow with documented variability, operator-dependent yields, and scaling constraints at a total cost of ownership that the laboratory should accept". The competitive reference point is the thing the laboratory already does.

This reframing matters because it changes what evidence the company needs to produce. A head-to-head comparison with another automation vendor is secondary. The primary evidence burden is a clean comparison with the laboratory's own manual baseline: does the product produce cells of equivalent or better functional quality, with lower variability, at an acceptable cost per batch, while fitting the rest of the laboratory's workflow? If the status quo answer is unfavourable, the product does not displace anything, regardless of how it compares to other automation platforms.
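The shape of that comparison can be shown with a few lines of illustrative arithmetic. Every number below is an invented placeholder, and the function is ours, not a published costing model; the point is only that the denominator of the comparison is the buyer's own manual baseline.

```python
def cost_per_usable_batch(cost_per_batch: float, failure_rate: float) -> float:
    """Effective cost once failed batches are paid for but yield nothing."""
    return cost_per_batch / (1.0 - failure_rate)

# Hypothetical manual baseline: cheaper per run, operator-dependent failures.
manual = cost_per_usable_batch(cost_per_batch=2_000, failure_rate=0.25)

# Hypothetical automated system: amortised instrument cost raises the
# per-batch figure, but failures fall with operator dependence.
automated = cost_per_usable_batch(cost_per_batch=2_400, failure_rate=0.05)

print(f"manual:    {manual:7.0f} per usable batch")    # ~2667
print(f"automated: {automated:7.0f} per usable batch")  # ~2526
# If this comparison is unfavourable, head-to-head comparisons with other
# automation vendors are moot: the product displaces nothing.
```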

Reading adjacent technologies as competition

Adjacent technologies do not compete for the same purchase order, but they constrain what the purchase means. Two examples illustrate.

First, if your product's value proposition is to improve mesenchymal stromal cell consistency through a specific process intervention, the way "consistency" gets measured in the market is set by adjacent characterisation tools. A 2021 review in Acta Biomaterialia by Dunn and colleagues catalogued the strategies proposed to address MSC heterogeneity — clonal selection, priming, donor screening, critical-quality-attribute panels, potency assays — and observed that none has emerged as a standard, which leaves the market without a shared definition of what "reduced heterogeneity" looks like [2]. For a company selling a heterogeneity-reduction tool, this is a competitive problem even though the referenced strategies are not competitor products. The adjacent methodological landscape defines the evidence the tool has to generate to be credible.

Second, if your product serves a specific part of a cell therapy workflow, the scale logic of adjacent steps constrains yours. An autologous iPSC-based cell therapy scales out: many small, patient-specific batches produced in parallel. An allogeneic therapy scales up: few large batches each supplying many patients [3]. A tool built for one scale logic competes not only with other tools built for the same logic but also with the alternative logic itself. An automated small-batch system competes with the argument that allogeneic is the economically viable model and autologous tools are a dead end. An allogeneic large-batch system competes with the argument that immune compatibility favours autologous and the infrastructure for parallel small batches is what the field needs. These are competitive frames even though they are not competitive products.
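The two scale logics can be put side by side with equally simple arithmetic. The figures below are hypothetical throughout; what they show is how differently the infrastructure question is posed under each logic.

```python
import math

patients = 1_000  # hypothetical programme size

# Scale-out (autologous): one patient-specific batch per patient,
# produced in parallel.
autologous_batches = patients                  # 1,000 small batches

# Scale-up (allogeneic): each batch supplies many patients.
doses_per_batch = 200                          # hypothetical
allogeneic_batches = math.ceil(patients / doses_per_batch)  # 5 large batches

print(autologous_batches, "parallel small batches vs",
      allogeneic_batches, "large batches for the same", patients, "patients")
# A tool built for one logic competes with the claim that the other
# logic is the viable one, not only with tools built for the same logic.
```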

The general rule is that adjacent technologies set the terms on which your product is evaluated. Mapping them is part of mapping the competitive landscape, even though they do not appear on a direct-competitor spreadsheet.

When direct competitors are scarce, the category is the problem

In a category that barely exists, direct competitors are often few. The temptation is to treat this as an advantage. It is usually the opposite. A sparse direct field typically signals one of three conditions, each of which is a positioning risk rather than a moat.

The first condition is that the category has been defined too narrowly. A company describes itself as "the only firm building X for Y" where X and Y are specified with enough precision to exclude every plausible competitor. The specification sounds like differentiation but functions as isolation. The narrowly defined category also excludes the buyers who would otherwise recognise the product as solving their problem, because their problem sits at a less narrowly defined level.

The second condition is that the category is real but commercially immature. Other firms have looked at the same problem and decided it is not yet economically tractable. Being the first in a category is only an advantage if the category will exist when the product is ready. For some emergent categories in stem cell science, the infrastructure that would make the category commercially viable — shared standards, validated reference materials, regulatory guidance, customer willingness to pay — is still forming. A product built for a category that has not yet arrived competes against the question of whether the category will arrive in time.

The third condition is that the stated competitors are not the actual competitors. The firms nominally in the category are not what potential buyers would evaluate the product against; the buyers evaluate it against the status quo, against an adjacent technology, or against not doing the project at all. A competitive landscape that omits these is a fiction.

A useful discipline is to ask: if a potential buyer sees a demo of our product and is impressed, what are the three most likely reasons they still do not purchase? If the answers include "they stick with what they have", "they divert the budget to a characterisation tool", or "they decide the project is not ready", then the competition that matters is the status quo, the adjacent tool, and the project-scope decision — not a competitor product at all.

What a defensible competitive map looks like

A competitive map that can actually guide positioning has five elements, regardless of whether direct competitors are abundant or scarce.

A named workflow step. The map is not of "the cell therapy market"; it is of the specific step your product changes. This is the link back to the workflow identification article.

The status quo for that step. What do the target users do at this step today, with what tools, at what cost, and with what outcomes? This is described operationally, not aspirationally.

Direct competitors for that step, if any. Firms or groups that offer a different way of doing the same step for the same users. If there are none, that fact is recorded and its plausible causes are assessed rather than claimed as a moat.

Adjacent technologies that set the terms. Products that address neighbouring steps, or methodological approaches that define what "good" looks like at your step. These shape the evidence your product has to generate.

A positioning statement derived from the above. One sentence, concrete enough to be falsified, naming the user, the step, the status quo comparator, and the specific improvement. A claim that cannot be stated in this form is not yet a position.
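These five elements fit naturally into a single record, and most of the discipline above is checkable. The sketch below uses invented field names and is an illustration of the exercise, not a template from any methodology; the rule it encodes is the one just stated: an empty direct-competitor list demands an assessment, and a map with missing parts is not yet a position.

```python
from dataclasses import dataclass, field

@dataclass
class CompetitiveMap:
    workflow_step: str                 # the named step, not "the market"
    status_quo: str                    # what users do today, operationally
    direct_competitors: list = field(default_factory=list)
    absence_assessment: str = ""       # required if the list above is empty
    adjacent_technologies: list = field(default_factory=list)
    positioning_statement: str = ""    # user, step, comparator, improvement

def gaps(m: CompetitiveMap) -> list:
    """Return the holes that keep this map from guiding positioning."""
    problems = []
    if not m.workflow_step.strip():
        problems.append("no named workflow step")
    if not m.status_quo.strip():
        problems.append("status quo not described operationally")
    if not m.direct_competitors and not m.absence_assessment.strip():
        problems.append("empty direct field claimed as a moat, not assessed")
    if not m.adjacent_technologies:
        problems.append("no adjacent technologies mapped")
    if not m.positioning_statement.strip():
        problems.append("no falsifiable positioning statement")
    return problems

# A half-finished map reports three gaps, not a flattering picture.
print(gaps(CompetitiveMap(workflow_step="hiPSC expansion",
                          status_quo="manual culture, weekly passaging")))
```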

The exercise is uncomfortable because it usually reveals that the competitive claim the team has been making — "we are the only firm doing X" or "we are better than competitor Y on benchmark Z" — is either too narrow to be commercially meaningful or too broad to be defended. This is useful information. Positioning errors discovered at this stage are cheap. The same errors discovered after launch cost years.

What this leads to next

With the workflow identified and the competitive landscape read honestly, the next question is whether the users running that workflow will actually pay to change it. That is the domain of demand validation: testing, before committing to development, whether the scientists and engineers you believe need your tool actually experience the problem you think they do and would prioritise a solution over their current workarounds. The competitive map produced here is one of the inputs to that testing, because the alternatives it names are the alternatives buyers will weigh your product against.

The remaining positioning questions follow. Who is the buyer, and what training do they bring to the decision? Can you describe the product in language that earns their trust? For applications where developmental and stem cell biology is itself the ancillary technology, the article on prior art in laboratory and domesticated species adjusts the competitive-landscape logic for cases where established knowledge from other species is the most important reference point.

For readers who want the biology grounding, the Pillar 1 articles on key stem cell methods and current applications supply it. The Pillar 2 series on why TechBio fails provides the failure-mode context in which positioning decisions play out.

References

[1] Tristan CA, Ormanoglu P, Slamecka J, et al. Robotic high-throughput biomanufacturing and functional differentiation of human pluripotent stem cells. Stem Cell Reports. 2021;16(12):3076-3092. DOI: 10.1016/j.stemcr.2021.11.004

[2] Dunn CM, Kameishi S, Grainger DW, Okano T. Strategies to address mesenchymal stem/stromal cell heterogeneity in immunomodulatory profiles to improve cell-based therapies. Acta Biomaterialia. 2021;133:114-125. DOI: 10.1016/j.actbio.2021.03.069

[3] Madrid M, Sumen C, Aivio S, Saklayen N. Autologous induced pluripotent stem cell-based cell therapies: promise, progress, and challenges. Current Protocols. 2021;1(3):e88. DOI: 10.1002/cpz1.88

About StemCells.Help

StemCells.Help is an advisory consultancy that supports the innovation and real-world impact of life science applications built on developmental and stem cell biology. Founded by Dr Paul De Sousa, it draws on over four decades of experience spanning early embryo development, animal cloning, pluripotent stem cell manufacturing, and technology commercialisation. If you build tools for these domains or work in an emerging application where the biology is the enabling technology, StemCells.Help can provide experienced scientific counsel to ground your decisions. To discuss your needs, talk to Paul.

ORCID: 0000-0003-0745-2504

Web: stemcells.help
