Want expert eyes on your spatial transcriptomics pipeline? From raw images to final figures, we help researchers catch silent failures before they become costly. Request a free consultation →
Spatial transcriptomics technologies have enabled biologists to interrogate gene expression in the context of tissue architecture - something that single-cell RNA-seq, for all its resolution, cannot offer. Platforms such as 10x Visium, Xenium, and NanoString CosMx are now widely used to study tumor microenvironments, neuronal circuits, organoid structure, and immune infiltration in diseased tissue. Yet, having supported many spatial transcriptomics projects from labs around the world - some successful, some barely salvageable - we must say this clearly: many spatial studies are built on false assumptions, untested workflows, and misread signals.
It is not only new users who fall into these traps. In fact, the most serious issues we have encountered often appear in projects from very capable teams. Sometimes, the errors are subtle and accumulate gradually - a poor alignment step, an undetected spatial artifact, a misinterpreted spot cluster - until the final results look compelling on the surface but collapse under scrutiny.
This two-part blog series outlines ten of the most common - and dangerous - failure modes in spatial transcriptomics data analysis. These lessons are drawn not only from client projects we have rescued, but also from mistakes we ourselves made earlier in our journey. We hope this series helps others avoid the same setbacks and advances the field toward more rigorous, reproducible spatial biology.
The Problem
The very foundation of spatial transcriptomics is the accurate mapping between molecular barcodes and tissue histology. Yet many researchers underestimate how fragile this alignment process is. In 10x Visium, for instance, the automated registration between the H&E image and the spatial barcode grid is performed by the Space Ranger pipeline. But it is not infallible - especially in the presence of tissue folds, staining artifacts, or irregular section geometry.
Why It Happens
We have encountered cases where subtle misalignment (sometimes only tens of microns) led to gross misinterpretation. For example, immune cell markers were thought to localize to the tumor edge, when in fact the underlying spot coordinates were offset and overlaid on stromal regions. In one project, the spatial gradient of hypoxia genes appeared biologically meaningful - until re-alignment showed the pattern was an artifact of a skewed overlay.
This happens because many analysts blindly accept the default image alignment. They do not visually inspect the overlay or cross-validate with known histological landmarks. Worse, the use of low-resolution tissue scans (or JPEG-compressed images) further degrades alignment fidelity. Very few teams reprocess with manual alignment - even when it’s necessary.
How We Address It
What we do differently is simple but crucial: we always perform visual inspection of the registered tissue overlay, adjust scaling and rotation manually when needed, and align to high-quality TIFFs whenever possible. We also cross-reference the spot overlay with known markers or anatomical features. Misaligned images can silently poison all downstream inferences - we do not take that risk.
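To make that inspection step concrete, here is a minimal sketch using scanpy, assuming a standard Space Ranger output directory (the path is hypothetical). It overlays the barcode grid on the high-resolution image using the scale factors Space Ranger reports, so skew or offset is visible at a glance:

```python
import matplotlib.pyplot as plt
import scanpy as sc

# Load Space Ranger output; sc.read_visium stores the H&E images and
# the scale factors needed to map spot coordinates onto them.
adata = sc.read_visium("spaceranger_out/")  # hypothetical path

library_id = list(adata.uns["spatial"].keys())[0]
img = adata.uns["spatial"][library_id]["images"]["hires"]
scale = adata.uns["spatial"][library_id]["scalefactors"]["tissue_hires_scalef"]

# Convert full-resolution spot coordinates to hires-image pixels and
# overlay them, then inspect landmarks (edges, folds, vessels) by eye.
xy = adata.obsm["spatial"] * scale
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(img)
ax.scatter(xy[:, 0], xy[:, 1], s=2, c="red", alpha=0.4)
ax.set_title("Spot overlay sanity check")
fig.savefig("overlay_check.png", dpi=300)
```

If the red grid drifts off the tissue near folds or edges, that is the cue to redo the registration manually (for example via Loupe Browser's manual alignment workflow) rather than push on.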
Struggling with alignment, QC, or tissue distortion? We've helped researchers fix foundational errors in spatial transcriptomics before they compromised results downstream. Request a free consultation →
The Problem
Most researchers know to remove spots with high mitochondrial reads, low UMI counts, or low gene complexity. But few realize that in spatial data, some of the most biologically important regions are also the dirtiest - at least by those superficial metrics.
Why It Happens
For example, in tissue border zones or necrotic cores, many spots show elevated mitochondrial content or lower transcript diversity. But these regions often contain immune infiltration, hypoxia response, or transitional cell states. Automatically filtering them because they fall outside of “normal” thresholds erases precisely the biology you are trying to study.
This mistake usually stems from applying thresholds learned from scRNA-seq. Spot-based data is different: each spot captures multiple cells, often from heterogeneous neighborhoods. The statistical properties vary not only by sample quality, but also by histological zone.
How We Address It
To address this, we never apply global QC cutoffs. Instead, we stratify by tissue region, use data-driven thresholds (e.g., elbow methods), and validate low-quality spots by comparing marker gene expression and histological alignment. Sometimes, we even retain spots with low UMIs but high signal for a specific pathway, especially in CosMx or Xenium data. Removing “bad” spots without domain knowledge is not quality control - it is self-sabotage.
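As an illustration of region-stratified QC, here is a minimal sketch in Python, assuming a spot-level AnnData object with a hypothetical pathologist annotation column `region` in `adata.obs`; the 5th-percentile cutoff stands in for the data-driven, elbow-style thresholds described above:

```python
import numpy as np
import scanpy as sc

# Standard QC metrics: total UMIs, gene complexity, mitochondrial fraction.
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)

# Threshold *within* each histological zone rather than globally, so that
# legitimately "dirty" regions (necrotic cores, border zones) are judged
# against their own distribution, not against pristine tissue.
per_region_cutoff = adata.obs.groupby("region")["total_counts"].transform(
    lambda x: np.percentile(x, 5)
)
adata = adata[adata.obs["total_counts"] >= per_region_cutoff].copy()
```

Spots that fail even their zone-specific threshold still deserve a look at marker expression and histology before deletion, as described above.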
The Problem
Spatial transcriptomics assumes that the tissue section is intact, evenly sliced, and well adhered to the slide. But in practice, this assumption fails more often than most people realize. Warping during embedding, partial detachment during staining, or tears introduced during microtomy can all cause structural distortions.
Why It Happens
These distortions may be small in physical space, but they wreak havoc on analyses. For example, linear tissue structures (like intestinal crypts or cortical layers) become curved or disconnected, breaking expected spatial continuity. If not accounted for, these deviations cause clustering algorithms like BayesSpace or SpaGCN to invent artificial boundaries or merge distinct zones.
What makes this dangerous is that teams often perform downstream analyses - clustering, spatial gene detection - as if the slide were perfect. Very few check for tissue warping systematically.
How We Address It
In our practice, we routinely inspect section integrity, apply geometric correction if needed, and avoid over-interpretation of spatial transitions that occur near damaged regions. Sometimes, we even mask out folded or lifted tissue regions prior to analysis. Clean data begins with clean slides - and if not, clean masking.
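When a fold or lifted region is obvious on the image, masking can be as simple as a point-in-polygon test. A minimal sketch, assuming the same AnnData object as above and a hypothetical polygon traced around the fold (e.g., exported from QuPath or napari) in full-resolution pixel coordinates:

```python
import numpy as np
from matplotlib.path import Path

# Hypothetical vertices outlining a folded region, in the same pixel
# coordinate system as adata.obsm["spatial"].
fold_polygon = np.array([[1200, 3400], [1850, 3300], [2100, 3900], [1500, 4200]])

# Drop spots whose centers fall inside the damaged area before clustering,
# so the clustering algorithm never sees the artifactual discontinuity.
inside = Path(fold_polygon).contains_points(adata.obsm["spatial"])
print(f"Masking {inside.sum()} spots inside the fold")
adata = adata[~inside].copy()
```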
Worried your spatial signals don’t reflect real biology? We specialize in separating meaningful patterns from artifacts in spatial transcriptomics datasets. Request a free consultation →
The Problem
All spatial platforms are susceptible to background noise - but the nature of that noise varies. In Visium, it is often due to tissue-free spots near the edge. In CosMx and Xenium, autofluorescence, light scattering, or incomplete washing can create localized false signals.
Why It Happens
We have seen reviewers puzzled by “gene expression” localized outside of tissue boundaries, or in acellular regions. One group found that a supposedly important immune gene was enriched in white matter - until we discovered it was signal leakage from a neighboring fluorescence channel.
Even within tissue, background noise can create misleading patterns. For example, in highly vascularized regions, nonspecific RNA sticking to damaged vasculature may appear as overexpression.
How We Address It
To manage this, we use spatial signal-to-noise metrics, remove tissue-free border zones, and perform background subtraction using negative control probes (if available). In CosMx or Xenium, we often use channel-specific filtering and spatial smoothing carefully - not to create cleaner images, but to remove predictable artifacts. Many teams assume that fluorescence equals expression - we know better.
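For panel-based platforms, negative control probes give a direct handle on background. Here is a minimal sketch, assuming Xenium-style probe naming; the "NegControlProbe" prefix and the 5x margin are assumptions to adjust for your panel (CosMx names its negative probes differently):

```python
import numpy as np

# Hypothetical naming convention: adjust the prefix for your panel.
neg_mask = adata.var_names.str.startswith("NegControlProbe")

# Mean negative-probe count per cell estimates the per-feature background rate.
bg_rate = np.asarray(adata[:, neg_mask].X.mean(axis=0)).ravel().mean()

# Flag genes whose mean signal is not clearly above background; the 5x
# margin is a judgment call, not a platform constant.
gene_means = np.asarray(adata[:, ~neg_mask].X.mean(axis=0)).ravel()
low_snr = gene_means <= 5 * bg_rate
print(f"{low_snr.sum()} genes at or below the estimated background level")
```

Genes that fail this check are candidates for exclusion - or at minimum for skeptical treatment in downstream spatial pattern analyses.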
Continue reading in Part 2 →