Launch HN: BlankBio (YC S25) - Making RNA Programmable

Hey HN, we're Phil, Ian and Jonny, and we're building BlankBio (https://blank.bio). We're training RNA foundation models to power a computational toolkit for therapeutics. The first application is mRNA design, where our vision is for any biologist to be able to design an effective therapeutic sequence.

BlankBio started from our PhD work in this area, which is open source: there's a model [2] and a benchmark with API access [0].

mRNA has the potential to encode vaccines, gene therapies, and cancer treatments. Yet designing effective mRNA remains a bottleneck. Today, scientists design mRNA by manually editing raw sequences (AUGCGUAC...) and testing the results through trial and error. It's like writing assembly code and managing individual memory addresses. The field is flooded with capital aimed at therapeutics companies: Strand ($153M), Orna ($221M), Sail Biomedicines ($440M). But the tooling to approach these problems remains low-level. That's what we're aiming to solve.

The big problem is that mRNA sequences are incomprehensible. They encode properties like half-life (how long the RNA survives in cells) and translation efficiency (how much protein gets made), but we don't know how to optimize for those properties directly. To get effective treatments, we need more precision: scientists need sequences that target specific cell types to reduce dosage and side effects.

We envision a future where RNA designers operate at a higher level of abstraction. Imagine code like this:

  seq = "AUGCAUGCAUGC..."
  seq = BB.half_life(seq, target="6 hours")
  seq = BB.cell_type(seq, target="hepatocytes")
  seq = BB.expression(seq, level="high")

To get there, we need generalizable RNA embeddings from pre-trained models. During our PhDs, Ian and I worked on self-supervised learning (SSL) objectives for RNA. SSL lets us train on unlabeled data, which has two advantages: (1) we don't depend on noisy experimental labels, and (2) there is far more unlabeled sequence than labeled data. However, the challenge is that standard NLP approaches don't work well on genomic sequences.

Using a joint-embedding architecture (contrastive learning), we trained a model to recognize functionally similar sequences rather than to predict every nucleotide. This worked remarkably well. Our 10M-parameter model, Orthrus, trained on 4 GPUs for 14 hours, beats Evo2, a 40B-parameter model trained on 1000 GPUs for a month [0]. On mRNA half-life prediction, just fitting a linear regression on our embeddings outperforms supervised models. This work from our academic days is the foundation for what we're building: we're improving the training algorithms, growing the pre-training dataset, and scaling up parameters, with the goal of designing effective mRNA therapeutics.
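
To make the linear-probe claim concrete, here is a minimal sketch of that evaluation: freeze the encoder, embed each sequence once, and fit an ordinary linear model on the embeddings. The embed_rna function and the data below are placeholder stand-ins (random vectors and synthetic half-lives) so the snippet runs on its own; it is not our actual pipeline.

  # Linear probe on frozen embeddings (stand-in encoder and data, not our pipeline).
  import numpy as np
  from sklearn.linear_model import Ridge
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)

  def embed_rna(seq: str, dim: int = 256) -> np.ndarray:
      # Placeholder for a pooled embedding from a pre-trained RNA encoder.
      return rng.normal(size=dim)

  # Toy data: random transcripts plus synthetic half-lives in hours.
  sequences = ["".join(rng.choice(list("AUGC"), size=120)) for _ in range(500)]
  half_lives = rng.normal(loc=6.0, scale=2.0, size=len(sequences))

  X = np.stack([embed_rna(s) for s in sequences])
  X_tr, X_te, y_tr, y_te = train_test_split(X, half_lives, test_size=0.2, random_state=0)

  probe = Ridge(alpha=1.0).fit(X_tr, y_tr)  # linear probe, no fine-tuning
  print("held-out R^2:", probe.score(X_te, y_te))

Swapping in real embeddings and measured half-lives turns this into the comparison described above.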

We have a lot to say about why these SSL approaches work better than next-token prediction and masked language modeling; some of it is in Ian's blog post [1] and our paper [2]. The big takeaway is that simply applying NLP scaling recipes to biological sequences won't get us all the way there: roughly 90% of the genome can mutate without affecting fitness, so training models to predict this noisy sequence yields suboptimal embeddings [3].
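
To make the joint-embedding idea concrete, here is a toy InfoNCE-style contrastive objective. It is illustrative only, not our training code: each positive pair is two related "views" of a transcript (e.g., orthologous or splice-isoform variants), and the encoder and batch below are random placeholders so the snippet runs standalone.

  # Toy InfoNCE-style contrastive loss illustrating joint-embedding training
  # (random stand-in encoder and batch; not our model or training code).
  import torch
  import torch.nn.functional as F

  def info_nce(z_a, z_b, temperature=0.1):
      # z_a, z_b: (batch, dim) embeddings of two views of the same transcripts.
      z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
      logits = z_a @ z_b.T / temperature   # similarity of every view-a to every view-b
      targets = torch.arange(z_a.size(0))  # the matching index is the positive pair
      return F.cross_entropy(logits, targets)

  encoder = torch.nn.Sequential(           # stand-in for a real sequence encoder
      torch.nn.Linear(4 * 120, 256), torch.nn.ReLU(), torch.nn.Linear(256, 128))
  view_a = torch.randn(32, 4 * 120)        # pretend one-hot RNA, flattened
  view_b = view_a + 0.1 * torch.randn_like(view_a)  # a perturbed second view
  loss = info_nce(encoder(view_a), encoder(view_b))
  loss.backward()

The objective rewards the model for mapping functionally related sequences close together rather than for reproducing every (often fitness-neutral) nucleotide.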

We think there are strong parallels between the digital and RNA revolutions. In the early days of computing, programmers wrote assembly, managing registers and memory addresses directly. Today's RNA designers are manually tweaking sequences, improving stability or reducing immunogenicity through trial and error. Just as compilers freed programmers from low-level details, we're building the abstraction layer for RNA.

We currently have pilots with a few early-stage biotechs proving out the utility of our embeddings, and our open-source model is used by folks at Sanofi and GSK. We're looking for: (1) partners working on RNA-adjacent modalities, (2) feedback from anyone who has tried to design RNA sequences (what were your pain points?), and (3) ideas for other applications. We've chatted with some biomarker companies, and preliminary analyses show improved stratification.

Thanks for reading. Happy to answer questions about the technical approach, why genomics is different from language, or anything else.

- Phil, Ian, and Jonny

founders@blankbio.com

[0] mRNABench: https://www.biorxiv.org/content/10.1101/2025.07.05.662870v1

[1] Ian’s Blog on Scaling: https://quietflamingo.substack.com/p/scaling-is-dead-long-li...

[2] Orthrus: https://www.biorxiv.org/content/10.1101/2024.10.10.617658v3

[3] Zoonomia: https://www.science.org/doi/10.1126/science.abn3943

