The Instrument
The mRNA vaccine exists because Katalin Karikó spent thirty years on an idea no one believed in, and because she happened to meet Drew Weissman at a photocopier. 1. Gina Kolata, “Kati Karikó Helped Shield the World From the Coronavirus,” The New York Times, April 8, 2021. Karikó and Weissman shared the 2023 Nobel Prize in Physiology or Medicine for their work on nucleoside-modified mRNA. The transistor emerged because Mervin Kelly at Bell Labs spent years scouting graduate programs for people who built radios as children and carried a maker’s temperament alongside theoretical depth, then put three of them in the same corridor. 2. Jon Gertner, The Idea Factory: Bell Labs and the Great Age of American Innovation, Penguin Press, 2012. Kelly’s talent scouting, the personality traits he sought, and the physical design of Bell Labs are detailed throughout. The structure of DNA was solved because James Watson sought out Francis Crick, a physicist, and both had access to Rosalind Franklin’s X-ray crystallography 3. James Watson, The Double Helix, Atheneum, 1968. For Franklin’s contribution, see Brenda Maddox, Rosalind Franklin: The Dark Lady of DNA, HarperCollins, 2002. : three disciplines brought to bear on a single question.
In each case, the breakthrough began with a person, or a collision between people, that no one had planned. Bell Labs, Xerox PARC, the Institute for Advanced Study all understood this. Beneath everything else, each was an answer to the same question: how do you find the right people and bring them together?
Kelly could answer that question because the landscape was small enough for one person to hold in view: a few thousand researchers, a handful of universities. Today, eight million researchers work across thousands of institutions in nearly every country on earth. 4. UNESCO Institute for Statistics, R&D Data Release, 2024. The global count was approximately 9 million FTE researchers as of 2022; “eight million” is a conservative round number. The methods we use to find them are still the methods Kelly used in the 1930s: reputation, networks, conferences, institutional prestige. The landscape has grown by orders of magnitude, but the methods have not.
In 1609, Galileo pointed a telescope at Jupiter and saw moons that had always been there. Science has since built instruments for every scale of reality: telescopes for the distant, microscopes for the small, sequencers for the invisible, accelerators for the fundamental. It has never built an instrument to see the people who make the science possible. Atlas is that instrument.
The Faculty
The common explanation of Bell Labs goes like this: gather brilliant people, give them freedom, fund them for decades, design the building so they collide in hallways. Mervin Kelly did all of these things. He built the Murray Hill facility with corridors so long they disappeared at a vanishing point, ensuring that walking from one end to the other meant encountering a dozen colleagues, problems, and ideas along the way. He gave researchers years to work without deadlines or progress reports. He mixed physicists with metallurgists with electrical engineers and housed them side by side.
But all of this depended on something prior: finding the right people.
Starting in the late 1920s, Kelly personally scouted graduate programs across the country. He was looking for people who had grown up with a peculiar desire to know more about the stars or the radio, and who had almost always built something with their hands. They carried what one observer described as an exotic mix of conscientiousness, high openness, and highly directional neuroticism. People who woke up in the middle of the night asking whether they had accomplished anything worthwhile, who combined a maker’s temperament with a theorist’s depth.
Kelly had taste in people the way a great editor has taste in writers. He could see something in a twenty-three-year-old physics student that no metric would capture for another decade. When the transistor was demonstrated on December 23, 1947, Kelly had not even been told what Bardeen and Brattain were working on. He did not need to manage the discovery; he needed to have already found the people capable of making it.
Richard Hamming, who spent thirty years inside the institution Kelly built, asked every scientist he ate lunch with the same question: “What are the important problems of your field?” And then, after a pause: “Why aren’t you working on them?” 5. Richard Hamming, “You and Your Research,” talk delivered at Bell Communications Research, March 7, 1986. Transcribed and widely reprinted; available at cs.virginia.edu. Hamming was probing what a person believed mattered enough to organize a career around. The answer, or its absence, told him more than any CV.
Bob Taylor, who built Xerox PARC’s Computer Science Laboratory, had the same gift. Taylor was a psychologist with no formal computer science training. One colleague described him as a concert pianist without fingers. He recruited fifty of the best computer scientists in the country, and in three years they invented the personal computer, Ethernet, the laser printer, and the graphical user interface. When Xerox pushed Taylor out in 1983, fifteen of his top researchers resigned and followed him. Chuck Thacker, co-inventor of the personal computer, said: “If you are looking for the magic, it was him.” 6. Michael Hiltzik, Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age, HarperBusiness, 1999. The “concert pianist without fingers” description is from Severo Ornstein. The unstated hiring criterion at PARC was that every new person had to raise the average of the group. Taylor, like Kelly, could see who would.
Call the thing Kelly and Taylor possessed a faculty: the ability to perceive in a person qualities that no metric captures and no credential guarantees. Simone Weil called attention “the rarest and purest form of generosity.” 7. From a letter to the poet Joë Bousquet, April 13, 1942. Often attributed to Gravity and Grace; the source is Joseph Marie Perrin, Mon dialogue avec Simone Weil, 1984. The faculty is that quality of attention directed at people. Every institution that produced transformative science had someone with it, and every time, it was personal, informal, and dependent on a single individual’s taste.
The faculty is fragile. In 1913, Srinivasa Ramanujan was a shipping clerk in Madras earning twenty pounds a year, with no university education. He sent nine pages of mathematical results to three professors at Cambridge. The first two returned his papers without comment. The third was G. H. Hardy, who almost dismissed the letter as a crank’s. That evening he looked again, showed the pages to his colleague Littlewood, and by the next day Bertrand Russell found them both in a state of wild excitement, convinced they had found a second Newton. Hardy later said that discovering Ramanujan, not proving any theorem, was his single greatest contribution to mathematics. 8. Robert Kanigel, The Man Who Knew Infinity: A Life of the Genius Ramanujan, Scribner, 1991. Hardy’s self-assessment appears in C. P. Snow’s foreword to A Mathematician’s Apology, 1967 edition.
The same letter sat on three desks. Ramanujan’s trajectory, from shipping clerk to one of the greatest mathematicians in history, turned on whether one person, on one evening, decided to look twice.
The faculty is also rare, and when it disappears, the institution decays from the inside. Arena BioWorks launched in January 2024 with $500 million from five billionaire backers and an explicit aspiration to become the next Bell Labs. They recruited a co-founder from the Broad Institute and a CRISPR pioneer from Harvard Medical School. They had the money, the freedom, the location, and the stated intention to let scientists pursue high-risk research without the constraints of academia. Nineteen months later they laid off thirty percent of their staff. By November 2025 they shut down entirely. 9. “Billionaire-backed Arena BioWorks shutters less than 2 years after launch,” Fierce Biotech, November 4, 2025. Arena launched with $500 million from five backers including Michael Dell and Steve Pagliuca.
Arena had replicated the container with resources Kelly could not have imagined, but they had missed his foundational insight: finding the right people comes first. The rest is secondary.
Episteme was built on this insight. Before it had a lab or a building, it had two years of conversations with hundreds of scientists and engineers across the world, each one an attempt to understand what the existing system was failing to see. What emerged was a conviction about what matters most in a researcher: the scientific substance of their work, their ability to execute, and their theory of change. The third proved the most important. Theory of change is the question a researcher carries across projects, the one that sustains her when funding disappears and the field moves on. It separates the person who will still be working on an idea in year twenty-nine from the person who moves on when the field cools.
At its root, theory of change is simple. In Big Hero 6, Tadashi runs into a burning building: “Someone has to help.” Three words, before any professionalization. You can only learn what drives someone this way by sitting across from them and listening.
Episteme invests in people across whatever arc their work requires. If an idea fails, the researcher pivots and Episteme stays. The bet is always on the person, because someone who generates good questions will keep generating them. You do not fund a hypothesis; you fund the kind of person whose hypotheses are worth funding for the rest of their life.
But the faculty for finding those people cannot scale. Kelly’s method worked because the landscape was small enough for one person to hold in their head. That small landscape produced the transistor, the laser, information theory, Unix, the solar cell, and eleven Nobel Prizes. It no longer exists. No individual, however gifted, can see eight million researchers across thousands of institutions. The faculty remains essential, but without a way to extend it across a landscape that has grown by three orders of magnitude, the people we need to find remain invisible.
The Shadow
Science measures its people constantly.
The h-index, introduced in 2005 by physicist Jorge Hirsch, 10. J. E. Hirsch, “An index to quantify an individual’s scientific research output,” Proceedings of the National Academy of Sciences 102(46), 16569–16572, 2005. reduces a researcher’s entire career to a single number: the largest h such that h of their papers have each been cited at least h times. Within a few years of its invention, this number had become one of the most consequential figures in a scientist’s life. It shapes hiring decisions, promotion cases, and grant reviews. In some countries and disciplines, publishing in journals with impact factors below a certain threshold is officially considered to have no value.
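The definition is mechanical enough to state in a few lines. The sketch below implements Hirsch’s published definition directly; it is an illustration, not any citation database’s actual implementation:

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each.

    `citations` is a list of per-paper citation counts.
    """
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            # The paper at this rank still has at least `rank` citations,
            # so an h of `rank` is achievable.
            h = rank
        else:
            break
    return h

# A researcher with papers cited [10, 8, 5, 4, 3] times has h = 4:
# four papers each have at least four citations; five do not.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note what the computation never touches: why the papers were written, what they enabled, or where the author is headed. Everything below in this section follows from that omission.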
The metric has become so central to scientific careers that researchers have learned to optimize for it: strategic self-citation, gift authorship, salami-slicing results across multiple papers to maximize publication count. A large-scale study found that the correlation between h-index and scientific awards in physics dropped from 0.34 in 2010 to 0.00 in 2019. 11. Koltun and Hafner, “The h-index is no longer an effective correlate of scientific reputation,” PLOS ONE, 2021. The decline is associated with changing authorship patterns, including the rise of hyperauthorship. When a measure becomes a target, it ceases to be a good measure. The h-index no longer correlates with the thing it was supposed to measure.
But the gaming is secondary to a deeper problem: what these metrics were designed to measure in the first place.
The h-index measures publication volume weighted by citation count. Citations correlate with audience size, not importance. Impact factor is a property of the journal, not the scientist. Institutional prestige reflects the selection effects of admissions committees decades earlier. Grant history measures fit within a funding system built around incremental, risk-averse progress. Each of these captures something real. They are instruments, and like all instruments, they reveal exactly what they were calibrated to reveal.
They were calibrated to see a specific kind of researcher: prolific, well-cited, well-credentialed, productive within established disciplinary boundaries. For that person, they work well. They are structurally incapable of seeing anyone else.
Consider what they miss. A postdoc who has pivoted from condensed matter physics to computational biology, carrying insights that could transform how protein folding is modeled, has an h-index that reflects the field she left. The reason someone chose this problem, the motivation no publication record encodes, is invisible to any citation-based metric. A scientist who spends two years building an experimental platform that twenty other labs use has produced something far more durable than a paper, but publication metrics draw a hard line between the researcher and the artifact. Every existing metric measures accumulation, which means the person who will matter most in ten years is the person these metrics are least equipped to find today.
Katalin Karikó is the clearest example. For thirty years, she worked on mRNA therapeutics. Her h-index was low. Her funding was nonexistent. She was demoted at the University of Pennsylvania, her grant applications rejected year after year. 12. “After Shunning Scientist, University of Pennsylvania Celebrates Her Nobel Prize,” The Wall Street Journal, October 2023. Karikó accepted demotion from the tenure track in the late 1990s rather than abandon her mRNA research. By every metric science had, she did not matter. When she met Drew Weissman at the photocopier, no system in the world would have flagged her as someone worth paying attention to.
The infrastructure we have built for evaluating scientists works perfectly for what it measures. It measures the shadow a researcher casts inside the existing academic system: publication record, citation count, institutional affiliation, grant history. These shadows are real and correlate with real things. But they are projections of a person onto a surface. Plato’s prisoners mistake shadows for reality not because the shadows are wrong, but because they have never turned around to see what casts them. Our metrics have the same limitation: they see the projection, not the person.
Kelly looked at people. He saw the restlessness, the maker’s temperament, the specific kind of obsession that would sustain a researcher through years of uncertain work. His method cannot scale to the landscape as it exists today, but it raises a question worth taking seriously: whether an instrument could learn to see some of what he saw.
The Compiler
Previous attempts to see researchers at scale all failed for the same reason: the signal was too expensive to extract.
The qualities Kelly perceived exist in the record. A researcher’s publication history, read chronologically, shows direction. Her choice of problems, traced across a career, encodes motivation. Her GitHub profile shows whether she builds. Her grant proposals describe what she believes her work will make possible. The personal statement on her lab website sometimes contains, in a single paragraph, the question that drives her research. All of this information is public, scattered across dozens of databases and websites, in formats no system has ever synthesized.
The synthesis was the bottleneck. For one researcher, reading and interpreting all of this takes an afternoon. For eight million, it was, until recently, impossible.
Large language models have changed this. For the first time, a system can read a researcher’s entire body of work and begin to perceive the patterns a human scout would recognize: direction shifting across a career, questions persisting across domains, the difference between someone who uses a technique and someone who thinks in it. 13. The word is begin. Theory of change, like all tacit qualities, resists full articulation. What extraction captures is an approximation, useful for narrowing the landscape but insufficient for replacing judgment. The faculty remains essential at the final stage.
What once required Kelly to visit a graduate program in person now takes minutes of compute. The significance of this is hard to overstate, but it cuts both ways. A scout no longer reads every publication; she reviews a computed trajectory and decides whether what she sees warrants a conversation. Yet an AI system trained on h-index and citation counts will reproduce those blindnesses at scale, at speed, and without the human judgment that sometimes corrects for them. What you calibrate AI to see is what AI sees. Calibrate it for shadows and it will see shadows, faster and more confidently than any human bias ever could.
There is, however, a crucial difference between extracting facts and understanding people.
A researcher’s arc cannot be objectively verified the way a fact can. Two people looking at the same publication history may perceive entirely different trajectories. One sees a physicist drifting aimlessly between subfields; another sees a physicist systematically assembling the tools to solve a problem that does not have a name yet. The interpretation is where the insight lives. Hardy and the two professors who returned Ramanujan’s letter looked at the same nine pages and reached opposite conclusions. The difference was not in the data but in the person looking.
This is why Atlas needs a model rather than a formula: a set of commitments about what matters in a researcher, which determines what the instrument can see. A different model would see different people. The model is calibrated by conviction, not consensus.
The compiler makes extraction possible at scale; the model determines what the extraction looks for. Together they produce an instrument that can survey the landscape and surface the people a human scout should meet. Kelly could hold a few hundred researchers in his head. An instrument could narrow eight million to eight hundred; a person with the faculty could narrow eight hundred to eight.
What It Means to See
Every instrument embeds a model of the thing it is trying to see, and the model determines what the instrument can detect and what it is blind to.
The model embedded in current metrics is simple: a researcher is a collection of outputs. Papers, citations, grants, degrees, affiliations. The system ingests these outputs, counts them, weights them, and produces a score that stands in for the person. Martin Buber described two ways of encountering another being: as It, an object to be measured and classified, or as Thou, a person to be met. 14. Martin Buber, Ich und Du, 1923; English translation by Ronald Gregor Smith, 1937. The entire metric system is an I-It infrastructure applied to people. It captures what a researcher has deposited into the academic system, but it cannot capture the researcher herself, because a person reduced to measurable outputs has already ceased to be a person in any meaningful sense.
The philosopher Michael Polanyi gave the epistemological reason: “We know more than we can tell.” 15. Michael Polanyi, The Tacit Dimension, Doubleday, 1966. The qualities that made Kelly recognize a future Nobel laureate in a graduate student, the judgment that told Hardy to look twice at a letter from Madras, theory of change itself: these are forms of tacit knowledge, real and consequential but resistant to articulation. They can be perceived by someone with the faculty for seeing. They cannot be captured by counting outputs.
Atlas embeds a different model: a researcher is an arc.
A CV tells you where a researcher is: what institution, what title, what publications as of today. It tells you nothing about where she is headed. Shelley described certain people as “mirrors of the gigantic shadows which futurity casts upon the present.” 16. Percy Bysshe Shelley, A Defence of Poetry, 1821 (published posthumously, 1840). Current metrics project backward, recording what someone has already done. The faculty perceives something else: a person whose work is not yet legible to the system but whose direction is. A materials scientist who has started publishing in synthetic biology is carrying tools and intuitions into a field that has never seen them, and the intersection of what she knows and what the new field lacks is where breakthroughs tend to happen.
Within a trajectory, the most revealing moments are the turns: points where a researcher’s direction shifts. A computational chemist publishes steadily in her field for six years, then abruptly begins collaborating with a neuroscience lab. Something caused that turn, a paper or a conversation or a question that found a new form, and whatever it was, the turn is where something crossed a boundary and one person’s work collided with another’s. Watson seeking out Crick was a turn. The collisions Kelly engineered in the hallways of Murray Hill were attempts to manufacture turns.
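One hedged way to operationalize turn detection, assuming each paper can be represented as a topic vector (the embeddings here are hypothetical stand-ins for whatever representation a real pipeline would use), is to compare the centroid of a window of papers before each point in a career with the centroid of the window after it. A sharp drop in similarity marks a candidate turn:

```python
from math import sqrt

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[k] for v in vectors) / len(vectors) for k in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def detect_turns(paper_vectors, window=3, threshold=0.5):
    """Flag indices where the next `window` papers diverge sharply
    from the previous `window`, in chronological order.

    `threshold` is an illustrative cutoff, not a calibrated value.
    """
    turns = []
    for i in range(window, len(paper_vectors) - window + 1):
        before = centroid(paper_vectors[i - window:i])
        after = centroid(paper_vectors[i:i + window])
        if cosine(before, after) < threshold:
            turns.append(i)
    return turns

# Four papers in one topic direction, then four in an orthogonal one:
career = [[1.0, 0.0]] * 4 + [[0.0, 1.0]] * 4
print(detect_turns(career, window=2, threshold=0.5))  # → [4]
```

Flagging a turn is the easy half; the hard half, deciding whether it marks drift or the systematic assembly of tools, is exactly where the faculty still has to intervene.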
But trajectory alone is insufficient. Two researchers can have identical publication records and entirely different reasons for the work. One publishes in a field because it is productive; the other because she believes it is the path to solving a problem she cannot put down. The first will leave when the field cools. The second will stay for decades because the question has not been answered. Theory of change lives in the pattern of questions a researcher returns to, in the choices she makes about which problems deserve sustained attention. It is the most consequential distinction Atlas can make.
A researcher’s intellectual landscape also has a topology: which fields she draws from, which boundaries she crosses, which questions persist as she moves between domains. The shape tells you something keywords cannot: how someone thinks, what she connects, what kind of problems she is equipped to solve. A researcher who published one paper using a technique from another field is adjacent; one who has spent years learning to think in both fields is a bridge. The difference between adjacency and fluency is the difference between a tourist and a translator, and an instrument worth building can tell them apart.
Researchers also exist in relation to one another. The history of science is a history of connections: who worked with whom, who influenced whom, who should have met and never did. The instrument must make those collisions deliberate, and it must see across scale: a single researcher has a trajectory, a lab has a composition, a field has a shape, and the global community has gaps and concentrations invisible from inside any one of them.
What the Instrument Reveals
Atlas does not yet exist as the instrument described in the previous section. What exists is the beginning of one.
The value of an instrument is proven by what it reveals that was previously invisible. Even Atlas in its earliest form, pointed at Episteme’s own researcher pipeline and closer to a hand-drawn map than a satellite, surfaced patterns that traditional evaluation would have missed.
The pipeline held approximately 2,400 researchers, identified through manual scouting, professional networks, and early tooling. It was built the way most organizations build pipelines: professors were asked who their best students were, and scouts attended conferences and followed citation trails. The pipeline reflects the landscape the scouts could see, which is a small fraction of the landscape that exists.
When we looked at the pipeline as a landscape rather than a list of individuals, the first thing we saw was concentration. Ninety-three percent of researchers were US-based. Fifty-three percent came from Boston and the Bay Area alone. Forty-three percent came from five institutions: MIT, Stanford, Harvard, Princeton, and Berkeley. A search intended to find the best researchers on earth had, in practice, searched a handful of buildings in two cities. Frontier groups in Toronto, Glasgow, Liverpool, the Max Planck Institutes, and EPFL were barely represented. International researchers accounted for seven percent.
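Concentration of this kind is simple to measure once the pipeline is tabulated. A minimal sketch, assuming each researcher record carries fields like `institution` (the field names and records below are illustrative, not Episteme’s actual schema):

```python
from collections import Counter

def top_share(researchers, key, top_n):
    """Fraction of the pipeline accounted for by the `top_n`
    most common values of `key`."""
    counts = Counter(r[key] for r in researchers)
    top = counts.most_common(top_n)
    return sum(n for _, n in top) / len(researchers)

# Toy pipeline: three of four researchers from one institution.
pipeline = [
    {"institution": "MIT"},
    {"institution": "MIT"},
    {"institution": "MIT"},
    {"institution": "EPFL"},
]
print(top_share(pipeline, "institution", top_n=1))  # → 0.75
```

The value of the exercise is not the arithmetic but the decision to run it at all: concentration is invisible when candidates are reviewed one at a time, and obvious the moment the pipeline is treated as a distribution.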
We had not decided to ignore these places. We had simply never seen them.
The second pattern was a structural gap invisible at the level of individual evaluation. The pipeline contained seventeen chemistry and synthesis researchers, but all four faculty-level researchers were computational or theoretical. The pipeline could model a catalyst but could not synthesize one. For any closed-loop experimental program, this was a binding constraint, the equivalent of an engineering team that can design a bridge but has never poured concrete. The gap was invisible as long as candidates were evaluated one at a time and became obvious the moment you looked at capability across the pipeline as a whole. We started scouting experimental synthesis groups the following week.
The third was false depth. Seventy percent of researchers carried two or more domain tags, which appeared to indicate strong cross-disciplinary coverage. But when we applied a stricter definition, requiring sustained work at the intersection and genuine translation between fields, the number collapsed to six genuine bridges between machine learning and the physical sciences. The tags measured adjacency, not fluency.
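The adjacency-versus-fluency distinction can be made operational. The sketch below encodes one hypothetical rule, not Atlas’s actual criterion: a researcher counts as a bridge only if she has published at the intersection of both fields in several distinct years, rather than touching the second field once:

```python
def classify(papers, field_a, field_b, min_years=3):
    """Classify a researcher relative to two fields.

    `papers` is a list of (year, set_of_field_labels) tuples.
    Returns "bridge" for sustained work at the intersection,
    "adjacent" for any contact with both fields, "single" otherwise.
    `min_years` is an illustrative threshold.
    """
    intersection_years = {
        year for year, fields in papers
        if field_a in fields and field_b in fields
    }
    touches_a = any(field_a in fields for _, fields in papers)
    touches_b = any(field_b in fields for _, fields in papers)

    if len(intersection_years) >= min_years:
        return "bridge"
    if touches_a and touches_b:
        return "adjacent"
    return "single"

# Three years of work at the intersection: a bridge.
sustained = [(2018, {"ml"}), (2019, {"ml", "chem"}),
             (2020, {"ml", "chem"}), (2021, {"ml", "chem"})]
print(classify(sustained, "ml", "chem"))  # → bridge

# One crossover paper: a tourist, not a translator.
one_off = [(2018, {"ml"}), (2019, {"ml", "chem"})]
print(classify(one_off, "ml", "chem"))  # → adjacent
```

Any rule this simple will misclassify edge cases; its purpose is to show why tag-counting overstated cross-disciplinary depth by an order of magnitude.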
The fourth was a career-stage imbalance. Seventy-six percent of the pipeline was early-career. This carried real advantages in flexibility and freedom from institutional assumptions, but limited coverage of researchers who had owned multi-year programs and mentored others through failure. Certain forms of judgment only emerge from years of living with the consequences of decisions.
These are modest findings, the product of a hand-drawn map rather than a satellite survey. Atlas cannot yet track trajectory over time, surface theory of change, detect turns, or identify missing collaborations at global scale.
But even this early map revealed the shape of our own search. A search built on reputation, networks, and institutional prestige had reproduced the geography and biases of those networks exactly. We were seeing the shadow of our own methods projected onto the landscape, not the landscape itself.
The question Atlas asks is what becomes visible when you look at the landscape as a whole rather than at individual researchers against a checklist.
Beyond what we have already mapped lies what we have not: the dark landscape. 17. The dark landscape is the talent that exists but has never been seen by any scouting system. No database indexes it. No network reaches it. It is dark not because it is dim, but because no instrument has been pointed at it. Researchers in Taipei, São Paulo, Nairobi, Kraków, whose trajectories are as interesting as anyone in our pipeline but who exist outside the networks our scouts inhabit. No conference brought them to our attention. No professor recommended them. No citation trail led to their work. The instrument that can see them does not yet exist, and building it is what remains.
The Choice
The microscope encoded a belief that the very small was consequential. The radio telescope encoded a belief that the universe spoke in wavelengths the eye could not hear. Every instrument begins as a philosophical commitment about what is worth seeing, and what a society chooses to build instruments for reveals what it values.
Atlas encodes the beliefs Kelly acted on, the ones current metrics were never designed to capture: that where a researcher is headed matters more than where she has been, and that what someone is becoming tells you more than what they have accumulated.
Episteme’s Atlas is calibrated to find someone whose trajectory is more legible than their CV, whose question has kept them working for years past the point where a rational career optimizer would have moved on, and whose work crosses disciplinary boundaries because the problem they care about does not respect them.
Every one of these calibrations could be wrong. Every instrument begins miscalibrated and improves by being used, pointed at the world, corrected by what it finds. When a researcher thrives, Atlas learns what signals predicted that success. When a collaboration produces a breakthrough, Atlas learns what the preconditions looked like. Over time, the instrument refines its own model of what matters, which is what separates it from a database. Atlas is Episteme’s instrument, encoding one institution’s convictions about what makes a scientist worth finding.
In Constellations of Borrowed Light, 18. William Blair, “Constellations of Borrowed Light,” Borrowed Light Collective, 2026. borrowedlight.org. we argued that scientific knowledge lacks the infrastructure to arrive. But knowledge travels through people. Before you can build infrastructure for knowledge to arrive, you need to find the people who will carry it.
Finding them is only half the work. The researchers Atlas is built to find, the ones who cross fields, who pursue the long shot, whose most important work is too early to publish, are the ones traditional institutions fail. Academia asks them to spend a third of their time writing grant proposals, then funds the safest version of their ideas. Industry offers resources but demands quarterly results. Episteme exists because a researcher carrying an idea across disciplinary boundaries deserves an institution that will stay with her while the work unfolds, without grant writing, teaching loads, or fundraising. The collisions that produce breakthroughs happen under one roof, by design, but only if you can find the right people.
Imagine a materials scientist in Kraków. Her h-index is modest. Her institution is not one the scouts visit. But her trajectory has just turned: her last two papers draw on techniques from computational neuroscience, methods for modeling ion transport that no one in battery research has thought to apply. She built the simulation framework herself; the code is on GitHub, used by three other labs. Her personal statement, buried on a university webpage no recruiter has read, describes the question she has spent four years circling: whether the principles governing ion channels in neurons could unlock the lithium dendrite problem, unsolved for fifty years.
No existing system sees her. A keyword-based system reads the field crossing as drift. An instrument calibrated for trajectory, for turns, for the builder’s temperament, reads it as what it is: someone assembling tools from two worlds to solve a problem that neither world can solve alone.
The instrument is not yet built. But the infrastructure is real: the enrichment pipeline, the data sources, the models that can read a researcher’s work and begin to perceive the patterns a human scout would recognize. What remains is to point it at the landscape and learn whether it can see what we believe is there.
Eight million researchers work across thousands of institutions in nearly every country on earth. Somewhere among them is the next Ramanujan, the next Karikó, the next graduate student whose tools belong in a field she has not yet entered. We built Atlas to find them.