About Paperstars
We became scientists to do science. Not to play the publication game.
Right now, scientific success is tied to where you publish, not what you found. Good science gets buried. Mediocre or flawed results get amplified. A citation tells you a paper exists, but not whether it's any good. Smith et al., 2004 means the same whether the work is brilliant or deeply flawed.
And yet these same citation metrics determine funding opportunities and career progression. The number of times a paper is cited, regardless of its quality, shapes who gets hired, who gets funded, and whose science gets built on. We are collectively advancing knowledge on a foundation we can't fully trust.
We’re building the infrastructure to change that.
What We Do
Paperstars is a community-driven platform for evaluating published scientific papers - think Goodreads, but for science.
Researchers anonymously rate and review papers using a structured, weighted system that focuses on what actually makes science good: rigorous methods, honest statistics, transparent data, and appropriate conclusions. Not journal prestige. Not citation counts. The work itself.
Why It Matters
The metrics we use - citations, impact factors, h-index - cannot distinguish good science from flawed science. A methodologically rigorous study and a deeply flawed one are cited the same way, ranked by the same systems, and rewarded with the same opportunities. High visibility is mistaken for high quality. And because these metrics determine funding and careers, we are all collectively incentivised to optimise for the wrong things rather than simply doing good science.
This is where you come in.
You already have opinions about the papers in your field. Which findings are solid. Which results don't hold up. Which work is brilliant but invisible. Right now those opinions disappear - shared at journal clubs, annotated in your reference manager, then lost.
In a world where AI can summarise any paper in seconds, what it cannot do is apply human reasoning. It cannot spot the overclaimed conclusion, the convenient limitation, the methodology that doesn't quite hold up. That takes a scientist who actually read the work. Your expert judgement is more valuable than ever, and Paperstars gives it somewhere permanent to live.
Rate a paper in a minute. Your assessment joins a growing community signal that makes good science findable and flawed science visible, for you and for everyone building on it.
Paperstars fixes the signal: community evaluation of the work itself, ongoing and independent of where it was published.
Our long-term goal: citations that include a Paperstars rating.
Smith et al., 2004 becomes Smith et al., 2004 (4.5/5)
Quality made visible, permanently.
How It Works
- Find the paper you want to rate - search by title, DOI, or author
- Rate it across seven criteria
- Your ratings generate a weighted star score (1–5), which you can manually adjust to reflect your overall impression.
- Leave a short review - constructive, anonymous, and moderated for professionalism, not viewpoint.
| Component | What We're Asking | Weight |
|---|---|---|
| Title | Was the title appropriate, slightly misleading, or exaggerated? | 1 |
| Methods | Were the methods robust and appropriate for the question? | 4 |
| Statistical Analysis | Was the statistical analysis appropriate, or were there signs of p-hacking or misuse? | 4 |
| Data Presentation | Were the figures clear and appropriate, or potentially misleading? | 3 |
| Discussion | Were the results interpreted appropriately and reasonably? | 2 |
| Limitations | Were the limitations clearly acknowledged and discussed? | 2 |
| Data Availability | Were the data and code shared openly and transparently? | 4 |
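The weighted score described above can be sketched in a few lines. This is a minimal illustration only - the weights come from the table, but the function and field names are hypothetical, not Paperstars' actual implementation, and it omits the manual adjustment step:

```python
# Hypothetical sketch: combine seven per-criterion ratings (each 1-5)
# into one weighted star score, using the weights from the table above.

WEIGHTS = {
    "title": 1,
    "methods": 4,
    "statistical_analysis": 4,
    "data_presentation": 3,
    "discussion": 2,
    "limitations": 2,
    "data_availability": 4,
}

def weighted_star_score(ratings: dict[str, int]) -> float:
    """Weighted average of the criteria, rounded to one decimal place."""
    total_weight = sum(WEIGHTS.values())  # 20
    weighted_sum = sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)
    return round(weighted_sum / total_weight, 1)

# Example: strong methods and statistics, weak data sharing.
ratings = {
    "title": 4,
    "methods": 5,
    "statistical_analysis": 5,
    "data_presentation": 4,
    "discussion": 3,
    "limitations": 3,
    "data_availability": 2,
}
print(weighted_star_score(ratings))  # → 3.8
```

Because Methods, Statistical Analysis, and Data Availability carry the highest weights, weaknesses there pull the score down more than a clumsy title or thin discussion would.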
All reviewers are anonymous. Verified academics receive a badge. Authors can claim their papers and respond.
→ Read more about the full rating system
Become an Ambassador
Believe science should be judged on quality, not metrics? So do we. Our ambassador programme connects researchers who want to help build the community quality signal - through journal clubs, outreach, and advocacy.
→ Learn more about becoming an ambassador here
Our Story
Paperstars was founded by researchers who were frustrated with the status quo. It grew out of years of conversations between two friends who couldn't stop talking about the gap between why they became scientists and what academia actually rewards. We didn't set out to be reformers. We just couldn't look away from a system that measures the wrong things - and decided to do something about it.
We're two nerds who met in the front row of an undergraduate Biomedical Science lecture theatre at the University of Aberdeen, bonding over Doctor Who, Radio 4 comedy, dogs, and all things science. This is what happened next.
The Team
Dr. Danny Schnitzler is a postdoctoral researcher and PhD alumna of the University of Edinburgh, currently researching depression. Alongside her research, she develops educational tools for scientific literacy and is co-founder of Paperstars, born out of a deep belief that quality science deserves better tools.
When not working, she’s on adventures around Edinburgh with her demon dogs Sid and Harry, or embarking on creative projects that may or may not get finished.
Dr. Jenna Stephen is a researcher with a background in mitochondrial and cancer cell biology, having worked across multiple labs and published papers on mitochondrial proteomics during her PhD at the University of Cambridge. She is currently training in science communication and public engagement, bringing a deep commitment to transparency and rigour, and the belief that science should be accessible to everyone.
When not working, she’s exploring the Cairngorms with her collie Beanie, attempting new crafts, or eyeing the tuba that hasn’t been played in months.