What’s Wrong With the H-Index and Impact Factor?


In academia, it’s easy to feel like your worth boils down to two numbers: your H-index and the impact factors of the journals you publish in. These metrics are everywhere—from grant applications and job interviews to casual conversations in the lab.

But here’s the hard truth: these numbers are broken, and they’ve been distorting science for decades.

🧮 What Are These Metrics, Anyway?

The H-Index

Your H-index attempts to quantify your scientific output and impact in a single number. An H-index of 12 means at least 12 of your papers have been cited at least 12 times each, and 12 is the largest number for which that holds.

Sounds neat, right? But it:

  • Doesn’t account for how or why something was cited  
  • Favors older researchers and long careers  
  • Can be artificially inflated through self-citations or citation circles
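To see how mechanical the number is, here is a minimal sketch in Python (with invented citation counts) of how an H-index is computed from per-paper citation totals:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with five papers:
print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with at least 4 citations each)
```

Sort, count, stop: the calculation never looks at what any of those papers actually say.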

The Impact Factor (IF)

A journal’s impact factor for a given year is the number of citations received that year by articles the journal published in the previous two years, divided by the number of citable items it published in those two years—in other words, roughly the average citations per recent article.
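As a back-of-the-envelope sketch (all numbers invented), the calculation boils down to a single division:

```python
# All numbers invented. The 2-year impact factor for year Y is:
#   citations in Y to items published in Y-1 and Y-2,
#   divided by the number of citable items published in Y-1 and Y-2.
citations_to_recent_articles = 5400   # citations this year to the last two years' articles
citable_items_published = 180         # articles the journal published in those two years

impact_factor = citations_to_recent_articles / citable_items_published
print(round(impact_factor, 1))  # -> 30.0
```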

In practice, it means:

  • Publishing in a journal with an IF of 30 is considered more “prestigious” than one with an IF of 3  
  • Hiring and promotion committees often use it as shorthand for quality

But again, it has serious flaws.

🚩 The Problems

1. Quantity Over Quality

Both the H-index and impact factor are citation-based metrics. But citations ≠ quality. A paper can be highly cited because:

  • It introduced a useful method  
  • It made a controversial claim  
  • It was wrong, and many papers refuted it

Bad science can still rack up citations. Meanwhile, slow, rigorous work might be overlooked entirely.

2. Gaming the System

It’s not hard to boost these metrics:

  • Self-cite frequently  
  • Join a citation circle with colleagues  
  • Aim for “hot topics” that get quick attention, rather than lasting impact

Some journals have even been caught manipulating impact factors by encouraging authors to cite recent papers from the same journal.
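As a toy illustration (all numbers invented), simply stripping self-citations out of a small citation record cuts this researcher’s H-index in half:

```python
def h_index(citations):
    # Same helper as the sketch above.
    counts = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(counts, start=1) if c >= rank), default=0)

# Invented record: per-paper citation totals, and how many of those are self-citations.
papers = [
    {"total": 9, "self": 6},
    {"total": 7, "self": 5},
    {"total": 6, "self": 4},
    {"total": 5, "self": 3},
    {"total": 2, "self": 0},
]

with_self = h_index([p["total"] for p in papers])
without_self = h_index([p["total"] - p["self"] for p in papers])
print(with_self, without_self)  # -> 4 2
```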

3. Bias and Inequity

These metrics disadvantage:

  • Early-career researchers, who haven’t had time to accumulate citations  
  • Researchers in low-resource settings, who may not have access to “high-impact” publishing pathways  
  • Work published in languages other than English, or in niche but important fields

They reward visibility, not value.

4. They Don’t Reflect Actual Use

Most people don’t read or cite papers based on the H-index of the author. They cite based on:

  • Whether the paper helped them  
  • Whether it supports (or challenges) their point  
  • What’s easy to find or already well-known

And yet we treat these metrics as if they are objective, fair, and meaningful. They're not.

🔍 What We Actually Need

Researchers, funders, and institutions need better ways to assess research—ones that look at:

  • The quality of the work, not just where it was published  
  • The integrity of the methods  
  • The usefulness of the results to the broader scientific community  
  • The openness and reproducibility of the findings  

That’s where qualitative metrics come in: ratings, reviews, and community-driven feedback.

⭐ Enter: Paperstars

At Paperstars, we believe that science deserves better than citation counts.

We’re building a platform where researchers can:

⭐ Rate papers from 1 to 5 stars based on actual quality  

💬 Leave short, structured reviews (anonymous but academically verified)  

🧪 Reward open data and reproducibility  

🧭 Help shift the culture from “publish or perish” to “publish well, and be proud of it”

📚 Want to read more about this?

Here are some pointers to get you started:

🧠 Final Thought

Your H-index doesn’t define you.  

The impact factor of a journal doesn’t define your work.  

What matters is rigour, insight, clarity, and honesty.

It’s time to stop chasing numbers—and start valuing research for what it truly is.

✨ Want to help reshape the way we evaluate science?

👉 Sign up for the Paperstars newsletter  

🦋 Follow us on Bluesky