How might we create credible systems of measuring the reputation of research as an alternative to journal title? #4

Open
char-siuu-bao opened this issue Oct 26, 2017 · 12 comments

Comments

char-siuu-bao (Contributor) commented Oct 26, 2017

Confused? New to GitHub? Visit the GitHub help page on our site for more information!

At a glance

Submission name: How might we create credible systems of measuring the reputation of research as an alternative to journal title?

Contact lead: formerly @jpolka, who is now working on #58

Issue area: #OpenResearch

Region: #Global

Issue Type: #Challenge

Description

// More info coming soon!

How can others contribute?

// More info coming soon!

end.

This post is part of the OpenCon 2017 Do-A-Thon. Not sure what's going on? Head here.

VorontsovIE commented Nov 4, 2017

This is a great challenge. I'd like to join it, at least in the GitHub discussion.
We can highlight at least two different ways to move this forward. First, we can discuss a system that accompanies the journal reputation system, where journal reputation is just one of several features used to assess the quality of individual research.
Another promising way is to discuss a publication system that is entirely independent of journals. Such a reputation system may also demand a different peer-review system.
One example of how we could achieve this is collaborative post-publication review, with reviewer reliability scores assigned automatically according to each reviewer's expertise. The problem with such a system is how to find readers for a paper that has not yet been reviewed.
It's also questionable whether a journal's reputation actually matters for the discoverability of a paper in the era of Google Scholar. Do people actually read journals, or do they find papers with a search engine? And how do readers decide whether or not they will read a paper they find? We need to base the goals of a reputation system on the values that actually move science forward.

@pederisager

First of all, terrific challenge! I would love to contribute on GitHub, and probably at the conference as well.

I think journal title might be a genuinely bad predictor of research quality. For example, Nature, Science, and PNAS do not necessarily do any better than other journals on important quality metrics like reproducibility, open materials in published work, and publication bias toward significant results. I have more hope for the related popularity metric of citation counts, but I believe it should be just one quality metric among many.

Some interesting alternatives to impact factors and citation rates have been discussed. See for example Fraley and Vazire's idea for a quality metric based on sample size: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0109019
Another interesting initiative is the badge program started by the Open Science Framework to symbolically reward open science practices, which several journals (at least in psychology) are now starting to endorse: http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002456

I totally agree that we need to base a reputation system on meaningful values of scientific progress. A challenge, of course, is finding metrics that cannot easily be abused or "hacked". I think a plurality of metrics would be a safeguard against this.

SamanthaHindle commented Nov 8, 2017

This is a really important challenge/issue in science. My two cents would be to promote the use of citation metrics (as mentioned above), data re-use metrics (via dataset DOI tracking), and impact statements (self-reported statements of a researcher's contribution to (open) science).
I know the focus of this challenge is to discuss metrics for measuring research reputation, but I wonder if we can open up this discussion to how we can measure a researcher's overall contribution to open science (outreach, education, mentoring, etc.) rather than focusing only on research/publication output and impact. The goal would be to promote a culture where we value and hire greater numbers of well-rounded scientists (good communicators, mentors, educators, researchers), which would encourage researchers to develop skills away from the bench.

@VorontsovIE

@SamanthaHindle Oh, that adds one more dimension! We can focus on measuring impact long after publication. Another possibility is to estimate research quality just after publication, based on the authors' reputation and field of expertise. These serve quite different goals: giving credit to the authors of important papers, versus highlighting papers worth reading (maybe even at the preprint stage).

@npscience

Count me in for both online discussion and in-person!

@asbtariq

Me too! But only online this time.

jpolka commented Nov 13, 2017

I'll be working on #58 but happy to jump in on this from time to time!

Bubblbu (Member) commented Nov 13, 2017

@pederisager

We are currently live and can be found at the Goethe hall, in the front, to the left!

jhk11 commented Nov 13, 2017

I dislike the impact factor as much as the next person at OpenCon, but if we (scientists, funders, reviewers, etc.) insist on having a one-number metric per journal as an indication of scientific quality/relevance/interest, I would suggest a more conservative way to calculate the "impact".

For example, because scientists have broadly accepted (and sometimes obsess over) the probability cutoff of P < 0.05 in statistical analyses, I suggest we approach the impact factor similarly. We could look at the papers at the low end of the citation distribution and find the citation count below which the lowest 5% of the journal's papers lie. This number would effectively mean that if you pick a paper at random, there is a 95% chance that its impact will be higher than that number (for the period used to calculate the citation counts). I think this makes much more sense than using the average. I will post some examples later.

Another thing I'd recommend is not to restrict the metric to citation counts from the last 2 years. 5-year and older impact factors are sometimes used, but I would argue the benefit of a paper often extends much further. A paper that makes a huge splash in the first six months and gets virtually no citations afterwards seems much less impressive to me than one that starts with a whimper for two years but is still relevant after 20 years.
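
To make this concrete, here is a minimal Python sketch of such a percentile-based journal metric. The function name, the 5% cutoff parameter, and the toy citation counts are illustrative assumptions added here, not part of the original comment.

```python
import numpy as np

def conservative_impact(citation_counts, pct=5):
    """Citation count below which the lowest `pct` percent of a journal's
    papers fall, over whatever citation window was used to collect the data."""
    return float(np.percentile(citation_counts, pct))

# Toy, made-up citation counts for one journal's papers:
citations = [0, 1, 1, 2, 3, 3, 4, 5, 8, 12, 15, 20, 40, 120]

# A randomly chosen paper then has roughly a 95% chance of having at least
# this many citations, which is the "conservative" reading suggested above.
print(conservative_impact(citations))
```

Unlike the mean-based impact factor, this lower-tail number is not inflated by a handful of very highly cited papers.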

@pederisager

We talked about alternative (paper-based) metrics for assessing impact. We mainly addressed this from a psychological scientist's perspective, although people from other disciplines contributed as well. We thought about including different measures of openness (data, materials, access, ...) as measures of credibility, but also talked about the need to assess the quality and/or importance of papers. We spent some time talking about post-publication peer review and the possibility of commenting on and rating papers. That way, we could crowd-source the peer review, and reviewers could review only the parts they are "experts" on. We made an attempt to create a questionnaire for researchers from different disciplines to get weights for the importance of different concepts; a rough sketch of how such weights might be combined into a score follows the list below.

The concepts/possible questions we came up with so far concern:
open data
open access
open materials
open analysis scripts
open experimental scripts/procedures
open source software used
preregistration
quality
statistical power (maybe)/statcheck check
technical quality (are methods suitable for research question?)
...
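
As a rough illustration of how questionnaire-derived weights could be turned into a paper-level score, here is a minimal Python sketch. The indicator names, the weight values, and the simple weighted-sum scoring are all assumptions added for illustration; they were not agreed in the discussion.

```python
# Hypothetical weights, e.g. averaged from a cross-disciplinary questionnaire.
WEIGHTS = {
    "open_data": 0.20,
    "open_access": 0.15,
    "open_materials": 0.15,
    "open_analysis_scripts": 0.15,
    "preregistration": 0.15,
    "adequate_statistical_power": 0.20,
}

def credibility_score(indicators: dict) -> float:
    """Weighted sum of per-paper 0/1 indicators, normalised to the 0-1 range."""
    total = sum(WEIGHTS.values())
    return sum(WEIGHTS[k] * indicators.get(k, 0) for k in WEIGHTS) / total

# Example: a paper with open data and a preregistration, nothing else declared.
paper = {"open_data": 1, "preregistration": 1}
print(round(credibility_score(paper), 2))  # 0.35
```

A composite like this also fits the earlier point about a plurality of metrics: gaming any single indicator moves the score far less than gaming one number such as the journal impact factor.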

matg20 commented Nov 14, 2017

This is the most important challenge to assess. We should develop new indicators that can be used at the national policy level, and also in rankings and other research assessment measures. New methodological indicators need strong credibility and legitimacy.
