How might we create credible systems of measuring the reputation of research as an alternative to journal title? #4
Comments
It's a great challenge. I'd like to join it, at least in the GitHub discussion.
First of all, terrific challenge — I would love to contribute on GitHub, and probably at the conference as well! I think journal title might be a genuinely bad predictor of research quality. For example, Nature, Science, and PNAS do not necessarily do any better than other journals on important quality indicators like reproducibility, open materials in published work, and publication bias toward significant results. I have more hope for the related popularity metric of citation counts, but I believe it needs to be one quality metric among many. Some interesting alternatives to impact factors and citation rates have been discussed; see, for example, Fraley and Vazire's idea for a quality metric based on sample size: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0109019 I totally agree that we need to base a reputation system on meaningful values of scientific progress. A challenge, of course, is finding metrics that cannot easily be abused or "hacked". I think a plurality of metrics would be a safeguard against this.
This is a really important challenge/issue in science. My two cents would be to promote the use of citation metrics (as mentioned above), data re-use metrics (using dataset DOI tracking), and impact statements (self-reported statements of a researcher's contribution to (open) science).
@SamanthaHindle Oh, we get one more dimension! We can focus on measuring impact long after publication. Another possibility is to estimate research quality based on the authors' reputation and field of expertise just after publication. These aim at quite different goals: giving credit to authors of important papers, or highlighting papers worth reading (maybe even at the preprint stage).
Count me in for both online discussion and in-person!
me too!
I'll be working on #58 but happy to jump in on this from time to time!
Relevant blog post by Björn Brembs
We are currently live and can be found at the Goethe hall, in the front, to the left!
I don't like the impact factor any more than the next person at OpenCon, but if we (scientists, funders, reviewers, etc.) insist on having a one-number metric per journal as an indication of scientific quality/relevance/interest, I would suggest a more conservative way to calculate the "impact". For example, because scientists have broadly accepted (and sometimes obsess over) the probability cutoff of P<0.05 in statistical analyses, I suggest we approach the impact factor similarly. We could look at the papers at the low end of the citation distribution and find the citation count below which the lowest 5% of the journal's papers lie. This number would effectively mean that if you pick a paper at random, there is a 95% chance its impact will be higher than that number (for the period used to calculate the citation counts). I think this makes much more sense than using the average; I will post some examples later. Another thing I'd recommend is to not restrict the metric to citations in the last 2 years. 5-year and older impact factors are sometimes used, but I'd argue the benefit of a paper often extends much further. A paper with a huge splash in the first six months and virtually no citations afterwards seems much less impressive to me than one that started with a whimper for two years but continues to be relevant after 20 years.
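To make the percentile idea above concrete, here is a minimal sketch in Python. It assumes per-paper citation counts for the journal over the chosen window are already available; the function name conservative_impact and the sample numbers are purely illustrative, not part of any existing tool.

```python
import numpy as np

def conservative_impact(citation_counts, percentile=5):
    """Return the citation count below which the lowest `percentile` percent
    of the journal's papers fall. A randomly chosen paper then has roughly a
    (100 - percentile)% chance of being cited more than this number."""
    counts = np.asarray(citation_counts, dtype=float)
    return np.percentile(counts, percentile)

# Hypothetical journal: a long tail of rarely cited papers plus a few outliers.
# The mean (classic impact-factor style) is pulled up by the outliers,
# while the 5th-percentile figure stays conservative.
counts = [0, 1, 1, 2, 3, 3, 4, 5, 8, 12, 40, 250]
print("mean:", np.mean(counts))                      # ~27.4, inflated by outliers
print("5th percentile:", conservative_impact(counts))  # ~0.6
```

The contrast between the two numbers illustrates the point of the comment: the average is dominated by a handful of highly cited papers, whereas the percentile figure describes what most papers in the journal actually achieve.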
We talked about alternative (paper-based) metrics for assessing impact. We mainly addressed this from a psychological scientist's perspective, although people from other disciplines contributed as well. We thought about including different measures of openness (data, materials, access, ...) as measures of credibility, but also talked about the need to assess the quality and/or importance of papers. We spent some time talking about post-publication peer review and the possibility of commenting on and rating papers. That way, we could crowd-source the peer review, and reviewers could review only the parts they are "experts" on. We made an attempt to create a questionnaire for researchers from different disciplines to get "weights" of importance for different concepts. The concepts/possible questions we came up with so far concern:
The most important challenge is assessment. Develop new indicators that can be part of national-level policy as well as rankings and other research measures. Credibility and legitimacy need to be strong in new methodological indicators.
Confused? New to GitHub? Visit the GitHub help page on our site for more information!
At a glance
Submission name: How might we create credible systems of measuring the reputation of research as an alternative to journal title?
Contact lead: Formerly @jpolka, but I will be working on #58
Issue area: #OpenResearch
Region: #Global
Issue Type: #Challenge
Description
// More info coming soon!
How can others contribute?
// More info coming soon!
end.
This post is part of the OpenCon 2017 Do-A-Thon. Not sure what's going on? Head here.