Scholarly Journal
Scholarly Production
I can’t imagine returning to the rigidities of the old model of the hegemony of the top law reviews and few if any other mechanisms by which to get one’s ideas out. But I also think that we’re in a very confused state in which, outside of law and economics, the standards for what is good and bad scholarship are not very clear. Those standards are not clear, at least to me – and that leaves aside the further question of whether even “good” scholarship has any use outside the academy. For that matter, the law and economics scholarship sometimes seems to operate according to a Standard of Professional Economist Envy, which is an odd way of outsourcing one’s standards.
Actually, getting practical, I think a fundamental problem with the current model of scholarly production is that it favors writing over reading. I don’t think legal academics spend much time reading, let alone pondering or puzzling, anymore. The nature of the hierarchical signaling game says that quantity matters more than anything else, because we don’t have common grounds for evaluating quality – law and economics aside – and many of the remaining indicators of scholarly impact, crudely social-sciencey but not really, favor quantity. It’s like an SAT test that imposes no penalties for wrong answers, for one thing. There are no wrong answers; just big piles of stuff to compare to each other. The shift in the economics of book publishing means that lots of things that are just a couple of articles strung together appear as books, which is fine if one likes to have the book in hand – but, let’s be honest, is there any legal academic today who doesn’t have a book contract with Oxford USA?
So I think people – including me – conclude that writing is more important than reading as a career strategy, and prioritize it. Or that the connection between the two is a lot less weighted to the reading side than it once was. Once there is a bias in favor of writing over reading, the writing that is out there contains relatively less stuff that is worth reading, and on it goes. It is true that the nature of law, and the fact that it has a hugely “conventional” aspect to it, means that inevitably legal scholarship moves on and very little of it can hope to have a shelf life. That’s different, though, from what I think is going on today, in which people are struggling to get their voices heard and which produces an increasing amount of static as a result. Positive feedback loop.
I have, of course, no evidence for any of this (which raises another point – is it possible that the current trends in academic research, toward social science empiricism, for example, are more expensive to produce than traditional legal scholarship, but also more useful even if more expensive?), so I’d be curious particularly what legal academics think goes on in scholarly production. It might just be that I’ve confessed my own solitary scholarly failing, That I Don’t Read Enough, and that everyone else reads day and night before producing new scholarly works. But I sort of doubt it. Also, this is just my musing, and my views here are particularly subject to revision.
Author: Kenneth Anderson
Journal Metrics
The Impact Factor, devised by Eugene Garfield, is the most widely used method of measuring a journal’s influence. It is a value calculated over time: the number of citations received in a given year to items the journal published in the previous two years, divided by the number of articles it published in those same years. To calculate the Impact Factor for year (n+1), all of a journal’s citations for the preceding n years must be available, so the computation involves significant overhead. In spite of its drawbacks, the Journal Impact Factor remains a popular way to measure the scientific influence of journals, and most of its variants likewise require citation data for the preceding few years from all indexed journals. Thomson Reuters acquired the Institute for Scientific Information, founded by Eugene Garfield, and is generally credited with introducing the widespread use of the Impact Factor as a metric through its annual Journal Citation Reports, which list the computed IF values for the journals it tracks.
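As a rough illustration of the calculation, the sketch below computes a two-year Impact Factor from per-year citation and article counts. The function name and all numbers are hypothetical, not drawn from any Journal Citation Reports data.

```python
# Illustrative sketch only: the two-year Impact Factor for year Y is the
# number of citations received in year Y to items published in years Y-1
# and Y-2, divided by the number of citable items published in those years.

def impact_factor(citations_to_year, citable_items_per_year, year):
    """citations_to_year: dict {publication year: citations received in `year`}
    citable_items_per_year: dict {publication year: number of citable items}"""
    cited_years = (year - 1, year - 2)
    citations = sum(citations_to_year.get(y, 0) for y in cited_years)
    items = sum(citable_items_per_year.get(y, 0) for y in cited_years)
    return citations / items if items else 0.0

# Hypothetical numbers: 180 + 120 citations to the 2013/2012 volumes,
# which together contained 150 citable items -> IF(2014) = 300 / 150 = 2.0
print(impact_factor({2013: 180, 2012: 120}, {2013: 90, 2012: 60}, 2014))
```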
[A minimum citation window of three years is used to maintain uniformity and parity between the trends of low-impact and high-impact research areas. The metric counts only articles that have been assessed by a common panel through peer review, which is meant to provide a fair measurement of a journal’s impact and diminish the chance of manipulation. Nevertheless, the IF is subject to manipulation, and this has drawn heavy criticism from research scholars over the years. Policies set by a journal’s editorial board can strongly influence the Impact Factor, and this can be exploited to artificially boost the measure. Coercive citation, in particular, is a practice in which an editor pressures authors to add redundant citations to articles published in the editor’s own journal. Such practices, while frowned upon, are still used to inflate a journal’s Impact Factor. This weakens the Impact Factor as an index, because it can readily be tilted in a journal’s favor by altering citations to and from the journal.]
Another popular source is Elsevier’s Scopus, which indexes a large collection of peer-reviewed scholarly journals across many domains. Scopus uses its database to provide a further journal metric, ranking journals through the SCImago Journal & Country Rank (SJR) portal. The SJR rank is a score evaluated by Scopus from the past five years’ data for a comparatively small set of journals. It is claimed that SCI (Thomson Reuters) is a little more selective than Scopus. […]
Neelam Jangid, Snehanshu Saha, Siddhant Gupta, and Mukunda Rao J (Jangid et al., 2015; Jangid et al., 2014) introduced a new metric, the Journal Influence Score (JIS), which is calculated by applying Principal Component Analysis (PCA) and multiple linear regression (MLR) to citation parameters extracted and processed from scholarly articles in different domains, yielding a score that gauges a journal’s impact. The higher the score, the more the journal is valued and accepted. The resulting journal rankings were compared with the ranks from SJR, which internally uses Google’s PageRank algorithm; the results showed minimal error, and the model performed reasonably well.
[The emergence of JIS is attributed to the fact that, unlike SJR (see below), JIS is conceptually lightweight: it requires no storage of historical citation data and is computationally faster.]
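The JIS papers describe a pipeline of PCA followed by multiple linear regression over citation parameters. The sketch below shows what such a pipeline can look like in scikit-learn; the feature set, sample data, and target scores are invented for illustration and are not the authors’ actual parameters or implementation.

```python
# Minimal PCA + multiple-linear-regression sketch (illustrative data only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Rows = journals; columns = hypothetical citation parameters
# (e.g. total cites, cites per document, self-citation rate).
X = np.array([
    [1200, 3.1, 0.12],
    [400,  1.2, 0.30],
    [2500, 4.8, 0.05],
    [800,  2.0, 0.18],
])
# Placeholder target scores used to fit the regression.
y = np.array([2.9, 0.9, 4.5, 1.7])

model = make_pipeline(PCA(n_components=2), LinearRegression())
model.fit(X, y)

# "Influence score" predicted for a new journal's citation parameters.
new_journal = np.array([[1500, 3.5, 0.10]])
print(model.predict(new_journal))
```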
A general drawback of the models above is that they offer no common measure of journal influence across domains. Practices such as self-citation, the easiest way to inflate a journal’s citation count, also need scrutiny. Source Normalized Impact per Paper (SNIP), a measure proposed by Henk F. Moed (Moed, 2010), is the ratio of a journal’s citation count per paper to the citation potential of its subject field. Because it is based on citation relationships, it allows direct comparison of journals in different subject domains. SNIP counts only citations from peer-reviewed papers to other peer-reviewed papers.
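As a toy illustration of the SNIP idea, the ratio below normalizes a journal’s raw citations per paper by an assumed citation potential for its field; both numbers are made up and do not reflect the full normalization described by Moed (2010).

```python
# Toy SNIP-style ratio: a value of 1.0 means the journal is cited about as
# often per paper as is typical for its subject field (numbers are invented).
raw_impact_per_paper = 4.0       # mean citations per paper for the journal
field_citation_potential = 2.5   # assumed typical citations per paper in its field

snip = raw_impact_per_paper / field_citation_potential
print(snip)  # 1.6 -> cited 60% more than the field average
```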
h-index (h)
The h-index was proposed by the physicist Jorge E. Hirsch in 2005 as a measure that considers not only the quantity of citations but also their quality: an entity has an h-index of h if h of its papers have each received at least h citations. The h-index addresses several of the concerns raised about other indicators and metrics used in scientometrics.
The Impact Factor considers the entire spectrum of articles within a journal. While this accounts for the quantity of citations used in the calculation, it overlooks the quality of the underlying publications. As mentioned in the previous section, the Impact Factor counts the total number of articles published in a journal, which cannot plausibly represent the quality of those articles; it is fallacious to assume that every article in a journal is “good” or “impactful”. The h-index instead considers a subset of this population: when articles are ranked by citation count, only those whose citation counts are greater than or equal to their positional index contribute to the metric.
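A minimal sketch of this computation, assuming the usual convention of ranking articles by citation count in descending order (the sample citation counts are hypothetical):

```python
# h-index: the largest rank h at which the h-th ranked article
# still has at least h citations.

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, count in enumerate(ranked, start=1):
        if count >= position:
            h = position
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three articles with >= 3 citations)
```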
While the h-index is a relatively useful metric, it does have limitations. First and foremost, it does not account for citations across the fields of a domain; citations within and across domains could reveal more about an article’s quality, but they are not considered in the computation of the h-index, which is a major disadvantage. The h-index can also be exploited through unfair practices such as self-citation. Consider a journal whose articles routinely cite one another: their citation counts rise, and those counts are likely to climb above the positional index, thereby increasing the h-index. This does not capture the notion of “quality” that makes the h-index an improvement over the Impact Factor.
A variation of the h-index was introduced by Google as part of the indicators developed for Google Scholar. Usually referred to as the h5-index, it restricts the calculation defined above to a complete five-year window of journal citation data.
SCImago Journal Rank (SJR)
The key principle that sets SJR apart from the other parameters is its inherent acknowledgment that citations are dynamic in nature and that no two citations are worth the same. Across the long list of parameters described above, it is evident that the quantity of citations a scholarly publication receives is not, on its own, sufficient to understand a journal’s influence; hence several metrics also take the quality of those citations into account when evaluating a journal’s impact.
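To illustrate why weighting citations by the prestige of the citing journal matters, the sketch below runs a simplified PageRank-style iteration over an invented journal-to-journal citation matrix. This is not the actual SJR algorithm, which applies further normalizations, but it conveys the underlying idea that a citation from a prestigious journal counts for more.

```python
# Simplified PageRank-style prestige iteration over a toy citation matrix.
import numpy as np

C = np.array([   # C[i, j] = citations from journal i to journal j (invented)
    [0, 10, 2],
    [5,  0, 1],
    [8,  4, 0],
], dtype=float)

damping = 0.85
n = C.shape[0]
transition = C / C.sum(axis=1, keepdims=True)   # each row: share of outgoing citations

prestige = np.full(n, 1.0 / n)                  # start from a uniform prestige vector
for _ in range(100):
    prestige = (1 - damping) / n + damping * (prestige @ transition)

print(prestige / prestige.sum())                # relative prestige of each journal
```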
Authors: Sudeepa Roy Dey, Archana Mathur, Gambhire Swati Sampatrao, Sandesh Sanjay Gade, Sai Prasanna M S [India]
Resources
See Also
Journal
Further Reading
1. Buela-Casal, G., Perakakis, P., Taylor, M., & Checa, P. (2006). Measuring internationality: Reflections and perspectives on academic journals. Scientometrics, 67(1), 45-65.
2. Perakakis, P., Taylor, M., Buela-Casal, P., & Checa, P. (2006). A neuro-fuzzy system to calculate a journal internationality index. In Proceedings of the CEDI symposium.
3. Moed, Henk F. (July 2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265-277.
4. Aviles, Frank Pancho, & Ramirez, Ivonne Saidé (2015). Evaluating the Internationality of Scholarly Communications in Information Science Publications. iConference 2015 Proceedings.
5. Jangid, Neelam, Saha, Snehanshu, Narasimhamurthy, Anand, & Mathur, Archana (2015). Computing the Prestige of a Journal: A Revised Multiple Linear Regression Approach. WCI, ACM Digital Library (accepted), Aug 10-13, 2015.
6. Jangid, Neelam, Saha, Snehanshu, Gupta, Siddhant, & Mukunda Rao, J. (2014). Ranking of Journals in Science and Technology Domain: A Novel and Computationally Lightweight Approach. IERI Procedia, Elsevier, Vol. 10, pp. 57-62.
7. Zupanc, Gunther K. H. (2014). Impact beyond the impact factor. Journal of Comparative Physiology A, 200, 113-116. Springer.
8. Waltman, Ludo, van Eck, Nees Jan, van Leeuwen, Thed N., & Visser, Martijn S. (2013). Some modifications to the SNIP journal impact indicator. Journal of Informetrics, 7, 272-285.
9. Crawford, Walt (July 2014). "Journals, 'Journals' and Wannabes: Investigating The List". Cites & Insights, 14:7.