Scholarly Journal

Scholarly Production

I can’t imagine returning to the rigidities of the old model of the hegemony of the top law reviews and few if any other mechanisms by which to get one’s ideas out. But I also think that we’re in a very confused state in which, outside of law and economics, the standards for what is good and bad scholarship are not very clear. Those standards are not clear, at least to me – and that leaves aside the further question of whether even “good” scholarship has any use outside the academy. For that matter, the law and economics scholarship sometimes seems to operate according to a Standard of Professional Economist Envy, which is an odd way of outsourcing one’s standards.

Actually, getting practical, I think a fundamental problem with the current model of scholarly production is that it favors writing over reading. I don’t think legal academics spend much time reading, let alone pondering or puzzling, anymore. The nature of the hierarchical signaling game means that quantity matters more than anything else, because we don’t have common grounds for evaluating quality – law and economics aside – and many of the remaining indicators of scholarly impact, crudely social-sciencey but not really, favor quantity. It’s like an SAT test that imposes no penalties for wrong answers: there are no wrong answers, just big piles of stuff to compare to each other. The shift in the economics of book publishing means that lots of things that are just a couple of articles strung together appear as books, which is fine if one likes to have the book in hand – but, let’s be honest, is there any legal academic today who doesn’t have a book contract with Oxford USA?

So I think people – including me – conclude that writing is more important than reading as a career strategy, and prioritize it accordingly. Or that the connection between the two is far less weighted toward the reading side than it once was. Once there is a bias in favor of writing over reading, the writing that is out there contains relatively less that is worth reading, and on it goes. It is true that the nature of law, and its hugely “conventional” aspect, means that legal scholarship inevitably moves on and very little of it can hope to have a long shelf life. That’s different, though, from what I think is going on today, in which people struggling to get their voices heard produce an increasing amount of static as a result. A positive feedback loop.

I have, of course, no evidence for any of this (which raises another point – is it possible that current trends in academic research, toward social science empiricism, for example, are more expensive to produce than traditional legal scholarship, but also more useful even if more expensive?), so I’d be curious to hear what legal academics in particular think goes on in scholarly production. It might just be that I’ve confessed my own solitary scholarly failing, That I Don’t Read Enough, and that everyone else reads day and night before producing new scholarly works. But I sort of doubt it. Also, this is just musing, and I am particularly open to revising my views.

Author: Kenneth Anderson

Journal Metrics

The Impact Factor, devised by Eugene Garfield, is the most widely used method of measuring a journal’s influence. It is calculated as the number of citations received in a given year by articles the journal published in the previous two years, divided by the number of articles published in those same years. Because all citations from the preceding years must be available before the Impact Factor for the following year can be computed, the calculation involves significant overhead. In spite of its drawbacks, the Journal Impact Factor remains a popular way to measure the scientific influence of journals, and most of its variants likewise require citation data for the preceding few years from all indexed journals. Thomson Reuters, which acquired the Institute for Scientific Information founded by Eugene Garfield, is generally credited with introducing the widespread use of the Impact Factor through its annual Journal Citation Reports, which lists the computed IF measures for the popular journals it tracks.
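
By way of illustration, here is a minimal sketch of the two-year calculation; the journal figures are hypothetical and the function is ours, not part of any official tooling.

```python
# Minimal sketch of the two-year Journal Impact Factor.
# All figures below are hypothetical, for illustration only.

def impact_factor(citations_this_year, articles_prev_two_years):
    """Citations received this year by items from the previous two
    years, divided by the citable items published in those years."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 210 citations in 2015 to its 2013-2014 output,
# which comprised 70 + 50 = 120 articles.
print(impact_factor(210, 70 + 50))  # -> 1.75
```

The overhead mentioned above comes from the numerator: assembling it requires citation data from every indexed journal, not just the journal being scored.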

[A minimum citation window of three years is taken into consideration to account for uniformity and to maintain parity in trends between low-impact and high-impact research areas. The metric counts only articles that have been assessed by a common panel through the peer-review process, which provides a fairer impact measurement of the journal and diminishes the chance of manipulation. Even so, the IF is subject to manipulation, and this has drawn heavy criticism from research scholars over the years. Policies set by a journal’s editorial board can greatly influence the Impact Factor, and this can be exploited to artificially boost the measure. Coercive citation, in particular, is a practice in which an editor pressures authors to add redundant citations to articles published by the journal. Activities such as this, while frowned upon, are still used to inflate a journal’s impact factor. This weakens the Impact Factor as an index, because it can readily be manipulated in a journal’s favor by altering citations to and from the journal.]

Another popular source is Elsevier’s Scopus, which indexes a large collection of peer-reviewed scholarly journals across various domains. Scopus uses its database to provide a further journal metric, ranking journals via the SCImago Journal and Country Rank (SJR) portal. The SJR score is computed from the past five years of Scopus data and covers a smaller number of journals; it is claimed that Thomson Reuters’ SCI is somewhat more selective than Scopus. […]

Neelam Jangid, Snehanshu Saha, Siddhant Gupta, and Mukunda Rao J (Jangid et al., 2015; Jangid et al., 2014) introduced a new metric, the Journal Influence Score (JIS), calculated by applying Principal Component Analysis (PCA) and multiple linear regression (MLR) to citation parameters extracted and processed from scholarly articles in different domains, yielding a score that gauges a journal’s impact. The higher the score, the more the journal is valued and accepted. The resulting rankings were compared with SJR ranks, which internally use Google’s PageRank algorithm; the model showed minimal error and performed reasonably well.

[The emergence of JIS is attributed to the fact that, unlike SJR (see below), JIS is conceptually lightweight: it does not require any storage of data and is computationally faster.]
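
As a hedged sketch of what such a pipeline might look like – the citation parameters, synthetic data, and model settings below are illustrative assumptions, not the authors’ actual implementation:

```python
# Sketch of a JIS-style pipeline: compress correlated citation
# parameters with PCA, then fit a multiple linear regression against
# a reference score (here synthetic; the papers compare against SJR).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Rows = journals; columns = hypothetical citation parameters
# (e.g., total cites, cites per document, self-citation rate).
X = rng.random((100, 5))
y = X @ np.array([0.5, 1.2, -0.3, 0.8, 0.1]) + rng.normal(0, 0.05, 100)

pca = PCA(n_components=3)                   # keep the main axes of variation
X_reduced = pca.fit_transform(X)

mlr = LinearRegression().fit(X_reduced, y)  # regress score on components
jis_scores = mlr.predict(X_reduced)         # higher score = more influence
print(jis_scores[:5])
```

The PCA step is what keeps the approach lightweight: only a handful of components, rather than a full citation graph, feed the regression.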

The general drawback of the models proposed above is that there is no common measure of journal influence across domains. There is also a need to scrutinize practices like self-citation, the easiest way to inflate a journal’s citation counts. Source Normalized Impact per Paper (SNIP), a measure proposed by Henk F. Moed (Moed, 2010), is the ratio of the journal’s citation count per paper to the citation potential in its subject field. Because it is based on citation relationships, it allows direct comparison of journals in different subject domains. SNIP considers only citations from peer-reviewed papers to other peer-reviewed papers.
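
A minimal sketch of the SNIP ratio, with illustrative numbers for two journals in fields of different citation potential:

```python
# Sketch of the SNIP idea: impact per paper, normalized by the
# citation potential of the subject field. Figures are hypothetical.

def snip(citations_per_paper, field_citation_potential):
    """Ratio of a journal's citations per paper to the citation
    potential of its subject field."""
    return citations_per_paper / field_citation_potential

# A journal in a low-citation field vs. one in a high-citation field:
print(snip(2.0, 1.0))  # -> 2.0
print(snip(6.0, 3.0))  # -> 2.0: equal impact once the field is factored out
```

This is what permits the direct cross-domain comparison noted above: dividing by field citation potential puts low- and high-citation fields on the same scale.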

h-index

The h-index was proposed by the physicist Jorge E. Hirsch in 2005 as a measure that considers not only the quantity of citations but also their quality: a set of papers has index h if h of them have each received at least h citations. The h-index addresses several of the concerns raised by the other indicators and metrics used in scientometrics.

The Impact Factor considers the entire spectrum of articles within a journal. While this accounts for the quantity of citations used in calculating the metric, it overlooks the quality of individual publications: as noted in the previous section, the Impact Factor divides by the total number of articles published in a journal, which is a poor representation of article quality. It is fallacious to assume that the entire population of articles in a journal is “good” or “impactful.” The h-index instead considers a subset of this population, counting only those articles whose citation counts are greater than or equal to their positional index.
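
A minimal sketch of that computation, with hypothetical citation counts:

```python
# Sketch of the h-index: rank papers by citations in descending order
# and find the largest rank h whose paper has at least h citations.

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, cites in enumerate(ranked, start=1):
        if cites >= position:  # citation count still meets the positional index
            h = position
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # -> 3: one blockbuster paper cannot inflate h
```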

While the h-index is a relatively useful metric, it does have limitations. First and foremost, it does not account for citations across fields within a domain, even though citations within and across domains could reveal more about quality; this omission is a major disadvantage. The h-index can also be exploited through unfair practices like self-citation. Consider a scenario in which the articles published in a journal cite themselves: the citation counts of those articles increase, and some are likely to rise above their positional index, thereby increasing the h-index. This undermines exactly the notion of “quality” that makes the h-index an improvement over the Impact Factor.

There is a variant of the h-index introduced by Google as part of the indicators developed for Google Scholar. Usually referred to as the h5-index, it restricts the calculation defined above to journal citation data from the most recent complete five-year period.
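
A minimal sketch of the five-year restriction, assuming hypothetical (year, citations) pairs and a fixed reference year:

```python
# Sketch of the h5-index: the h-index computed only over articles
# published in the five most recent complete calendar years.

def h5_index(articles, current_year=2016):
    window = sorted((cites for year, cites in articles
                     if current_year - 5 <= year < current_year),  # 2011-2015
                    reverse=True)
    return sum(1 for position, cites in enumerate(window, start=1)
               if cites >= position)

articles = [(2010, 40), (2012, 9), (2013, 7), (2014, 6), (2015, 3)]
print(h5_index(articles))  # -> 3: the heavily cited 2010 paper is excluded
```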

SCImago Journal Rank (SJR)

The key principle that sets SJR apart from other metrics is its inherent acknowledgment that citations are dynamic in nature and that no two citations are the same. As the long list of metrics described above makes evident, the quantity of citations a scholarly publication receives is not, on its own, sufficient to measure the influence of a journal. Hence, several metrics, SJR among them, also take into account the quality of these citations when evaluating the impact of a journal.
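
As a hedged illustration of the PageRank-style intuition behind SJR – a citation from a prestigious journal transfers more prestige than one from an obscure journal – here is a power-iteration sketch over a hypothetical three-journal citation matrix; the real SJR algorithm adds normalization and weighting details not reproduced here:

```python
# Sketch of prestige propagation over a citation graph, PageRank-style.
import numpy as np

# C[i, j] = citations from journal i to journal j (hypothetical counts).
C = np.array([[0, 4, 1],
              [2, 0, 3],
              [5, 1, 0]], dtype=float)

out = C / C.sum(axis=1, keepdims=True)  # share of each journal's outgoing cites
d = 0.85                                # damping factor, as in PageRank
prestige = np.ones(3) / 3               # start with uniform prestige
for _ in range(100):
    prestige = (1 - d) / 3 + d * out.T @ prestige

print(prestige / prestige.sum())        # normalized prestige scores
```

Under this scheme, two journals with identical citation counts can earn different scores, because what matters is who is doing the citing.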

Authors: Sudeepa Roy Dey, Archana Mathur, Gambhire Swati Sampatrao, Sandesh Sanjay Gade, Sai Prasanna M S [India]

Resources

See Also

Journal

Further Reading

1. Buela-Casal, G., Perakakis, P., Taylor, M., & Checa, P. (2006). Measuring internationality: Reflections and perspectives on academic journals. Scientometrics, vol. 67, no. 1, pp. 45–65.
2. Perakakis, P., Taylor, M., Buela-Casal, P., & Checa, P. (2006). A neuro-fuzzy system to calculate a journal internationality index. In: Proceedings of the CEDI symposium.
3. Henk F. Moed (July 2010). “Measuring contextual citation impact of scientific journals”. Journal of Informetrics, vol. 4, no. 3, pp. 265–277.
4. Frank Pancho Aviles & Ivonne Saidé Ramirez (2015). Evaluating the Internationality of Scholarly Communications in Information Science Publications. iConference 2015 Proceedings.
5. Neelam Jangid, Snehanshu Saha, Anand Narasimhamurthy, & Archana Mathur (2015). Computing the Prestige of a Journal: A Revised Multiple Linear Regression Approach. WCI, ACM Digital Library (accepted), Aug 10–13, 2015.
6. Neelam Jangid, Snehanshu Saha, Siddhant Gupta, & Mukunda Rao J (2014). Ranking of Journals in Science and Technology Domain: A Novel and Computationally Lightweight Approach. IERI Procedia, Elsevier, vol. 10, pp. 57–62.
7. Gunther K. H. Zupanc (2014). Impact beyond the impact factor. Journal of Comparative Physiology A, 200:113–116, Springer.
8. Ludo Waltman, Nees Jan van Eck, Thed N. van Leeuwen, & Martijn S. Visser (2013). Some modifications to the SNIP journal impact indicator. Journal of Informetrics, 7, 272–285.
10. Walt Crawford (July 2014). “Journals, ’Journals’ and Wannabes: Investigating The List”. Cites & Insights, 14:7.


Comments

14 responses to “Scholarly Journal”

  1.

    But wouldn’t an annotated law review article help you find more direct cites? I fairly regularly follow a link in a blog comment to a news article which links to a study, then cite the study instead of the comment or article.

  2.

    Citing a law review article is usually a sign of weakness: i.e., I cannot find a court, a restatement, or even a respectable treatise that agrees with me about what the law is. This is particularly true if there is any significant caselaw on the point.

  3. Orin Kerr

    Off the top of my head, I would think that more writing means more reading, not less reading. If you’re going to write in an area, you have to have read the key works in the area: The more you write, the more you will need to have read.

    I suppose one of the questions is what professors who don’t write do all day. Do they spend their days reading law review articles? That’s not my sense, at least: I tend to find that the professors who aren’t writing much are less aware of the scholarship in the field than those that are writing more. That’s my sense, at least: These sorts of conclusions are no doubt impressionistic.

  4.

    Thank you, thank you for making the point that law professors don’t read enough! This is probably the flip side of increased writing requirements in recent years. While Orin thinks that 1 article a year is an exaggeration, I suspect that for those who entered the academy in the last 10 years it is pretty accurate. And when law professors feel compelled to churn out that amount of writing every year, they have less opportunity to read widely and mull over what they’ve read. Sadly, I expect this is a trend that will continue, as the “new normal” becomes an article a year, and those who want to write less will be pushed by tenure standards, the promise of summer research funds, etc., to keep up producing writing that–as many commenters here have noted–is quite often not read. Many of us would be both better teachers and scholars if we just produced 1 carefully crafted, reflective article every other year; but I don’t think that will become a trend any time soon.

  5.

    In college, in medical school, and in law school, most of my best teachers were also academically productive. The names of David Gregory, Michael Simons, Brian Tamanaha, and Timothy Zick come readily to mind. The excellent teachers who didn’t write much tended to be productive in areas other than teaching and practice – they were editors, served on significant committees, etc. That isn’t to say they were good teachers because they wrote, though I believe that to be the case. It may be that their teaching abilities sprang from the same need to communicate ideas to others that drove them to write.

  6.

    Assuming for the sake of discussion that law students’ work is cited at an even lower rate, what is the point of publishing it, as opposed to just writing it (for which there presumably is still a value)? Seems like many trees are dying in vain.

    I think the question is whether articles are read, not whether they are cited. In my experience, lots of articles are read but not cited. As for trees, most articles are read electronically, and paper journals have very low circulation, so I don’t see that as a major problem.

  7.

    Some would bet that 1–2 articles in law journals per year is average for law professors.

    Others believe the actual average is much, much less than that. My sense is that a typical professor at a typical school who has been teaching for 20 years probably has an average of somewhere around 10 publications.

  8.

    One study from 2005 indicates that 46 percent of law review articles are never cited elsewhere, that 80 percent get 10 or fewer citations, and that 1 percent of the articles get 96 percent of the citations. Assuming this is correct – which it is not – it indicates that at least half, if not three quarters, of the money spent on publishing articles is apparently wasted. The teachers should therefore focus on teaching.

    I think it is easy to misunderstand that study. That study counted the proportion of legal publications in the lexis data base, including student notes and case comments, that are never cited. It did not count the proportion of articles by law professors that are never cited.

  9. James Madison

    Most of the stuff professors write about has absolutely nothing to do with the intro Torts, Contracts, Civ Pro, and Evidence classes they actually teach. IMHO, the knowledge of most “academics” as to the subject matter they actually teach (as opposed to what they want to teach, or think they teach) is limited to staying a chapter ahead of the students, and to having taken the class themselves in law school.

    The injection of federal student loans into law schools has created an unsustainable class of professional “academics” who are completely divorced from reality. They write twaddle back and forth to each other to bump up their page counts and pooh-pooh practitioners and anyone who writes anything that might be useful to a practitioner – like torts, contracts, civ pro, etc. As a consequence, law students leave law school knowing nothing about the practice of law, but everything about Karl Llewellyn’s wife’s vision of the UCC.

    This type of knowledge is only useful at cocktail parties and in the most arcane legal arguments, the kind that may come up once in a lifetime. Some “academic” will tell you it can’t hurt to know it, but at $40k/yr, it is excruciatingly painful to know it.

    A market correction is inevitable.

  10.

    As a practitioner with a varied practice, I will buck the popular trend and say that I find academic articles useful.

    For the practitioner, law review articles are a great way to get a quick taste of a lot of different cases. If I’m pleading (and/or briefing) something unique, I need to know the particular differences between very similar causes of action (e.g., what do courts say is the difference between ultrahazardous activity and unreasonably dangerous activity?). What are the contours of various affirmative and equitable defenses? How has the IRS treated a similar transaction across the country? What do all the other federal circuits say on this issue of criminal procedure or sentencing?

    Even if a law review article is never cited by a court, much less by other law professors, it may have been used by hundreds of practitioners as an aggregation of cases on a topic or to make the same arguments to an appellate court. These uses aren’t measured (and perhaps aren’t easily measurable), but that does not mean they don’t exist.

  11. Harry

    Kenneth is correct that academia cares more about writing than about reading. Indeed, writing matters most for those newest to academia, who have often spent years outside of it and need time to readjust. Because of that readjustment period, paradoxically, these new hires are probably the teachers with the least knowledge and the least time to read. They have to plan their courses instead, which takes far more time during the early years.

    I think part of the solution rests in having no publication expectations from new faculty until, say, three years after one begins teaching. Those years are for in-depth reading and class preparation. Once the new faculty member has had a chance to familiarize herself with the academic literature and case law, her ideas are likely to be better developed and better written. Simply put, there would be significantly less scholarship and better scholarship. Just my two cents.

  12. Allan

    IMHO, the question is not how much law professors are paid, but who pays for the law professors’ work and who receives value from it. Indeed, this could be said of all professors in general.

    If the purpose of law school is to produce lawyers, I am unsure of the value added of scholarship to the law school experience. Really, does publishing an article make one’s teaching any better than simply researching the issues and teaching?

    On the other hand, an argument can be made that the publishing contributes to the knowledge base of the world as a whole and is, therefore, worthwhile in and of itself.

    Assuming that publication does benefit the world, writers should be compensated. But why should the compensators be university students? That is, why should university students (as opposed to society as a whole) foot the entire bill, when all they get is teaching?

    I would note that much of the money paid to non-law-school professors comes from grants (especially in the hard sciences), so students in those disciplines are not paying the full bill. Further, graduate students in most disciplines (business, law, and medicine being the exceptions) are subsidized, either through direct stipends or through teaching assistantships.

    The bottom line is that hiring good teachers and providing a law school’s infrastructure costs much, much less than what law schools charge. A good portion of the tuition goes to paying for professor scholarship.

    I don’t know if it is a good thing or bad. But it is what it is.

  13.

    It does not really, truly matter for tenure purposes if you have a “formally designated lead article” in any journal. What matters is that you have published in top journals (American Political Science Review, American Journal of Political Science, Presidential Studies Quarterly, etc.), and that you have published widely. But having a lead article in, say, Public Administration Review or the International Journal of Feminist Studies is a laurel that is not only acceptable to put on a CV, but also expected.

  14.

    The first thing to note is that “lead article” is only valuable for journals that don’t alphabetize. For example, Presidential Studies Quarterly alphabetizes its articles, but the articles precede its “features.” “Lead article” by itself, without qualification, is completely misleading and definitely should not be put on a CV.

    Also, HeinOnline, I believe, reproduces the tables of contents for law journals; if a piece is a “featured article,” HeinOnline would presumably indicate that. However, I am not as familiar with HeinOnline as I am with JSTOR and other databases.

    Moreover, it sounds as if law reviews’ “featured articles” simply aren’t as important as “featured articles” in other disciplines, and it is possible that a few R&T committees not only don’t care about the designation (which is understandable) but would actually penalize an applicant for even mentioning it on a CV. It is this penalty component that I still don’t really understand. Professor Kerr’s remarks seem more prudent: viz., it’s all about context, as special issues are different from ordinary issues; symposia are different from issues that do not contain symposia (as Prof. Bernstein notes in a comment above); and certain published articles may simply be unique enough to warrant some added explanation of why they were “featured articles.”

    Nevertheless, regardless of whether one chooses Professor Bernstein’s or Professor Kerr’s approach, it seems to me that the legal academic world generally does not give much attention to whether law review editors believe an article is particularly noteworthy (unless specific circumstances dictate otherwise).

    The comments indicating that law reviews mainly use “lead article” status to lure scholars’ papers away from other journals (rather than to mark true scholarly significance) are interesting. The policy sounds similar to lower-tier law schools offering full-tuition merit scholarships to admits who have probably also received top-10 law school acceptances.

    I suppose the legal academic world deviates from the medical and A&S academic worlds in many respects besides this “featured article” situation (e.g., the aforementioned and widely known student editing). I wonder which differences (besides “featured articles” and student editing) are most salient to understanding how legal academic publishing works.
