Busting Myths of Academic Publishing
Dr. Debashish Sengupta, Associate Professor, Alliance Business School and Program Director of Alliance Ascent College, Alliance University, Bangalore
'Publish or perish' has long been the rule of the academic world, and academicians who fail to publish find themselves at a disadvantage, ranging from low or no salary increments to denial of tenure or even loss of a job. Every academician is therefore bound to research and publish, and research output counts among the most important KPIs of any academic. This wholesale need to publish has produced an enormous rise in academic publishing: the number of peer-reviewed journals is estimated at close to 30,000, churning out nearly two million articles every year, or twenty million every decade. That is quite a number, isn't it?
Has such growth boosted academic scholarship? Perhaps. But has this mountain of published research benefitted the real world, which is, after all, what research is for? Two million articles a year should be enough to transform our planet, yet do we really see that happening? Or has academic publishing been largely reduced to a publication-and-subscription cycle, read mainly by those who themselves need to publish and then left to be forgotten in the archives? This article takes a critical look at the world of academic publishing and aims to bust its myths and poor practices, thereby bringing a greater level of objectivity to research and publication and insisting on practical application, nothing less.
Myth 1: All Publications in Peer-Reviewed Journals and Conferences Are 'Human' Works
Should I have written 'original' works instead of 'human' works?
Trust me, I have not committed a typo. In 2005, a group of MIT graduate students decided to goof off in a very MIT graduate student way: they created SCIgen, a program that randomly generates nonsensical computer-science papers, complete with realistic-looking graphs, figures and citations. Jeremy Stribling MS '05 PhD '09, Dan Aguayo '01 MEng '02 and Max Krohn PhD '08 spent a week or two between class projects building it; SCIgen was inspired by the online study guide SparkNotes, which Krohn had developed. A paper generated by the program made its way to an international conference, where it was accepted for publication. After its 'smart' authors revealed the secret, the conference lost its major sponsor and the academic publishing world lay exposed. The students' prank laid bare an uncomfortable truth about a large part of academic publishing and called its credibility into question. Since then, thanks to SCIgen, computer-written gobbledygook has been routinely published in scientific journals and conference proceedings. One may be entertained that computer programs are now good enough to produce passable gibberish, but what is alarming and sad is that many journals and conferences continue to accept such papers without scrutinizing them critically.
Myth 2: All Top-Rated Peer-Reviewed Journals Make Sound Editorial Decisions
The French computer scientist Cyril Labbé has detected more than 100 algorithmically generated articles in the catalogues of two major scientific publishers. In 2012 he also flagged a batch of 85 fake articles with one of them. On realizing their error, these two much-respected publishers decided to retract the papers after the fact. Labbé has likewise catalogued many computer-generated articles presented at 'high-rated' international conferences between 2008 and 2013. Beyond the computer-generated articles, plenty of meaningless paper drudgery also makes its way into some really 'good' peer-reviewed journals. A lengthy literature review with a long list of citations makes a paper look impressive and deep, and given the easy availability of electronic databases and the citation features of modern word processors, compiling such a list is not difficult. That Labbé could find such editorial lapses in top-rated peer-reviewed journals shows that the fault lines run much deeper than we perceive. It would therefore be wrong to treat every 'top'-rated peer-reviewed publication as the crème de la crème.
Myth 3: Publication in a top-tier journal alone certifies good research
Currently, journals and publications are rated using the 'impact factor', treated as the single most critical measure of any journal. But should it really be the most important yardstick? Consider this. The impact factor, devised by the information scientist Eugene Garfield in the 1950s, measures how often the articles a journal publishes are cited. Given the mad race to publish, and the availability of algorithms, electronic databases and advanced documentation tools, millions of academicians around the world are citing previously published literature every year. Papers by authors who have made their way into top journals tend to be cited more, as such citations are perceived to carry greater value. In a scenario where many authors use citations chiefly to impress, blindly copying them from the papers of their accomplished peers, will the impact factor always be a true measure? Garfield became a millionaire by inventing the impact factor, but should we not have more qualitative measures for assessing a journal or a research paper? Should we not look at the kind of 'impact' a journal has made over the years in bridging the gap between knowledge and practice?
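For the record, the arithmetic behind the metric is trivially simple, which is part of why it is so easy to game. The widely used two-year impact factor for year Y divides the citations received in Y (to items the journal published in Y-1 and Y-2) by the number of items published in those two years. A minimal sketch, with made-up numbers for a hypothetical journal:

```python
def two_year_impact_factor(citations: int, citable_items: int) -> float:
    """Two-year impact factor for year Y.

    citations: citations received in year Y to items the journal
        published in years Y-1 and Y-2.
    citable_items: number of citable items the journal published
        in years Y-1 and Y-2.
    """
    return citations / citable_items

# Hypothetical journal: 600 citations in 2016 to the 150 + 130
# articles it published in 2014 and 2015.
print(round(two_year_impact_factor(600, 150 + 130), 2))  # 2.14
```

Note that the formula says nothing about why a paper was cited, whether the citation was ever read, or whether the cited work had any practical impact, which is precisely the author's point.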
Besides, what about the exciting journals that are relatively new but are doing a great job of editing, reviewing and publishing? It will take time for them to be ranked, but that does not mean they cannot publish good research. Believing otherwise is like believing that good products and innovations can come only from the big established companies, when in fact the innovation landscape has been altered far more by start-ups. Google, founded only in 1998, is perhaps one of the most exciting and innovative companies we have today.
Myth 4: All Non-Peer-Reviewed Journals Lack Credibility
Any non-peer-reviewed journal is treated as junk, or as lacking academic rigour; in other words, we believe that the collective wisdom of a group of peers makes better judgements than an editor alone. I am not in favour of putting peer-reviewed publications on such a high pedestal, though neither am I championing non-peer-reviewed publications out of hand. The yardstick for a research journal has to be quality and application alone, not whether it is peer reviewed. Many academicians may disagree with me, but perhaps this very attitude is the reason Gregor Mendel, the father of genetics, was recognized for his work only after his death. Mendel, a priest by profession, conducted experiments on pea plants that allowed him to postulate the laws of inheritance, which hold true even today. His only 'mistake' was to publish them in an 'obscure' journal. More than three decades later, and well after his death, Erich von Tschermak, Hugo de Vries, Carl Correns and William Jasper Spillman independently verified several of Mendel's experimental findings, ushering in the modern age of genetics. Mendel finally got his due, but it is sad that the class system among journals prevented his work from spreading and being recognized earlier. It is said that even Charles Darwin learned of Mendel's work only much later; otherwise genetics would have been a much older science!
I have personally encountered non-peer-reviewed publications that in no way lack quality or substance; yet when an academician publishes in one of them, he receives little or no credit for the work in academic circles. PM World Journal is one such publication: partnered by none other than the Project Management Centre of Excellence of the University of Maryland, it publishes some of the top-quality papers and works on project management. Widely accessed and respected by project management professionals, the associated PM World Library offers free subscription and access to professionals in the developing and underdeveloped countries of the world.
Myth 5: Empirical research papers command greater rigour and credibility
A primary grader who is good at mathematics is invariably branded the sharpest, most intelligent and most brilliant student, while those good in literature or history are generally seen as children who can merely memorise material and work hard to reproduce it in examinations. This perceptual bias extends from school right into the world of research, where empirical papers are perceived by many to be superior to qualitative research papers. The truth is far from this perceived 'reality'. To say that one approach is "better" than the other trivializes what is a far more complex topic than a dichotomous choice can settle. Both quantitative and qualitative research rest on rich and varied traditions drawn from multiple disciplines, and both have been employed to address almost any research topic one can think of. Qualitative research also gathers data, just not in numerical form. Analysing qualitative data is difficult and demands an accurate description of participant responses; unlike quantitative or empirical analysis, it cannot simply be handed over to statistical software (such as SPSS or R) or spreadsheet applications (such as MS Excel).
Myth 6: All published work is good research
All published work is not good research, and all good research is not necessarily published. Recognizing this is important for bringing greater objectivity to the 'rigour' of academic research, because this proudly touted 'academic rigour' often makes the publishing process painfully long and, many times, a meaningless journey. The present system of peer review is founded on critique, yet the review comments that authors are expected to incorporate frequently lack direction and serve editorial fulfilment more than they improve the paper or enhance its applicability. On the other hand, I have seen from very close quarters consultants doing far faster, sharper research to decide product and app launches involving big money, and getting it right. They skip the academic rigmarole, yet come out winners where the stakes are much higher. I am not trying to discard rigour in academic research, but stretching it too far only makes such research less objective and more self-serving.
Myth 7: Academicians are always good researchers
Academicians are expected to be good researchers. Shouldn't they be? But the ground realities in countries like India are different. Higher-education institutions are teaching schools more than research centres, so everything from the hiring to the compensation of teachers and professors is geared to teaching rather than research. No wonder that even after all these years, the salaries of professors and teachers are no match for corporate compensation; the system does not necessarily attract the brightest and the best into the academic profession. There are outlier private institutions, but in general the focus is on getting professors at low cost and making them teach, which also means there is no dedicated time for research. Research funding in countries like India is almost non-existent. Added to this is the pressure to publish: academicians are expected to show published research output every year to earn increments and promotions. Jens Skou, a 1997 Nobel laureate, put it this way in his Nobel biographical statement: today's system puts pressure on scientists for "too fast publication, and to publish too short papers, and the evaluation process uses a lot of manpower. It does not give time to become absorbed in a problem as the previous system did." Another Nobel laureate, Peter Higgs (Physics, 2013), echoed the sentiment: "Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough." In short, a system that generally attracts average researchers, gives them neither time nor money for research, and at the same time expects them to produce fast published output can only result in poor research and poor peer reviews.
It has also spurred growing numbers of low-quality "predatory publishers" who spam researchers with weekly "calls for papers" and charge steep fees for articles that they often do not even read before accepting.
Myth 8: Tightly controlled academic publication is the future
With evidence of deep dysfunction within the existing tightly controlled, peer-reviewed publications, their reputation is fast on the wane, and such anomalies have affected the rank and file of research journals all over the world. The assumption that high-ranked peer-reviewed journals necessarily produce high-end research no longer holds. Research needs to be deregulated and made more socially accountable: engaging civil society in accessing research, and in exacting accountability from it, is the need of the hour. The application value of research as felt by civil society should be the real barometer of quality; in such a scenario, poor research will automatically fizzle out. The future of research publication has to be more "open", less reliant on pre-publication peer review, and self-supported by scholars. Research must also be freely accessible to all, so that its usefulness and application value are not judged by a 'privileged' few. The World Bank, for instance, recently launched its Open Knowledge Repository, which implements an open-access policy for the Bank's research outputs and knowledge products.
Debashish Sengupta
Dr. Debashish Sengupta completed his Ph.D. in Management from the Central University of Nicaragua (UCN). He is the co-author of the Crossword bestseller and KPMG-cited book 'Employee Engagement', and has authored five other acclaimed books. His research papers have been featured in leading international journals, and his strategic and practical insights guide leaders of large and small organizations worldwide through his teaching, writing and direct consultation to major corporations and governments.