Scholarly publications evaluation: from the printed literature to the Open Archives
Abstract
The evaluation of publications is crucial to establishing the value of authors and institutions.
The competition between universities, research groups and individuals to obtain funding, recognition and tenure makes each competitor's publication list the first important credential to exhibit.
Evaluation tools may be qualitative or quantitative; bibliometrics refers to the quantitative ones.
Citation analysis first expanded and came into use through Eugene Garfield's information studies, dating back to the 1950s. He built the Science Citation Index as an instrument to study and evaluate scientific research, though even at that time some authors warned him against the possible misuse of citation indicators.
A very useful review by Borgman and Furner is reported and discussed: evaluative link analysis may help to understand when a citation is made for personal or irrational reasons rather than as a genuine reference to an important previous paper.
The ISI Impact Factor is widely used to evaluate research in many countries and institutions; but even in the biomedical area (where it is well known and extensively used) many criticise the IF as inappropriate for evaluating research and as strongly biased by the tendency of American journals to cite each other.
A new perspective is offered by Open Archives Initiative tools for evaluating the usage of self-archived papers. In a digital library it is possible to trace every access and use; indicators can thus be much more precise about readers' activity (making it possible to compute a Reading Factor instead of just the citation-based Impact Factor). Very recently the OpCit Project launched the CiteBase software, which shows the usage and citation of self-archived papers: it therefore answers the evaluation needs of researchers who publish not only in commercial journals but also in the Open Archives.
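Both indicators mentioned above are, at their core, simple ratios. The sketch below illustrates the idea with the standard two-year Impact Factor definition and a hypothetical downloads-per-paper Reading Factor; all figures and function names are illustrative assumptions, not data or code from the article:

```python
# Illustrative sketch of the two kinds of indicator discussed above.
# All numbers are hypothetical, for demonstration only.

def impact_factor(citations_recent: int, citable_items_recent: int) -> float:
    """Two-year Impact Factor: citations received in year Y to items
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_recent / citable_items_recent

def reading_factor(downloads: int, papers: int) -> float:
    """A usage-based indicator: average accesses (downloads) per
    self-archived paper over a given period."""
    return downloads / papers

# Hypothetical journal: 450 citations to 180 citable items.
print(f"Impact Factor: {impact_factor(450, 180):.2f}")   # 2.50
# Hypothetical archive: 12000 downloads of 300 papers.
print(f"Reading Factor: {reading_factor(12000, 300):.1f}")  # 40.0
```

The point of the contrast is that the Reading Factor counts every traced access in the digital library, whereas the Impact Factor counts only formal citations, which arrive much later and only from other authors.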
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.