Too much or too little? Web of Science, Scopus, Google Scholar: three databases compared (a case study)


Ezio Tarantino


The BIDS (the digital library project-service of "La Sapienza") was established at the "La Sapienza" University of Rome in 1999. In its first year it acquired 10 databases and signed its first large consortium contract with a major publisher of electronic periodicals (Elsevier). The project has gradually grown and today includes about 90 databases and over 15,000 electronic periodicals. Among those first ten databases was the Web of Science, produced by the Institute for Scientific Information (ISI).
At the time, the Web of Science (WoS) was the only source that, besides providing a rich bibliographic database, also functioned as a citation indexing tool, and as such it was much appreciated by academic users.
In 2004 Elsevier proposed that we try its new Scopus database, at no cost and with no commitment on our part. The interest lay not in acquiring another multidisciplinary database, which the digital library did not need, but in gaining an up-to-date citation indexing tool. Our first comparative analysis therefore concerned the number of periodicals covered by Scopus, to see whether, on that basis, it might be possible to cancel some of the disciplinary databases to which we subscribed.
In the meantime, November 2004 saw the appearance of Google Scholar, Google's new search engine for academic literature. It was not the first tool of its kind: one must at least mention the sectorial experience of CiteSeer, and Scirus, the search engine specialized in scientific documentation, also produced by Elsevier and integrated into Scopus. But the popularity, authoritativeness and ambition of Google's project immediately made it a formidable competitor. Thus, in the space of two months, WoS found itself facing not one but two competitors, one of which, moreover, was free of charge.
An investigation was therefore carried out, not to establish the superiority of one database over another (that would require a careful cross-examination of all the comparative literature that has been appearing with some regularity in recent months), but rather to show how these tools, however powerful and on the whole more than satisfactory in their results, cannot guarantee uniformity, consistency, or real trustworthiness. The entry into the market (or into the free, open territory of the net, as in Google's case) of new actors highlights the gaps in any citation index, whatever agency compiles it, and so warns users of its inevitable lack of precision. This relativization of automatic indexing mechanisms (so important, especially in shaping academic careers) should be considered, at least at the level of awareness, a very positive result.
