Crowdsourcing metrics of digital collections
DOI: https://doi.org/10.18352/lq.10090
Keywords: digitized collections, crowdsourcing, metrics, monitoring
Abstract
The National Library of Finland (NLF) holds millions of digitized newspaper and journal pages, which are openly available via the public website http://digi.kansalliskirjasto.fi. To serve users better, the front end was completely overhauled last year, with crowdsourcing features as its main aim, e.g., giving end-users the opportunity to create digital clippings and a personal scrapbook from the digital collections. But how can you know whether crowdsourcing has had an impact? How much have the crowdsourcing functionalities been used so far? Did crowdsourcing work? In this paper the statistics and metrics of a recent crowdsourcing effort are analysed across the different digitized material types (newspapers, journals, ephemera). The subjects, categories and keywords given by the users are analysed to see which topics are the most appealing. Some notable public uses of the crowdsourced article clippings are highlighted. These metrics give us indications of how the end-users, based on their own interests, are investigating and using the digital collections. The suggested metrics therefore illustrate the versatility of the users' information needs, which vary from citizen science to research purposes. By analysing the usage patterns, we can respond to new user needs by making minor changes that accommodate the most active participants, while still making the service more approachable for those who are trying out the functionalities for the first time. Participation in the clippings and annotations can enrich the materials in unexpected ways and can pave the way for wider use of crowdsourcing in research contexts. This also creates opportunities for the goals of open science, since source data becomes available, making it possible for researchers to reach out to the general public for help. In the long term, utilizing, for example, text mining methods can allow these different end-user segments to achieve more.
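The keyword analysis described above can be sketched in a few lines. The sample data, field layout and keyword values below are hypothetical illustrations, not drawn from the NLF service:

```python
from collections import Counter

# Hypothetical sample of user-created clippings: (material type, user keyword).
# In practice these records would be exported from the digi service.
clippings = [
    ("newspaper", "advertisements"),
    ("newspaper", "local history"),
    ("journal", "poetry"),
    ("newspaper", "advertisements"),
    ("ephemera", "local history"),
]

# Keyword frequency reveals the most appealing topics overall,
# and a per-material-type tally shows where activity concentrates.
keyword_counts = Counter(kw for _, kw in clippings)
by_type = Counter(mt for mt, _ in clippings)

print(keyword_counts.most_common(2))
print(by_type)
```

The same pattern extends to the categories and subjects users assign, giving a simple quantitative view of which parts of the collections attract crowdsourced attention.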
Based on our initial experience, we feel that crowdsourcing gives a library the opportunity to get closer to its user base and to gain insight into the numerous opportunities that the digitized content provides, both for the users and for the library. Gathering the first prototype qualitative and quantitative metrics for this particular crowdsourcing case shows how to further improve both the service and the metrics so that they can provide valid information for decision-making.
Published: 2015-12-04
License
Copyright (c) 2015 Tuula Pääkkönen
This work is licensed under a Creative Commons Attribution 4.0 International License.
How to Cite
Crowdsourcing metrics of digital collections. (2015). LIBER Quarterly: The Journal of the Association of European Research Libraries, 25(2), 41-55. https://doi.org/10.18352/lq.10090