More efficient manual review of automatically transcribed tabular data

Author(s)

  • Bjørn-Richard Pedersen, UiT The Arctic University of Norway
  • Rigmor Katrine Johansen, Department of Health and Care Sciences, UiT The Arctic University of Norway
  • Einar Holsbø, Department of Computer Science, UiT The Arctic University of Norway
  • Hilde Sommerseth, Norwegian Historical Data Centre, UiT The Arctic University of Norway
  • Lars Ailo Bongo, Department of Computer Science, UiT The Arctic University of Norway
  • Kay Pepping

DOI:

https://doi.org/10.51964/hlcs15456

Keywords:

Population census data, Machine Learning, Historical data, Manual review, Occupation codes, Norway 1950, Norwegian population data, Efficient manual review, Automatically transcribed, Tabular data, Manual review and correction, Interviews, Norwegian occupation data, Historical occupation data, Norwegian Historical Data Centre, UiT The Arctic University of Norway

Abstract

Machine learning methods have proven useful in transcribing historical data. However, results from even highly accurate methods require manual verification and correction. Such manual review can be time-consuming and expensive; the objective of this paper was therefore to make it more efficient.

Previously, we used machine learning to transcribe 2.3 million handwritten occupation codes from the Norwegian 1950 census with high accuracy (97%). We manually reviewed the 90,000 codes (3%) with the lowest model confidence, allocating them to human reviewers who reviewed the codes with our annotation tool. To assess reviewer agreement, some codes were assigned to multiple reviewers. We then analyzed the review results to understand the relationship between accuracy improvements and effort. Additionally, we interviewed the reviewers to improve the workflow.
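
The selection-and-allocation step described above can be pictured with a short sketch. This is purely illustrative, not the pipeline used in the paper; the data layout, reviewer names, and overlap fraction are all assumptions.

```python
import random

def split_for_review(predictions, review_fraction=0.03, overlap_fraction=0.10,
                     reviewers=("reviewer_a", "reviewer_b", "reviewer_c"), seed=42):
    """Select the lowest-confidence predictions and assign them to reviewers.

    `predictions` is a list of (image_id, predicted_code, confidence) tuples.
    A random subset of the selected items is also given to a second reviewer
    so that inter-reviewer agreement can be measured afterwards.
    """
    rng = random.Random(seed)
    # Keep the bottom `review_fraction` of predictions by model confidence.
    ranked = sorted(predictions, key=lambda p: p[2])
    to_review = ranked[:int(len(ranked) * review_fraction)]

    assignments = {r: [] for r in reviewers}
    for i, item in enumerate(to_review):
        primary = reviewers[i % len(reviewers)]  # round-robin allocation
        assignments[primary].append(item)
        if rng.random() < overlap_fraction:
            # Duplicate this item to a second reviewer for agreement checks.
            secondary = rng.choice([r for r in reviewers if r != primary])
            assignments[secondary].append(item)
    return assignments
```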

The reviewers corrected 62.8% of the labels and agreed with the model's label in 31.9% of cases. About 0.2% of the images could not be assigned a label, and for the remaining 5.1% the reviewers were either uncertain or assigned an invalid label. 9,000 images were independently reviewed by multiple reviewers, yielding 86.43% agreement and 8.96% disagreement.
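
Assuming double-reviewed images are stored as pairs of labels, with None marking an uncertain or missing label, agreement and disagreement rates like those above could be computed along these lines. This is a sketch, not the authors' analysis code.

```python
from collections import Counter

def agreement_stats(double_reviews):
    """Classify double-reviewed images as agree / disagree / uncertain
    and return the share of each category in percent.

    `double_reviews` maps image_id -> (label_a, label_b); a label of None
    means the reviewer was uncertain or could not assign a code.
    """
    counts = Counter()
    for label_a, label_b in double_reviews.values():
        if label_a is None or label_b is None:
            counts["uncertain"] += 1
        elif label_a == label_b:
            counts["agree"] += 1
        else:
            counts["disagree"] += 1
    total = sum(counts.values())
    return {category: 100 * n / total for category, n in counts.items()}

# Toy example: one agreement, one disagreement, one uncertain label.
print(agreement_stats({
    "img_001": ("111", "111"),
    "img_002": ("111", "112"),
    "img_003": ("121", None),
}))
```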

We learned that our automatic transcription is biased towards the most frequent codes, with a higher misclassification rate for the least frequent codes. The interviews showed that the reviewers performed their own internal quality control and found our custom tool well suited to the task. A single reviewer therefore suffices, provided that they report their uncertainty.
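
A frequency bias of this kind could be surfaced by grouping codes into frequency buckets and comparing misclassification rates; again a hypothetical sketch with an arbitrary bucket threshold, not the analysis performed in the paper.

```python
from collections import Counter

def error_rate_by_frequency(true_codes, predicted_codes, rare_threshold=100):
    """Report the misclassification rate separately for rare and common codes.

    A code counts as 'rare' if it occurs fewer than `rare_threshold` times in
    the ground-truth labels; the threshold is an arbitrary illustration value.
    """
    freq = Counter(true_codes)
    buckets = {"rare": [0, 0], "common": [0, 0]}  # [errors, total]
    for true, pred in zip(true_codes, predicted_codes):
        bucket = "rare" if freq[true] < rare_threshold else "common"
        buckets[bucket][0] += int(true != pred)
        buckets[bucket][1] += 1
    return {name: errors / total
            for name, (errors, total) in buckets.items() if total > 0}
```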

Published

2024-10-17

Section

Articles

How to Cite

More efficient manual review of automatically transcribed tabular data. (2024). Historical Life Course Studies, 7. https://doi.org/10.51964/hlcs15456