What's Wrong With This Translation? Simplifying Error Annotation For Crowd Evaluation

Research output: Contribution to conference › Paper › peer-review

Abstract

Machine translation (MT) for Faroese faces challenges due to limited expert annotators and a lack of robust evaluation metrics. This study addresses these challenges by developing an MQM-inspired expert annotation framework to identify key error types, alongside a simplified crowd evaluation scheme that enables broader participation. Our findings, based on an analysis of 200 sentences translated by three models, demonstrate that simplified crowd evaluations align with expert assessments, paving the way for more accessible and democratized MT evaluation.
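The alignment claim can be made concrete: one common way to compare the two annotation schemes (the abstract does not specify the method, so this is purely an illustrative assumption) is to rank-correlate per-sentence quality scores from crowd and expert annotators. A minimal Python sketch with hypothetical data:

```python
# Minimal sketch, not from the paper: correlate per-sentence scores from
# expert MQM-style annotation with scores from a simplified crowd scheme.
# Metric choice (Spearman's rank correlation) and all values are
# illustrative assumptions.
from scipy.stats import spearmanr

# Hypothetical quality scores for the same six translated sentences:
# expert scores derived from weighted error counts, crowd scores from
# the simplified error-flagging scheme.
expert_scores = [0.9, 0.4, 0.7, 0.2, 0.8, 0.5]
crowd_scores = [0.85, 0.5, 0.65, 0.3, 0.9, 0.45]

rho, p_value = spearmanr(expert_scores, crowd_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A high rank correlation on held-out sentences would indicate that the cheaper crowd scheme preserves the relative quality ordering that experts produce.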
Original language: English
Pages: 42-47
Number of pages: 6
Publication status: Published - 2025
Event: NB-REAL – Nordic-Baltic Responsible Evaluation and Alignment of Language models - Tallinn, Estonia
Duration: 2 Mar 2025 - 2 Mar 2025
https://nbreal.xyz/

Workshop

Workshop: NB-REAL – Nordic-Baltic Responsible Evaluation and Alignment of Language models
Abbreviated title: NB-REAL
Country/Territory: Estonia
City: Tallinn
Period: 2/03/25 - 2/03/25
Internet address: https://nbreal.xyz/

Keywords

  • Machine translation
