Abstract
Machine translation (MT) for Faroese faces challenges due to limited expert annotators and a lack of robust evaluation metrics. This study addresses these challenges by developing an MQM-inspired expert annotation framework to identify key error types, and a simplified crowd evaluation scheme that enables broader participation. Our findings, based on an analysis of 200 sentences translated by three models, demonstrate that simplified crowd evaluations align with expert assessments, paving the way for more accessible and democratized MT evaluation.
| Original language | English |
| --- | --- |
| Pages | 42-47 |
| Number of pages | 6 |
| Publication status | Published - 2025 |
| Event | NB-REAL – Nordic-Baltic Responsible Evaluation and Alignment of Language models, Tallinn, Estonia, 2 Mar 2025 → 2 Mar 2025, https://nbreal.xyz/ |
Workshop
| Workshop | NB-REAL – Nordic-Baltic Responsible Evaluation and Alignment of Language models |
| --- | --- |
| Abbreviated title | NB-REAL |
| Country/Territory | Estonia |
| City | Tallinn |
| Period | 2/03/25 → 2/03/25 |
| Internet address | https://nbreal.xyz/ |
Keywords
- Machine translation