Abstract
We present FoQA, a Faroese extractive question-answering (QA) dataset with 2,000 samples, created using a semi-automated approach combining Large Language Models (LLMs) and human validation. The dataset was generated from Faroese Wikipedia articles using GPT-4-turbo for initial QA generation, followed by question rephrasing to increase complexity and native speaker validation to ensure quality. We provide baseline performance metrics for FoQA across multiple models, including LLMs and BERT, demonstrating its effectiveness in evaluating Faroese QA performance. The dataset is released in three versions: a validated set of 2,000 samples, a complete set of all 10,001 generated samples, and a set of 2,395 rejected samples for error analysis.
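
As a rough illustration of how a released extractive-QA dataset like FoQA might be used for baseline evaluation, the sketch below loads the data from the Hugging Face Hub and scores an off-the-shelf extractive-QA model with the standard SQuAD exact-match/F1 metrics. The Hub identifier, split name, column names (`question`, `context`, `answers`), and the baseline checkpoint are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: score an extractive-QA baseline on FoQA with SQuAD-style
# exact-match and F1. Dataset ID, split, columns, and model are assumptions.
from datasets import load_dataset
from transformers import pipeline
import evaluate

# Assumed Hub identifier and split for the validated 2,000-sample version;
# adjust to wherever the dataset is actually hosted.
foqa = load_dataset("alexandrainst/foqa", split="train")

# Any extractive-QA checkpoint works here; a multilingual SQuAD-tuned model
# is used purely as a placeholder baseline.
qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

predictions, references = [], []
for i, example in enumerate(foqa.select(range(10))):  # small demo subset
    pred = qa(question=example["question"], context=example["context"])
    predictions.append({"id": str(i), "prediction_text": pred["answer"]})
    # Assumes SQuAD-format answers: {"text": [...], "answer_start": [...]}
    references.append({"id": str(i), "answers": example["answers"]})

# The SQuAD metric reports exact match and token-level F1, the usual
# evaluation measures for extractive QA.
squad = evaluate.load("squad")
print(squad.compute(predictions=predictions, references=references))
```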
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the Third Workshop on Resources and Representations for Under-Resourced Languages and Domains (RESOURCEFUL 2025) |
| Place of Publication | Tallinn |
| Publisher | University of Tartu Library |
| Pages | 48-57 |
| Number of pages | 10 |
| Publication status | Published - 2025 |
| Externally published | Yes |
Keywords
- LLM
- Large language models
- datasets