
Fusion-in-Decoder: Achieving state-of-the-art open-domain QA performance

Writer Team   |  September 1, 2021

Open-domain question answering (QA) has recently made significant progress, with large Transformer-based generative models demonstrating impressive performance. However, these models are computationally expensive to train and query, which limits their practical application. In this whitepaper, we introduce an approach to open-domain QA that combines the strengths of retrieval and generative models, aiming for more efficient and accurate question answering.

Our approach, termed Fusion-in-Decoder (FiD), retrieves supporting passages and feeds them to a sequence-to-sequence model that generates the answer. This method achieves state-of-the-art results on benchmarks such as Natural Questions and TriviaQA, and offers a highly scalable framework for aggregating and combining evidence from multiple passages.
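To make the fusion step concrete, below is a minimal sketch of the FiD information flow using Hugging Face's off-the-shelf t5-small checkpoint. The question and passages are toy placeholders, and a stock checkpoint is not fine-tuned for FiD, so the generated answer is illustrative only; the point is the mechanism: each (question, passage) pair is encoded independently, then the decoder attends over the concatenation of all encoder outputs.

```python
# Minimal sketch of Fusion-in-Decoder inference with Hugging Face T5.
# The passages below are placeholders; in practice they come from a
# retriever such as BM25 or DPR. An off-the-shelf t5-small is not
# fine-tuned for FiD, so the output is illustrative only.
from transformers import T5Tokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

question = "Where was Alan Turing born?"
passages = [
    "Alan Turing was born in Maida Vale, London, in 1912.",
    "Turing studied mathematics at King's College, Cambridge.",
]

# Step 1: encode each (question, passage) pair independently.
inputs = [f"question: {question} context: {p}" for p in passages]
enc = tokenizer(inputs, return_tensors="pt", padding=True, truncation=True)
encoder_out = model.encoder(
    input_ids=enc.input_ids, attention_mask=enc.attention_mask
)

# Step 2: fuse in the decoder. Flatten the per-passage representations
# into one long sequence of shape (1, n_passages * seq_len, hidden) so
# the decoder jointly attends over evidence from every passage.
hidden = encoder_out.last_hidden_state
fused = hidden.view(1, -1, hidden.size(-1))
fused_mask = enc.attention_mask.view(1, -1)

answer_ids = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=fused),
    attention_mask=fused_mask,
    max_length=20,
)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```

Encoding passages independently keeps the encoder's cost linear in the number of passages; the decoder's joint attention over the fused sequence is what lets the model aggregate evidence across all of them.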

Key takeaways and findings:

  • The FiD method effectively combines retrieval and generative models, achieving state-of-the-art results on benchmarks such as Natural Questions and TriviaQA.
  • The method scales efficiently with the number of retrieved passages: it retrieves supporting passages and processes them with a sequence-to-sequence model to generate answers.
  • Experiments show that FiD surpasses competing methods in Exact Match (EM) and F1 scores across multiple datasets (a sketch of the EM metric follows this list).
  • The research examines the impact of different retrieval methods (sparse vs. dense) and generative models (T5 vs. BART), as well as the influence of the number of retrieved passages on QA performance.
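For reference, the Exact Match scores cited above follow the standard SQuAD-style protocol: a prediction counts as correct if, after light normalization, it matches any gold answer exactly. A short sketch (function names here are illustrative, not from the paper's code):

```python
# SQuAD-style Exact Match: lowercase, strip punctuation and English
# articles, collapse whitespace, then compare against each gold answer.
import re
import string

def normalize_answer(s: str) -> str:
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    pred = normalize_answer(prediction)
    return any(pred == normalize_answer(g) for g in gold_answers)

print(exact_match("the Maida Vale", ["Maida Vale", "Maida Vale, London"]))  # True
```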

The Fusion-in-Decoder research is a milestone because it takes a new approach to open-domain question answering: by combining retrieval and generative models, it achieves state-of-the-art results while efficiently aggregating evidence from many passages to produce accurate answers.