
NTCIR-12 Core Task: "Spoken Query and Spoken Document Retrieval 2 (SpokenQuery&Doc-2)"

Introduction

The NTCIR-12 SpokenQuery&Doc-2 task evaluates spoken document retrieval driven by spontaneously spoken queries. Current information retrieval frameworks face a bottleneck at the human interface when drawing out a user's information need. The SpokenQuery&Doc task tries to overcome this by making use of spontaneously spoken queries. One advantage of speech as an input method for retrieval systems is that it lets users easily submit long queries, giving systems rich clues for retrieval: unconstrained speech is common in daily life and is the most natural and effortless way to express one's thoughts. The target document collection also consists of spoken documents.

Data Set

The lecture speech data, recordings of the annual Spoken Document Processing Workshop (the SDPWS data set), will be used as the target documents in SpokenQuery&Doc-2. For this speech data, both manual and automatic transcriptions (under several ASR conditions) will be provided. These enable researchers who are interested in SDR but lack access to their own ASR system to participate in the tasks.

Transcription

Standard STD and SDR methods first transcribe the audio signal into a textual representation using Large Vocabulary Continuous Speech Recognition (LVCSR), and then perform text-based retrieval over the transcripts (a minimal sketch of this pipeline is given after the list below). Participants can use the following three types of transcription.

  1. Manual transcription

    This transcription is mainly used for evaluating upper-bound retrieval performance.

  2. Reference Automatic Transcriptions

    The task organizers will provide reference automatic transcriptions for the target speech data. These enable researchers who are interested in SDR but lack access to their own ASR system to participate in the tasks. They also enable comparisons of IR methods based on the same underlying ASR performance.

    The textual representation will be provided both as n-best lists of word or syllable sequences, depending on which of the two background ASR systems produced them, and as lattices.

    1. Word-based transcription

      Obtained using a word-based ASR system, i.e., one whose language model is a word n-gram model. Along with the textual representation, the vocabulary list used by the ASR system is also provided; it determines the distinction between the in-vocabulary (IV) and out-of-vocabulary (OOV) query terms used in our STD subtask (see the IV/OOV check in the sketch after this list).

    2. Syllable-based transcription

      Obtained using a syllable-based ASR system. A syllable n-gram model is used as the language model, and the vocabulary consists of all Japanese syllables. Using this transcription avoids the OOV problem in spoken document retrieval, so participants who want to focus on open-vocabulary STD and SDR can use it.

  3. Participant's own transcription

    Participants can use their own ASR systems for transcription. To keep the same IV and OOV conditions, their word-based ASR systems are recommended, but not required, to use the same vocabulary list as our reference transcription. Participants who use their own transcriptions are encouraged to provide them to the organizers for future SpokenDoc test collections.
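
Below is the minimal sketch referenced above of the two-stage pipeline (transcribe, then retrieve) together with the IV/OOV split used in the STD subtask. It is a sketch under assumed, simplified conditions: the vocabulary-file layout (one word per line), the toy documents, and the counting-based scoring are hypothetical placeholders, not the organizers' distribution format or a baseline system.

  # Minimal sketch: text-based retrieval over 1-best ASR transcripts,
  # plus the IV/OOV split of query terms against the ASR vocabulary.
  # All file formats and data below are hypothetical placeholders.
  from collections import defaultdict

  def load_vocabulary(path):
      # Assumed format: one word per line (the actual list format may differ).
      with open(path, encoding="utf-8") as f:
          return {line.strip() for line in f if line.strip()}

  def classify_query_terms(terms, vocabulary):
      # IV terms appear in the ASR vocabulary; OOV terms do not.
      iv = [t for t in terms if t in vocabulary]
      oov = [t for t in terms if t not in vocabulary]
      return iv, oov

  def build_inverted_index(transcripts):
      # transcripts: doc_id -> list of recognized words (1-best hypothesis).
      index = defaultdict(set)
      for doc_id, words in transcripts.items():
          for word in words:
              index[word].add(doc_id)
      return index

  def retrieve(query_terms, index):
      # Toy scoring: rank documents by the number of matched query terms.
      scores = defaultdict(int)
      for term in query_terms:
          for doc_id in index.get(term, ()):
              scores[doc_id] += 1
      return sorted(scores.items(), key=lambda kv: -kv[1])

  # Toy data standing in for the SDPWS transcripts and vocabulary list.
  vocabulary = {"音声", "検索", "認識"}
  transcripts = {"lecture01": ["音声", "認識"], "lecture02": ["音声", "検索"]}
  iv, oov = classify_query_terms(["音声", "検索", "索引"], vocabulary)
  print(oov)                                          # ['索引']
  print(retrieve(iv, build_inverted_index(transcripts)))

By definition, OOV terms never occur in the word-based transcripts, so a word-level index like the one above cannot find them; this is why the syllable-based transcription is provided, where an OOV term can instead be matched as a syllable sequence against the syllable transcripts.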

Task Description

Schedule

2015-02-27                NTCIR-12 kick-off event
2015-11-16 to 2015-11-23  Dry run
2015-12-07                Formal run
2016-02-01                Evaluation result release

Organizers

  • Tomoyosi Akiba (Toyohashi University of Technology)
  • Hiromitsu Nishizaki (University of Yamanashi)
  • Hiroaki Nanjo (Ryukoku University)
  • Gareth Jones (Dublin City University)

Registration

The registration form is available on the official NTCIR-12 page.

Link

